Date: October 13, 2021, 3:00–4:30pm

Title: Learning what the world looks like from first-person video: challenges and opportunities

Location: Online

Speaker: Michelle Greene

Abstract:

A central tenet of sensory neuroscience is that the visual system is honed through evolution and/or development to have exquisite sensitivity to the statistics of its input. A full account of vision would therefore require characterizing these statistics. Although such statistics have been measured in both large- and small-scale image datasets, existing databases are not tuned to the statistics of natural visual experience. In this talk, I will introduce the genesis of the Visual Experience Database (VEDB), the largest database of first-person video with head- and eye-tracking. I will then describe two projects that leverage this unique dataset.

Bio:

Michelle Greene received their Ph.D. from MIT under Aude Oliva, where they explored the time course of visual scene understanding. After a brief postdoc with Jeremy Wolfe examining visual search in scenes, they joined the Stanford AI Lab to work with Fei-Fei Li at the intersection of human and computer vision. Dr. Greene has been an Assistant Professor of Neuroscience at Bates College since 2017.