Neuroscience Center, Carolina Institute for Developmental Disabilities & Department of Cell Biology & Physiology, University of North Carolina-Chapel Hill
Abstract: The visual system transforms pixel-level data into a perception of the world and our place in it. This computation is complex and challenging even for state-of-the-art machine vision algorithms. However, efficient solutions have evolved in nature. For example, despite being relatively small mammals with low-spatial-resolution vision, mice are adept at visually guided behavior and have at least six higher visual areas. We are using new technology to measure neuronal activity in multiple higher visual areas simultaneously in mice and to explore neural coding distributed across cortical areas. Neuronal activity can be measured using two-photon calcium imaging; however, in commercially available systems, the approach has been limited to small fields of view, smaller than a single cortical area. To overcome this limitation, we have developed new multiphoton imaging systems that provide panoramic views of the brain, > 12 mm², covering many cortical areas. We use time-multiplexed, flexibly positionable multi-beam scan engines to image multiple brain areas simultaneously and to capture neural coding distributed across them. I will share our recent work on neural coding in higher visual areas, the development of this circuitry, and psychophysical measurements of mouse visual perception. We hope this work will help uncover not only the machinery of visually guided behavior in mice, but also computational principles with potential generality.