PHYSICS DISSERTATION DEFENSE: Nora Brackbill

Ph.D. Candidate: Nora Brackbill

Research Advisor: E.J. Chichilnisky

Date: Thursday, July 2, 2020
Time: 9:30 AM
Zoom Meeting Link: https://stanford.zoom.us/j/93901160328

Zoom Meeting Password: Email mariaf67 [at] stanford.edu with the subject line "PLEASE SEND PWD FOR NORA BRACKBILL DEFENSE".


Title: Natural Image Processing in the Primate Retina

Abstract:

The retina is the first stage of the visual system, where light is absorbed and encoded in the spikes of neurons. Information from ~100 million photoreceptors is compressed to the outputs of ~1 million spiking neurons called retinal ganglion cells (RGCs), whose axons form the optic nerve. This is the brain's only source of visual information, yet the way that the retina utilizes this limited bandwidth is not understood, particularly for primate vision. Traditionally, our understanding of the visual information stored in retinal spike trains has emerged from probing how RGCs respond to simple visual stimuli. Compact computational encoding models, in which many layers of complex retinal circuitry are represented by a linear filter known as the receptive field, have been surprisingly successful at predicting RGC responses to these artificial stimuli. However, these models may not fully reveal visual signaling in natural conditions, with the types of complex structured stimuli that we encounter in everyday life, nor do they reveal how complete populations of the ~20 distinct types of primate RGCs, which encode different visual features, collectively represent these natural stimuli. Here, we investigate natural image processing in the primate retina through the complementary lenses of encoding and decoding. Responses of complete populations of hundreds of primate RGCs were recorded simultaneously using large-scale multi-electrode arrays, representing one of the largest data sets of primate retinal responses to natural stimuli. These recordings revealed diverse encoding properties in response to both artificial and natural stimuli, and consistent visual representations across many retinas.
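For readers less familiar with the encoding models mentioned above, the sketch below illustrates the basic linear-nonlinear (LN) structure: a single linear filter (the receptive field) applied to the stimulus, followed by a pointwise nonlinearity and Poisson spiking. This is a toy illustration under assumed parameters, not the dissertation's actual model or data; all dimensions, the Gaussian filter shape, and the variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only)
side = 40                      # stimulus frames are side x side pixels
n_pixels = side * side
n_frames = 5000

# Linear stage: a Gaussian "receptive field" serving as the linear filter
xx, yy = np.meshgrid(np.arange(side), np.arange(side))
rf = np.exp(-((xx - side / 2) ** 2 + (yy - side / 2) ** 2) / (2 * 3.0 ** 2))
rf = rf.ravel() / np.linalg.norm(rf)

# White-noise stimulus, one flattened frame per row
stimulus = rng.standard_normal((n_frames, n_pixels))

# Project each frame onto the receptive field
drive = stimulus @ rf

# Nonlinear stage: rectification maps filter output to a firing rate
rate = np.maximum(drive, 0.0)

# Spiking stage: Poisson spike counts, one per frame
spikes = rng.poisson(rate)
```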

First, we demonstrate that a commonly used linear-filter-based encoding model is insufficient to explain responses to natural scenes, and in some cases responses to white noise as well. Comparison with a highly flexible neural network encoding model revealed that both rich temporal dynamics and spatial nonlinearities are likely required to explain retinal processing in natural conditions.
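As one concrete example of a spatial nonlinearity that a single linear filter cannot capture, the sketch below contrasts an LN model with a rectified-subunit model, in which filter outputs are rectified before pooling. The subunit model here is a standard textbook construction used only as a stand-in for the flexible neural-network model the dissertation actually compares against; the sizes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a cell pooling four spatial subunits
n_pixels, n_subunits, n_frames = 400, 4, 2000
subunit_filters = rng.standard_normal((n_subunits, n_pixels)) / np.sqrt(n_pixels)
stimulus = rng.standard_normal((n_frames, n_pixels))

# LN model: pool the subunit filters into ONE linear filter, then rectify
pooled_filter = subunit_filters.sum(axis=0)
ln_rate = np.maximum(stimulus @ pooled_filter, 0.0)

# Subunit model: rectify EACH subunit output before pooling.
# Fine spatial structure that excites some subunits and suppresses others
# cancels inside the LN model's single filter, but still drives the
# subunit model -- a purely spatial nonlinearity no linear filter captures.
subunit_rate = np.maximum(stimulus @ subunit_filters.T, 0.0).sum(axis=1)
```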

However, comparing the impact of retinal processing on the resulting spike trains does not readily reveal its impact on the conveyed visual message, nor how the population collectively represents natural scenes. Therefore, we used a second, complementary approach: decoding, in which the stimulus is inferred from the RGC responses. By projecting neural responses into the common stimulus space of the visual image, decoding enables direct comparison and evaluation of the visual signals transmitted downstream. Linear decoding of natural images from the responses of complete RGC populations revealed that the optimal interpretation of the spikes produced by an RGC, i.e., its visual message, depended not only on its encoding properties, but also on the statistics of natural scenes and the spatial arrangement of neighboring RGCs. These factors enabled smoother decoded images than would be expected from the receptive fields alone. In addition, each cell type conveyed different and largely independent features of the visual scene, consistent with their distinct encoding properties. In particular, the roles of the ON and OFF cell classes, which respond primarily to increments and decrements of light, respectively, were investigated through nonlinear transformations and an interaction model.
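A minimal sketch of linear decoding in this spirit is shown below: decoding filters are fit by regularized least squares from population responses to a set of images, and each cell's row of the filter matrix plays the role of its "visual message." The data here are random placeholders, and the ridge-regression fit is an assumed stand-in, not necessarily the dissertation's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes: population responses to a set of (flattened) images
n_images, n_cells, n_pixels = 1000, 200, 400
responses = rng.poisson(2.0, size=(n_images, n_cells)).astype(float)
images = rng.standard_normal((n_images, n_pixels))  # placeholder "natural" images

# Fit linear decoding filters W by ridge regression:
#   minimize ||images - responses @ W||^2 + lam * ||W||^2
lam = 1.0
R, S = responses, images
W = np.linalg.solve(R.T @ R + lam * np.eye(n_cells), R.T @ S)

# Row k of W is cell k's decoding filter: the image contribution the
# decoder attributes to each of that cell's spikes -- its "visual message".
decoded = R @ W
```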

This work represents one of the first large-scale collections of primate retinal responses to natural stimuli. Testing a commonly used encoding model revealed widespread variability across many recordings, while decoding provided a concrete way to evaluate visual information across cells, cell types, and retinas. Collectively, this work will inform understanding of higher visual processing and guide the development of future artificial retinas.