How photoswitches encode visual information
Single-cell and multi-electrode array recordings of photoswitch-mediated light responses in retinas at different stages of degeneration.
Photoswitches — synthetic, light-activated small molecules delivered by intravitreal injection — can confer new light sensitivity on retinas whose photoreceptors have died. Early-phase clinical trials have already shown vision restoration in human subjects with severe retinal degeneration. The next question is not whether the approach works, but what kind of vision it produces and how that vision compares to native sight.
This thread of the lab is a circuit-level dissection of photoswitch-mediated responses:
- Whole-cell patch clamp of individual retinal neurons (RGCs, bipolar cells, amacrine cells) lets us measure the kinetics, sensitivity, and signal-to-noise ratio of photoswitch-driven currents at the resolution of single cells.
- Multi-electrode array (MEA) recordings capture population-level activity across hundreds of ganglion cells simultaneously — including correlations and ensemble structure that single-cell recordings can’t reveal.
- Machine-learning analysis of those population responses lets us ask quantitative questions about how much visual information is recoverable, and which features of natural scenes survive the encoding.
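To make the last point concrete, here is a minimal sketch of the kind of population-decoding question we ask of MEA data: given trial-by-cell spike counts, how well can a simple classifier recover which stimulus was shown? All data below are simulated, and the nearest-centroid decoder stands in for the richer machine-learning models used in practice; the function and variable names are illustrative, not part of any lab codebase.

```python
# Toy illustration: decode stimulus identity from simulated MEA
# population spike counts with a nearest-centroid classifier.
# Real analyses operate on recorded ganglion-cell spike trains.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_trials = 64, 100

# Simulated firing rates: each stimulus drives a different subset
# of ganglion cells (stimulus B's rate profile is a shifted copy).
rates_a = rng.uniform(2, 10, n_cells)
rates_b = np.roll(rates_a, n_cells // 2)

# Poisson spike counts, one row per trial, one column per cell.
trials_a = rng.poisson(rates_a, (n_trials, n_cells))
trials_b = rng.poisson(rates_b, (n_trials, n_cells))

# "Train" on the first half of trials, test on the held-out half.
train_a, test_a = trials_a[:50], trials_a[50:]
train_b, test_b = trials_b[:50], trials_b[50:]
centroid_a, centroid_b = train_a.mean(axis=0), train_b.mean(axis=0)

def decode(trial):
    """Assign a trial to the nearer class centroid (Euclidean)."""
    d_a = np.linalg.norm(trial - centroid_a)
    d_b = np.linalg.norm(trial - centroid_b)
    return "A" if d_a < d_b else "B"

correct = sum(decode(t) == "A" for t in test_a) + \
          sum(decode(t) == "B" for t in test_b)
accuracy = correct / (len(test_a) + len(test_b))
print(f"decoding accuracy on held-out trials: {accuracy:.2f}")
```

The same framing scales up directly: swap the simulated counts for real MEA responses and the centroid rule for a regularized classifier or decoder network, and the held-out accuracy becomes a quantitative measure of how much visual information the photoswitch-driven population code actually carries.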
We perform these recordings in retinas at multiple stages of degeneration, in animal models that recapitulate the circuit changes seen in human disease, so that what we learn translates to the patients who would actually receive these therapies.
Trainees in this thread typically have backgrounds in retinal electrophysiology, computational neuroscience, or both. Replace this placeholder note with specific projects, figures, and citations as the work develops.