I received my BS in Cognitive Science from Carnegie Mellon University (1994), where I worked on problems in artificial intelligence with Herb Simon, and my PhD from Caltech (2002), in the lab of Mark Konishi, where I studied the neural basis of learning and motor control in birdsong. Large portions of this work were done at Bell Labs with Michael Fee. After leaving Caltech, I did a postdoc with Markus Meister (Harvard University), studying the neural circuits underlying motion processing in the amphibian retina. During this time I was a Helen Hay Whitney Fellow and a Burroughs Wellcome Fellow.
Since 2008 I have been a Group Leader at Janelia. Our work focuses on understanding the principles underlying neural information processing: in short, how neurons collectively solve interesting behavioral problems. We currently pursue these questions in the salamander and the dragonfly, where we are developing mechanistic descriptions of prey capture in terms of the underlying neural circuit dynamics.
Selected honors include the Lindsley Prize in Behavioral Neuroscience, the Capranica Foundation Prize in Neuroethology, and a Grass Fellowship.
Prior Publications (5)
Zebra finch song is represented in the high-level motor control nucleus high vocal center (HVC) (Reiner et al., 2004) as a sparse sequence of spike bursts. In contrast, the vocal organ is driven continuously by smoothly varying muscle control signals. To investigate how the sparse HVC code is transformed into continuous vocal patterns, we recorded in the singing zebra finch from populations of neurons in the robust nucleus of arcopallium (RA), a premotor area intermediate between HVC and the motor neurons. We found that highly similar song elements are typically produced by different RA ensembles. Furthermore, although the song is modulated on a wide range of time scales (10-100 ms), patterns of neural activity in RA change only on a short time scale (5-10 ms). We suggest that song is driven by a dynamic circuit that operates on a single underlying clock, and that the large convergence of RA neurons to vocal control muscles results in a many-to-one mapping of RA activity to song structure. This permits rapidly changing RA ensembles to drive both fast and slow acoustic modulations, thereby transforming the sparse HVC code into a continuous vocal pattern.
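The many-to-one mapping invoked above can be illustrated with a toy linear readout (an illustrative assumption; the abstract does not specify the actual RA-to-muscle transform): two distinct "RA" population patterns that differ only along directions orthogonal to a convergent weight vector produce exactly the same muscle drive.

```python
import numpy as np

rng = np.random.default_rng(0)

n_ra = 20              # toy "RA" population size (hypothetical)
w = rng.random(n_ra)   # convergent weights onto a single muscle (assumption)

# One ensemble activity pattern...
r1 = rng.random(n_ra)
# ...and a second, built by adding a vector orthogonal to the readout
null_vec = rng.random(n_ra)
null_vec -= (null_vec @ w) / (w @ w) * w   # remove the component along w
r2 = r1 + null_vec

assert not np.allclose(r1, r2)     # distinct RA ensemble patterns...
assert np.isclose(w @ r1, w @ r2)  # ...identical muscle drive
```

Under this kind of convergence, rapidly changing ensembles are free to realize both fast and slow output modulations, since many activity trajectories collapse onto the same muscle command.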
When the dimensionality of a neural circuit is substantially larger than the dimensionality of the variable it encodes, many different degenerate network states can produce the same output. In this review I will discuss three different neural systems that are linked by this theme. The pyloric network of the lobster, the song control system of the zebra finch, and the odor encoding system of the locust, while different in design, all contain degeneracies between their internal parameters and the outputs they encode. Indeed, although the dynamics of song generation and odor identification are quite different, computationally, odor recognition can be thought of as running the song generation circuitry backwards. In both of these systems, degeneracy plays a vital role in mapping a sparse neural representation devoid of correlations onto external stimuli (odors or song structure) that are strongly correlated. I argue that degeneracy between input and output states is an inherent feature of many neural systems, which can be exploited as a fault-tolerant method of reliably learning, generating, and discriminating closely related patterns.
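The dimensionality argument above can be made concrete with a minimal linear sketch (the linear readout and the NumPy null-space construction are illustrative assumptions, not details of the systems reviewed): when a readout maps N-dimensional circuit activity onto M < N encoded variables, activity can move anywhere within an (N - M)-dimensional null space without changing the output, and every such displacement is a degenerate network state.

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons, n_outputs = 50, 3  # circuit dimension >> encoded-variable dimension
W = rng.standard_normal((n_outputs, n_neurons))  # hypothetical linear readout

# Null space of W: activity directions the readout cannot see
_, _, vt = np.linalg.svd(W)
null_basis = vt[n_outputs:]   # (n_neurons - n_outputs) degenerate directions

x = rng.standard_normal(n_neurons)  # one network state
x_degenerate = x + null_basis.T @ rng.standard_normal(n_neurons - n_outputs)

assert np.allclose(W @ x, W @ x_degenerate)  # identical encoded output
print(null_basis.shape[0], "degenerate directions")  # prints: 47 degenerate directions
```

The larger the gap between circuit and output dimensionality, the larger this degenerate manifold, which is what makes fault-tolerant learning and discrimination of closely related patterns possible in this picture.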
Adult zebra finches require auditory feedback to maintain their songs. It has been proposed that the lateral magnocellular nucleus of the anterior nidopallium (LMAN) mediates song plasticity based on auditory feedback. In this model, neurons in LMAN, tuned to the spectral and temporal properties of the bird's own song (BOS), are thought to compute the difference between the auditory feedback from the bird's vocalizations and an internal song template. This error-correction signal is then used to initiate changes in the motor system that make future vocalizations a better match to the song template. This model was tested by recording from single LMAN neurons while manipulating the auditory feedback heard by singing birds. In contrast to the model predictions, LMAN spike patterns are insensitive to manipulations of auditory feedback. These results suggest that BOS tuning in LMAN is not used for error detection and constrain the nature of any error signal from LMAN to the motor system. Finally, LMAN neurons produce spikes locked precisely to the bird's song, independent of the auditory feedback heard by the bird. This finding suggests that a large portion of the input to this nucleus is from the motor control signals that generate the song rather than from auditory feedback.
The use of chronically implanted electrodes for neural recordings in small, freely behaving animals poses several unique technical challenges. Because of the need for an extremely lightweight apparatus, chronic recording technology has been limited to manually operated microdrives, despite the advantage of motorized manipulators for positioning electrodes. Here we describe a motorized, miniature, chronically implantable microdrive for independently positioning three electrodes in the brain. The electrodes are controlled remotely, avoiding the need to disturb the animal during electrode positioning. The microdrive is approximately 6 mm in diameter and 17 mm high, and weighs only 1.5 g, including the headstage preamplifier. Use of the motorized microdrive has produced a ten-fold increase in our data yield compared with experiments using a manually operated drive. In addition, we are able to record from multiple single neurons in the behaving animal with signal quality comparable to that seen in a head-fixed anesthetized animal. We also describe a motorized commutator that actively tracks animal rotation based on a measurement of torque in the tether.
Young birds learn to sing by using auditory feedback to compare their own vocalizations to a memorized or innate song pattern; if they are deafened as juveniles, they will not develop normal songs. The completion of song development is called crystallization. After this stage, song shows little variation in its temporal or spectral properties. However, the mechanisms underlying this stability are largely unknown. Here we present evidence that auditory feedback is actively used in adulthood to maintain the stability of song structure. We found that perturbing auditory feedback during singing in adult zebra finches caused their song to deteriorate slowly. This 'decrystallization' consisted of a marked loss of the spectral and temporal stereotypy seen in crystallized song, including stuttering, creation, deletion and distortion of song syllables. After normal feedback was restored, these deviations gradually disappeared and the original song was recovered. Thus, adult birds that do not learn new songs nevertheless retain a significant amount of plasticity in the brain.