In our daily lives, we are constantly engaged in the complex tasks of detecting and recognizing objects. Each day brings similar, but never identical, sensory experiences. The face of my colleague, whom I may have known for several years and can recognize easily, is never quite the same. Because the viewing conditions of the moment, the particular facial expressions, and the contexts are never identical, the projected image of his face on my retina, and the corresponding activation patterns of the photoreceptor cells in my eyes, will invariably differ. The seemingly simple and effortless job of recognizing a face therefore turns out to be a very difficult computational task. Our brain has to find the underlying constancy, such as the identity of a person, in a continually changing, never-constant stream of sensory inputs. It solves this problem (and many more) rapidly and reliably at every moment.

How does our brain accomplish this task? Our laboratory studies this marvelous information-processing capability of the cortex, aiming to understand the fundamental biophysical and computational mechanisms of neural systems.

Our brain is composed of an enormous number of neurons (roughly 10^11, more than ten times the current world population). Each neuron is itself a complex bio-electrochemical device, with ionic conductances, membrane capacitance, an electric potential difference across the membrane, and a mixture of biochemical molecules. These neurons are interconnected via synapses and communicate with one another by generating electrical pulses and by releasing and absorbing neurotransmitters. They form delicate biophysical circuits through which inputs from the external environment are processed and reliable outputs are generated.
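
To give a concrete (if drastically simplified) sense of this biophysical description, the sketch below simulates a single leaky integrate-and-fire neuron: a capacitor in parallel with a leak conductance that emits a pulse whenever the membrane potential crosses a threshold. This is a textbook toy model, not one of the laboratory's models, and all parameter values are illustrative.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the membrane is treated as a
# capacitor in parallel with a leak conductance, and a spike is emitted
# whenever the membrane potential crosses a fixed threshold.
# All parameter values below are illustrative.

def simulate_lif(I, dt=1e-4, C=200e-12, g_leak=10e-9,
                 E_leak=-70e-3, V_thresh=-50e-3, V_reset=-65e-3):
    """Simulate the membrane potential V(t) driven by an input current I (amperes)."""
    V = np.full(len(I), E_leak)
    spike_times = []
    for t in range(1, len(I)):
        # C dV/dt = -g_leak * (V - E_leak) + I(t)
        dV = (-g_leak * (V[t - 1] - E_leak) + I[t - 1]) * dt / C
        V[t] = V[t - 1] + dV
        if V[t] >= V_thresh:           # threshold crossing -> spike
            spike_times.append(t * dt)
            V[t] = V_reset             # reset after the spike
    return V, spike_times

# Constant 300 pA step current for 0.5 s
I = np.full(5000, 300e-12)
V, spike_times = simulate_lif(I)
print(f"{len(spike_times)} spikes in 0.5 s")
```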

Different combinations of excitatory and inhibitory synapses can produce the various neural response patterns observed in neurophysiological experiments. Certain response patterns approximate a template-matching operation, which can discriminate between different input patterns. Others implement a competitive, inhibitory operation. Such neural circuits may be the elementary building blocks of larger cortical networks, and an important step in understanding our perceptual experiences is to analyze their computational roles. Some of the research projects in our laboratory focus on identifying and analyzing these basic units of neural computation and circuitry.
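
As a rough illustration (not the laboratory's specific circuit models), the sketch below implements simplified versions of the two operations mentioned above: a Gaussian-like template match and a soft-max form of competitive selection.

```python
import numpy as np

# Two elementary operations often used to summarize nonlinear neural responses.
# Both are simplified, illustrative forms.

def template_match(x, w, sigma=0.5):
    """Gaussian-like tuning: the response peaks when the input pattern x
    matches the stored template w and falls off with distance."""
    return np.exp(-np.sum((x - w) ** 2) / (2 * sigma ** 2))

def max_like(x, beta=10.0):
    """Soft-max selection: a competitive operation that approaches the
    maximum of the inputs as beta grows (winner-take-all in the limit)."""
    e = np.exp(beta * (x - np.max(x)))   # subtract the max for numerical stability
    return np.sum(x * e) / np.sum(e)

template = np.array([1.0, 0.2, 0.8])
print(template_match(np.array([0.9, 0.3, 0.7]), template))  # close to 1
print(template_match(np.array([0.0, 1.0, 0.0]), template))  # near 0
print(max_like(np.array([0.1, 0.9, 0.4])))                  # close to 0.9
```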

Other research projects deal with how the brain develops and learns. Recognizing an object, or performing any perceptual task, can be regarded as a learning problem in which the brain must infer meaningful correlations and patterns from its vast sensorimotor experience. Our research explores learning rules that can equip neural networks to deal effectively with natural input patterns under biophysical constraints (such as energy consumption, spatio-temporal limits, and input resolution).
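
One well-known example of such a rule, used here purely as an illustration rather than as the specific rule studied in these projects, is Oja's variant of Hebbian learning: a built-in normalization keeps the synaptic weights bounded (a simple stand-in for a resource constraint), and the weights converge toward the principal component of the inputs.

```python
import numpy as np

# Oja's rule: Hebbian learning with a decay term that keeps the weight
# vector bounded.  On zero-mean inputs, the weights converge (up to sign)
# toward the first principal component of the input distribution.
rng = np.random.default_rng(0)

# Synthetic correlated 2-D Gaussian inputs, standing in for natural input statistics
cov = np.array([[3.0, 1.5],
                [1.5, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

w = rng.normal(size=2)
eta = 0.01                        # learning rate
for x in X:
    y = w @ x                     # postsynaptic activity
    w += eta * y * (x - y * w)    # Hebbian term minus normalization term

# Compare with the leading eigenvector of the input covariance
eigvals, eigvecs = np.linalg.eigh(cov)
print("learned direction  :", w / np.linalg.norm(w))
print("leading eigenvector:", eigvecs[:, -1])
```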

Because the brain is a highly nonlinear system, studying its complex dynamical behavior requires quantitative models. Using high-performance computers, our laboratory is developing and refining a detailed computational model of the visual system. With a computational model as an experimental platform, it is possible to study the biophysical processes of neural circuits, to make direct comparisons with existing neurophysiological data, to generate hypotheses for future experiments, and to investigate the essential computational principles and mechanisms of neural systems. Such a model may even be viable as a machine vision application. Building computational models is one of the primary research techniques of our laboratory.

What makes a biological neural system so effective and successful in dealing with the richly complex natural world? The answer to this large and important question is not yet known, but our understanding of the underlying neural mechanisms will continue to deepen as neuroscience research advances and as technological breakthroughs and ideas from physics, chemistry, biology, mathematics, computer science, psychology, and engineering come together. Our laboratory hopes to contribute to this collective effort, which should lead to a better appreciation of what intelligence is and (perhaps) even to ways of enhancing it.

Courses

Related to these research projects, there is an upper-level interdisciplinary course titled “Computational Modeling of Neural Systems” (cross-listed in the physics and neuroscience curricula as PHYS/NEURO-111). It covers some of the most exciting developments and challenges in the field of computational neuroscience, while introducing computational modeling techniques and several important ideas from physics and mathematics. Students with different backgrounds and interests are welcome; those who take the class major not only in physics or neuroscience but also in mathematics, computer science, and biochemistry.

Estimating the LN Model

Stimulus-Spike Distribution

We compare a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramér-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data. (Kouh and Sharpee, 2009)
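
For the order-1 case (the Kullback-Leibler divergence, equivalent to the single-spike information), the objective can be estimated directly from binned projections of the stimuli onto a candidate dimension. The sketch below does this on synthetic data; the function name and the simulated LN neuron are illustrative stand-ins for an actual recording.

```python
import numpy as np

# Single-spike information carried by one stimulus dimension v:
#   I(v) = sum_x P(x | spike) * log2[ P(x | spike) / P(x) ],
# where x is the stimulus projected onto v.  The relevant dimension of an
# LN model can be found by maximizing this quantity (order-1 Renyi divergence).
# Synthetic example; in practice the stimuli and spike counts come from data.
rng = np.random.default_rng(1)

def spike_info(v, stimuli, spike_counts, n_bins=20):
    x = stimuli @ v / np.linalg.norm(v)                           # project stimuli onto v
    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    p_raw, _ = np.histogram(x, bins=bins)                         # P(x)
    p_spk, _ = np.histogram(x, bins=bins, weights=spike_counts)   # P(x | spike)
    p_raw = p_raw / p_raw.sum()
    p_spk = p_spk / p_spk.sum()
    mask = (p_spk > 0) & (p_raw > 0)
    return np.sum(p_spk[mask] * np.log2(p_spk[mask] / p_raw[mask]))

# Simulated LN neuron: 10-dimensional Gaussian stimuli, one relevant filter,
# sigmoidal nonlinearity, Poisson spike counts
true_filter = rng.normal(size=10)
stimuli = rng.normal(size=(20000, 10))
rate = 1.0 / (1.0 + np.exp(-(stimuli @ true_filter - 1.0)))
spike_counts = rng.poisson(rate)

print("info along true filter    :", spike_info(true_filter, stimuli, spike_counts))
print("info along random dimension:", spike_info(rng.normal(size=10), stimuli, spike_counts))
```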

Canonical Neural Circuit

Canonical Neural Circuit for Object Recognition

A few distinct cortical operations have been postulated over the past few years, suggested by experimental data on nonlinear neural responses across different areas in the cortex. Among these, the energy model proposes the summation of quadrature pairs following a squaring nonlinearity in order to explain phase invariance of complex V1 cells. The divisive normalization model assumes a gain-controlling, divisive inhibition to explain sigmoid-like response profiles within a pool of neurons. A Gaussian-like operation hypothesizes a bell-shaped response tuned to a specific, optimal pattern of activation of the presynaptic inputs. A max-like operation assumes the selection and transmission of the most active response among a set of neural inputs. We propose that these distinct neural operations can be computed by the same canonical circuitry, involving divisive normalization and polynomial nonlinearities, for different parameter values within the circuit. Hence, this canonical circuit may provide a unifying framework for several circuit models, such as the divisive normalization and the energy models. As a case in point, we consider a feedforward hierarchical model of the ventral pathway of the primate visual cortex, which is built on a combination of the Gaussian-like and max-like operations. We show that when the two operations are approximated by the circuit proposed here, the model is capable of generating selective and invariant neural responses and performing object recognition, in good agreement with neurophysiological data. (Kouh and Poggio, 2008)
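
A schematic version of such a circuit can be written as a weighted polynomial sum divided by a normalization pool, y = (Σ w_j x_j^p) / (k + (Σ x_j^q)^r). The sketch below, with illustrative parameter values and following the general form described in the paper, shows how different exponents make the same expression behave like Gaussian-like tuning or like max-like selection.

```python
import numpy as np

# Canonical circuit of the form
#   y = ( sum_j w_j * x_j**p ) / ( k + ( sum_j x_j**q )**r ),
# i.e., a weighted polynomial sum divided by a normalization pool.
# Different exponents (p, q, r) yield qualitatively different operations.
# Parameter values here are illustrative only; inputs are assumed nonnegative.

def canonical(x, w, p, q, r, k=1e-6):
    return np.sum(w * x**p) / (k + np.sum(x**q) ** r)

x = np.array([0.2, 0.9, 0.5])        # presynaptic inputs

# Gaussian-like tuning: normalized dot product (p=1, q=2, r=1/2).
# The response is largest when x points in the same direction as the
# template w, regardless of the overall input magnitude.
w = np.array([0.2, 0.9, 0.5])
print("tuning, matched input   :", canonical(x, w, p=1, q=2, r=0.5))
print("tuning, scaled input    :", canonical(3 * x, w, p=1, q=2, r=0.5))
print("tuning, different input :", canonical(np.array([0.9, 0.1, 0.1]), w, p=1, q=2, r=0.5))

# Max-like selection: uniform weights with p = q + 1 (here p=3, q=2, r=1).
# The ratio is dominated by the largest input, and the approximation to the
# true maximum sharpens as the exponents increase.
ones = np.ones(3)
print("max-like                :", canonical(x, ones, p=3, q=2, r=1))
print("actual max              :", x.max())
```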