Memory without feedback or attractors in a neural integrator network
Mathematical Biology

Speaker: Mark Goldman, UC Davis
Location: 2112 MSB
Start time: Mon, Feb 25 2008, 2:10 PM
Neural integrators are short-term memory networks that store a running total of their inputs over time. This memory storage is reflected in the persistent activity of single neurons within the network, and the prevailing dogma holds that positive-feedback processes are required for maintaining such persistent activity. Here I show instead how neural integration and the generation of persistent neural activity can be performed through purely feedforward processes, either literally in a feedforward network or by propagating activity in a feedforward manner through different states of a recurrent network. Unlike traditional attractor models of memory storage, the networks maintain persistent firing activity without long-lasting eigenmodes of activity. In this talk, I will show how the feedforward character of such networks can be revealed through an alternative to traditional eigenvalue/eigenvector analysis known as the Schur decomposition, and present neuronal data that may be best described by a network with an underlying feedforward dynamical structure.
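The abstract's central claims, that a purely feedforward chain can hold a memory trace despite having no long-lasting eigenmodes, and that the Schur decomposition (unlike an eigendecomposition) exposes this structure, can be illustrated with a toy linear rate model. The sketch below is an illustration only, not the speaker's network; the chain size N, the discrete-time dynamics, and the use of NumPy/SciPy are assumptions.

```python
import numpy as np
from scipy.linalg import schur

# Minimal sketch (not the speaker's model): a purely feedforward chain
# of N linear rate units, where unit i drives unit i+1.  A brief input
# pulse propagates down the chain, so the network as a whole holds a
# transient memory trace without any positive feedback or attractor.
N = 10
W = np.diag(np.ones(N - 1), k=-1)   # connectivity: subdiagonal chain

# W is nilpotent, so every exact eigenvalue is 0 and there are no
# long-lasting eigenmodes; eigenvalue analysis alone misses the
# functionally feedforward structure of such a non-normal matrix.
print("eigenvalues (numerical):", np.round(np.linalg.eigvals(W), 2))

# The real Schur decomposition W = Q T Q^T instead yields an
# orthonormal basis Q in which T is triangular, making the
# feedforward ordering of modes explicit.
T, Q = schur(W, output="real")
print("T is upper triangular:", np.allclose(T, np.triu(T)))

# Discrete-time dynamics r(t+1) = W r(t): a pulse delivered to the
# first unit at t = 0 travels down the chain, so summed network
# activity persists for ~N time steps before falling off the end.
r = np.zeros(N)
r[0] = 1.0
for t in range(N):
    print(f"t={t:2d}  active unit={int(np.argmax(r))}  total activity={r.sum():.1f}")
    r = W @ r
```

In this toy setting the memory lifetime is set by the length of the chain rather than by any slow eigenmode, which is the qualitative point the abstract makes about feedforward integration.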