Associative content-addressable networks with exponentially many robust stable states for high-capacity pattern labeling and recognition
Mathematical Biology

Speaker: Rishi Chaudhuri, UC Davis
Location: 2112 MSB
Start time: Mon, Jun 3 2019, 3:10PM
The brain must store very large numbers of memories and then recover them in the presence of noise. Memory models have traditionally focused on storing and exactly recalling arbitrary patterns; under these requirements, a model network with N neurons can store only O(N) patterns, which is likely inadequate for the many scenes and events an organism encounters over a lifetime. By contrast, the patterns that the brain remembers well are not random, and many everyday tasks require recognizing, labeling, or otherwise acting on a previously seen input without recalling it completely. In this talk I will present results on the construction of networks that store and robustly retrieve extremely large numbers of structured patterns.
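As background for the O(N) capacity limit mentioned above, here is a minimal sketch of the classical setting (not part of the talk's construction): a Hopfield network with Hebbian weights, whose sign-threshold dynamics retrieve a stored pattern from a corrupted cue. All parameters (N = 200 neurons, P = 20 patterns, 10% corruption) are illustrative assumptions; recall only succeeds reliably because P is kept well below the O(N) limit.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 20                      # N neurons, P random +/-1 patterns (P << N)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weights: W = (1/N) * sum_mu xi^mu (xi^mu)^T, with zero self-coupling.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(state, sweeps=20):
    """Asynchronous sign-threshold updates; the network energy is
    non-increasing under these updates, so a fixed point is reached."""
    state = state.copy()
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(N):
            new_si = 1 if W[i] @ state >= 0 else -1
            if new_si != state[i]:
                state[i] = new_si
                changed = True
        if not changed:
            break                   # fixed point: a stable state of the network
    return state

# Corrupt 10% of one stored pattern, then let the dynamics error-correct it.
probe = patterns[0].copy()
probe[rng.choice(N, size=N // 10, replace=False)] *= -1
recovered = recall(probe)
print("overlap with stored pattern:", recovered @ patterns[0] / N)  # ~1.0
```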
I will first construct a mapping between neural networks and error-correcting codes on expander graphs. Using this mapping, I will show that a canonical model of neural memory (the Hopfield network) with N neurons can possess and efficiently error-correct ~2^N stable states (resolving a long-standing theoretical question). These exponentially many stable states can be used to construct a robust, high-capacity pattern labeler (or locality-sensitive hash function), and I will show how this pattern labeler can serve as a building block in a variety of useful neural computations, ranging from recognizing whether an image has been seen before to robustly mapping a large number of stimuli to responses.
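The speaker's construction itself is not given in the abstract; the following is only a generic illustration of the coding-theoretic idea, using a random sparse parity-check (LDPC-style) code in place of a true expander code. Its codewords, at least 2^(N-M) of them, are exactly the fixed points of bit-flip error-correcting dynamics, so a network of N = 60 binary units here has at least 2^30 stable states. All parameters (N, M, dv) and the sequential bit-flip decoder are assumptions for illustration; real expander-code guarantees require larger, carefully constructed graphs.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, dv = 60, 30, 3                # 60 bit-units, 30 parity checks, 3 checks per bit

# Sparse random parity-check matrix H: each bit participates in dv checks.
H = np.zeros((M, N), dtype=int)
for j in range(N):
    H[rng.choice(M, size=dv, replace=False), j] = 1

def syndrome(x):
    """Indicator of violated parity checks (syndrome over GF(2))."""
    return (H @ x) % 2

def bit_flip_decode(x, max_iters=100):
    """Sequential bit flipping: while some bit has a strict majority of its
    checks unsatisfied, flip the worst such bit. Each flip strictly reduces
    the number of unsatisfied checks, so the loop terminates."""
    x = x.copy()
    for _ in range(max_iters):
        syn = syndrome(x)
        if not syn.any():
            break                   # x is a codeword: a stable state
        votes = H.T @ syn           # unsatisfied checks touching each bit
        worst = np.argmax(votes)
        if 2 * votes[worst] <= dv:  # no majority anywhere; decoder is stuck
            break
        x[worst] ^= 1
    return x

# The all-zeros word is always a codeword; corrupt a few bits and error-correct.
x = np.zeros(N, dtype=int)
x[rng.choice(N, size=3, replace=False)] = 1
decoded = bit_flip_decode(x)
print("errors remaining:", decoded.sum())

# The code has 2^(N - rank(H)) >= 2^(N - M) codewords, i.e. the dynamics has
# exponentially many stable states in N (here at least 2^30).
```

Viewed as a pattern labeler, such a network maps any input to a nearby stable state, so similar inputs receive the same label; this is the locality-sensitive-hashing behavior described above.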
I will end by presenting preliminary results showing how sparse feedforward and recurrent networks of neurons might communicate efficiently using expander-graph architectures.