Neural networks can learn any low complexity pattern
Special Events
Speaker: | Sourav Chatterjee, Stanford University |
Related Webpage: | https://profiles.stanford.edu/sourav-chatterjee |
Location: | 1147 MSB |
Start time: | Thu, May 22 2025, 4:10PM |
I will present recent work showing that feedforward neural networks can, in principle, learn patterns that can be expressed as a short program. An example is as follows. Let N be a large number, and suppose our data consists of a sample of X’s and Y’s, where each X is a randomly chosen number between 1 and N, and the corresponding Y is 1 if X is prime and 0 if not. The sample size n is negligible compared to N. If we fit to this data a neural network that is “sparsest” in a suitable sense, it turns out that the network will be able to accurately predict whether a newly chosen X is prime. This is because the property of being prime can be tested by a short program. This is based on joint work with Tim Sudijono. The talk will be accessible to those with no background in neural networks.
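The data-generating setup described in the abstract can be sketched in a few lines of Python; the function names `is_prime` and `sample_dataset` are illustrative choices, not part of the talk, and the "sparsest network" fitting step itself is not shown here. The point of the sketch is that the labels come from a short program:

```python
import random

def is_prime(m: int) -> bool:
    """Trial-division primality test -- a 'short program' that generates the labels."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def sample_dataset(N: int, n: int, seed: int = 0):
    """Draw n pairs (X, Y): X uniform in {1, ..., N}, Y = 1 iff X is prime."""
    rng = random.Random(seed)
    xs = [rng.randint(1, N) for _ in range(n)]
    ys = [int(is_prime(x)) for x in xs]
    return xs, ys

# Illustrative scale: sample size n negligible compared to N.
xs, ys = sample_dataset(N=10**9, n=1000)
```

A network fit to `(xs, ys)` only sees a vanishing fraction of {1, ..., N}, so any accurate generalization must exploit the low-complexity rule behind the labels rather than memorization.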
This is a joint Math/Stat Colloquium. Reception starts at 3:45pm in the 1st-floor lobby.