Locally stationary distributions: inference and optimization beyond rapid mixing
Probability
Speaker: | David Wu, UC Berkeley |
Location: | 2112 MSB |
Start time: | Thu, Nov 7 2024, 12:10PM |
Markov chain Monte Carlo (MCMC) sampling algorithms are an important algorithmic primitive for inference, optimization, and statistical estimation in high-dimensional settings. To guarantee downstream performance, it suffices for MCMC to converge, or mix, to a target distribution. Unfortunately, in many relevant scenarios, the mixing time of canonical MCMC algorithms is exponential in the dimension, rendering this theoretical guarantee impractical. In this talk, we study Markov chains with potentially slow mixing times, based on an analogy to stationary points in optimization. In particular, under mild assumptions, any reversible Markov chain converges to a “locally stationary” distribution in polynomially many steps. Such locally stationary distributions enjoy various properties that make them amenable to analysis in interesting inference and optimization settings. Using this framework, we prove that canonical MCMC algorithms with exponential mixing time can nevertheless recover (1) large independent sets in triangle-free graphs and (2) good estimates of the hidden communities in a 2-community stochastic block model. Our analysis reveals a connection between a certain MCMC algorithm and the ubiquitous power iteration for spiked matrix models. Based on joint work with Kuikui Liu (MIT), Sidhanth Mohanty (MIT), Amit Rajaraman (MIT), and Prasad Raghavendra (UC Berkeley) (arXiv:2405.20849).
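The power iteration mentioned in the abstract can be illustrated on a toy spiked matrix model. The sketch below is a minimal hypothetical example, not the construction from the talk: it plants a rank-one spike λ·vvᵀ inside symmetric Gaussian noise and runs power iteration, which recovers a vector correlated with the hidden signal when λ exceeds the spectral threshold. All parameter choices (n, λ, iteration count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: rank-one spike lam * v v^T plus symmetric noise.
n, lam = 500, 2.5
v = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)  # hidden unit-norm signal
G = rng.normal(size=(n, n))
W = (G + G.T) / np.sqrt(2 * n)                    # GOE-style noise, spectral edge ~ 2
M = lam * np.outer(v, v) + W

# Power iteration: repeatedly apply M and renormalize.
x = rng.normal(size=n)
x /= np.linalg.norm(x)
for _ in range(200):
    x = M @ x
    x /= np.linalg.norm(x)

overlap = abs(x @ v)  # correlation of the iterate with the hidden spike
print(f"overlap with hidden signal: {overlap:.3f}")
```

With λ above the threshold, the top eigenvector of M correlates nontrivially with v, so the iterate achieves large overlap; below the threshold the noise dominates and power iteration finds nothing. This spectral phase transition is the standard backdrop against which the MCMC-to-power-iteration connection in the talk is interesting.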