ORIE Colloquium: Mark Huber M.S. '97, Ph.D. '99 (Claremont McKenna) - Adaptive Estimation for Monte Carlo Data


Password: 2020


Techniques such as perfect simulation can give unbiased estimates for high-dimensional integrals of interest, but the errors in those estimates remain difficult to bound. Classical estimators for the mean and standard deviation use a fixed number of samples, and with such estimates it is usually not possible to describe the relative error precisely. \emph{Adaptive} estimators use a random number of samples in forming their estimate, and so are well suited to data coming from Monte Carlo simulations. In this talk, I will present a range of adaptive estimators that have guaranteed bounds on the relative error of the estimate. First, an adaptive estimator for the mean $p$ of a $\{0, 1\}$ random variable whose relative error distribution is independent of $p$. Next, an adaptive extension of the Jerrum, Valiant, and Vazirani self-reducibility method in which the result is a simple Poisson rather than a product of binomials. Third, an estimator for $[0, 1]$ random variables that matches the best possible running time to first order. Last, an estimator for random variables $X$ with $\mathbb{E}[X] \leq \alpha \mathsf{SD}(X)$, where $\alpha$ is known, that is best possible up to first order.
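The first estimator in the list can be illustrated with a short sketch. This is not necessarily the speaker's exact algorithm, and the function name and interface are assumptions; it is one known construction with the stated property: attach an independent Exp(1) draw to each Bernoulli trial and stop at the $k$-th success. A geometric sum of Exp(1) variables is Exp($p$), so the accumulated total $R$ is Gamma($k$, $p$), making $(k-1)/R$ an unbiased estimate of $p$ whose relative error $(k-1)/(pR) \sim (k-1)/\mathrm{Gamma}(k,1)$ does not depend on $p$.

```python
import random

def adaptive_bernoulli_mean(sample, k, rng=random):
    """Adaptively estimate the mean p of a {0,1} random variable.

    Draw Bernoulli(p) samples, attaching an independent Exp(1) draw to
    each trial, until k successes have occurred.  The exponential total
    R is then Gamma(k, p), so (k - 1) / R is unbiased for p and its
    relative error distribution is free of p.  (Illustrative sketch;
    interface is an assumption, not taken from the talk.)
    """
    successes = 0
    total = 0.0
    while successes < k:
        total += rng.expovariate(1.0)  # Exp(1) "cost" attached to this trial
        successes += sample()          # one 0/1 draw from the unknown coin
    return (k - 1) / total

# Example usage with a synthetic coin of unknown-to-the-estimator bias:
rng = random.Random(1)
p_true = 0.3
estimate = adaptive_bernoulli_mean(lambda: int(rng.random() < p_true), k=100, rng=rng)
```

Note the adaptivity: the number of samples consumed is random (roughly $k/p$ on average), which is exactly what allows the relative error distribution to be pinned down in advance regardless of $p$.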

Mark Huber received his Ph.D. from Cornell in Operations Research and Industrial Engineering in 1999 before taking an NSF postdoc at Stanford. He was then at Duke University in the Departments of Mathematics and Statistical Science, where he received an NSF CAREER award. Mark then moved to Claremont McKenna College in California, where he is the Fletcher-Jones Foundation Professor of Mathematics and Statistics and George R. Roberts Fellow. At CMC he has published two texts: one in his research area of perfect simulation, and an undergraduate text in probability. He is also currently the Program Director of Data Science at CMC.