2023-04-17 · 16:00 UTC
As scientific experiments and data analyses become increasingly complex and costly, there is a growing need to make inferences and decisions while managing computational costs. In many scientific applications, multiple models of varying fidelity, trading off cost and accuracy, are naturally available. In this talk, we discuss two settings where multi-fidelity models can be leveraged. First, we consider Bayesian optimal experimental design, where the goal is to select designs that are maximally informative while balancing their cost. We present a framework for multi-fidelity active Bayesian experimental design using tempering, and we demonstrate its application to optimizing spatial genomics experiments. Next, we consider problems where the goal is to estimate scientific parameters of a system or optimize a design using Markov chain Monte Carlo (MCMC), and the researcher has access to approximate models that become arbitrarily more accurate at the cost of more computation. We show how to design multi-fidelity MCMC algorithms that leverage an increasingly accurate sequence of lower-fidelity models to obtain asymptotically exact estimates, and we demonstrate the proposed approach on several applications, including ones involving Gaussian process models.
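To make the multi-fidelity MCMC idea concrete, here is a minimal sketch of one classical way to exploit a cheap low-fidelity model inside MCMC: two-stage delayed-acceptance Metropolis-Hastings, where a cheap surrogate density screens proposals and the expensive model is only evaluated on proposals that survive the screen. This is an illustrative toy (a standard normal "expensive" target and a wider normal surrogate), not the specific algorithm presented in the talk; all names and densities here are assumptions for illustration.

```python
import math
import random

def log_target(x):
    # "Expensive" high-fidelity model: standard normal log-density (up to a constant).
    return -0.5 * x * x

def log_surrogate(x):
    # Cheap low-fidelity approximation: a wider normal, deliberately misspecified.
    return -0.5 * x * x / 1.5

def delayed_acceptance_mh(n_steps, step=1.0, seed=0):
    """Two-stage Metropolis-Hastings with a symmetric random-walk proposal.

    Stage 1 accepts/rejects using only the cheap surrogate; stage 2 corrects
    the surviving proposals with the expensive target, so the chain still
    targets the exact high-fidelity distribution.
    """
    rng = random.Random(seed)
    x = 0.0
    lt_x, ls_x = log_target(x), log_surrogate(x)
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        ls_y = log_surrogate(y)
        # Stage 1: screen with the surrogate (no expensive evaluation yet).
        if math.log(rng.random()) < ls_y - ls_x:
            lt_y = log_target(y)
            # Stage 2: correct with the expensive target; the acceptance
            # ratio is p(y) p_approx(x) / (p(x) p_approx(y)).
            if math.log(rng.random()) < (lt_y - lt_x) - (ls_y - ls_x):
                x, lt_x, ls_x = y, lt_y, ls_y
        samples.append(x)
    return samples
```

Because stage 2 divides out the surrogate ratio used in stage 1, the overall kernel satisfies detailed balance with respect to the expensive target; the surrogate only changes how often the expensive model is evaluated, not the stationary distribution.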
Diana Cai is a Ph.D. candidate in computer science at Princeton University, where she is advised by Ryan Adams and Barbara Engelhardt. Diana is interested in developing reliable computational tools to guide discovery in scientific applications. Her research spans the areas of machine learning and statistics, with an emphasis on understanding and mitigating model misspecification. Previously, Diana obtained an M.A. in computer science from Princeton University, an M.Sc. in statistics from the University of Chicago, and an A.B. in computer science and statistics from Harvard University.