Structured Inter-domain Inducing Points for Variational Gaussian Processes

Shengyang Sun · University of Toronto

2022-04-04

Sparse variational Gaussian processes adopt inducing points to overcome the computational roadblocks in GP inference. While standard inducing points are placed within the input domain, inter-domain inducing points offer far greater flexibility by operating in a feature domain. In this talk, I will first describe how the discrete Fourier transform can be used to design structured inter-domain inducing points that yield substantial computational savings for GP inference. We demonstrate that the proposed approach provides a high-fidelity variational GP approximation while retaining general applicability. Second, I will present an interpretation of neural network activations as inter-domain inducing points, which establishes an equivalence between finite-width neural networks and variational Gaussian processes.
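For context, the sketch below illustrates the standard sparse variational GP predictive equations that both standard and inter-domain inducing points plug into. It is a minimal NumPy illustration, not the speaker's implementation: the function names (`rbf_kernel`, `svgp_predict`) and the choice of a squared-exponential kernel are assumptions made for the example. The key point is that an inter-domain or DFT-structured scheme would change only how the covariances `Kuu` and `Kuf` are formed (and how cheaply the `Kuu` solve can be done), while the predictive formulas stay the same.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between rows of A and B (illustrative choice)."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def svgp_predict(X_star, Z, q_mu, q_sqrt, kernel=rbf_kernel):
    """Predictive mean/variance of a sparse variational GP.

    q(u) = N(q_mu, S), S = q_sqrt @ q_sqrt.T, is the variational distribution over
    the M inducing variables u.  With standard inducing points, u_m = f(z_m) and
    Kuu/Kuf are plain kernel evaluations; an inter-domain scheme instead defines
    u_m as feature-domain functionals of f, which changes only how Kuu and Kuf
    are computed, not the equations below.
    """
    Kuu = kernel(Z, Z) + 1e-6 * np.eye(len(Z))   # M x M inducing covariance
    Kuf = kernel(Z, X_star)                      # M x N* cross-covariance
    Kff = kernel(X_star, X_star)                 # N* x N* test covariance

    # O(M^3) factorization; structured (e.g. DFT-based) inter-domain inducing
    # points aim to make this step much cheaper by giving Kuu special structure.
    L = np.linalg.cholesky(Kuu)
    A = np.linalg.solve(L, Kuf)                  # L^{-1} Kuf
    B = np.linalg.solve(L.T, A)                  # Kuu^{-1} Kuf
    mean = B.T @ q_mu                            # Kfu Kuu^{-1} m

    S = q_sqrt @ q_sqrt.T
    cov = Kff - A.T @ A + B.T @ S @ B            # Kff - Kfu Kuu^{-1}(Kuu - S)Kuu^{-1} Kuf
    return mean, np.diag(cov)

# Toy usage with made-up numbers.
rng = np.random.default_rng(0)
Z = rng.uniform(-3, 3, size=(20, 1))             # inducing inputs
q_mu = rng.normal(size=(20, 1))                  # variational mean
q_sqrt = 0.1 * np.eye(20)                        # Cholesky factor of variational covariance
X_star = np.linspace(-3, 3, 5)[:, None]
mu, var = svgp_predict(X_star, Z, q_mu, q_sqrt)
```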

Shengyang Sun is a Ph.D. candidate in the Machine Learning group at the University of Toronto. His research focuses on Bayesian machine learning, including Gaussian processes and Bayesian deep learning. He aims to leverage Bayesian probabilistic methods to improve the quality, reliability, and efficiency of machine learning systems.