Nicholas H. Nelsen

NSF Graduate Research Fellow and Ph.D. Candidate

California Institute of Technology

About Me

Welcome! I am a fourth-year graduate student in the Division of Engineering and Applied Science at Caltech, where I work with my advisor Prof. Andrew M. Stuart. My research interests are in theory and algorithms for high-dimensional scientific and data-driven computation.

My current work centers on operator learning: regressing, from (noisy) data, operators that map between infinite-dimensional (function) spaces. I apply these methods to forward and inverse problems, especially those arising from parametric partial differential equations (PDEs) that model physical systems. To this end, I develop and utilize tools from machine learning, model reduction, numerical analysis, and statistics. Please refer to my curriculum vitae and my publications page to learn more about my background and research experience.
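
Roughly speaking, and with notation chosen here purely for illustration (the symbols below are labels for this page, not drawn from any particular paper), the supervised operator learning problem can be sketched as regularized regression over function spaces:

```latex
% Illustrative formulation of supervised operator learning.
% Notation is schematic and chosen for exposition only.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Given $N$ noisy input--output pairs of functions,
\begin{equation*}
  y_n = G^\dagger(u_n) + \eta_n \,, \qquad n = 1, \dots, N \,,
\end{equation*}
where the unknown operator $G^\dagger$ maps between infinite-dimensional
spaces $\mathcal{U}$ and $\mathcal{Y}$, an estimator $\widehat{G}$ may be
obtained by regularized empirical risk minimization over a hypothesis
class $\mathcal{H}$:
\begin{equation*}
  \widehat{G} \in \operatorname*{arg\,min}_{G \in \mathcal{H}}
  \ \frac{1}{N} \sum_{n=1}^{N}
  \lVert y_n - G(u_n) \rVert_{\mathcal{Y}}^{2}
  + \lambda \lVert G \rVert_{\mathcal{H}}^{2} \,.
\end{equation*}
\end{document}
```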

I am fortunate to be supported by an NSF Graduate Research Fellowship. I obtained my M.Sc. from Caltech in 2020. Before starting doctoral study in the fall of 2018, I worked on Lagrangian particle methods for PDEs as a summer research intern in the Center for Computing Research at Sandia National Laboratories. I obtained my B.Sc. (Mathematics), B.S.M.E., and B.S.A.E. degrees from Oklahoma State University in 2018.

nnelsen [at] caltech [dot] edu

Recent News

  • 2022/09 (Upcoming): I am giving an invited talk about "Scalable Uncertainty Quantification with Random Features" in MS85: Recent Advances in Kernel Methods for Computing and Learning, part of SIAM MDS22 in San Diego, CA. There, I am also co-organizing MS81: Provable Guarantees for Learning Dynamical Systems.

  • 2022/08: I am giving an invited virtual talk about my joint work on operator learning in MS1714: Advances in Scientific Machine Learning for High-Dimensional Many-Query Problems, part of the WCCM–APCOM in Yokohama, Japan.

  • 2022/06 (New): An improved version of my work on linear operator learning is now available on arXiv. In it, three fundamental principles reveal which types of linear operators, training data, and distribution shift lead to reduced sample size requirements for supervised learning in infinite dimensions.

  • 2022/05: I am giving an invited virtual talk about "Noisy Linear Operator Learning as an Inverse Problem" in WS3: PDE-constrained Bayesian Inverse Problems, part of the Computational Uncertainty Quantification thematic programme at the Erwin Schrödinger Institute in Vienna, Austria.

  • 2022/01: This year I am co-organizing the CMX Student/Postdoc Seminar in the Caltech Department of Computing and Mathematical Sciences.