Nicholas H. Nelsen
Ph.D. Candidate
Amazon AI4Science Fellow
California Institute of Technology
About Me
I am on the academic job market and am seeking full-time positions with start dates in Fall 2024.
Welcome! I am a final-year graduate student in applied mathematics at Caltech, where I am advised by Prof. Andrew M. Stuart. Broadly, my research interests live at the intersection of computational mathematics and statistics. Using rigorous analysis and domain-specific insight, I develop novel data-driven methods for high- and infinite-dimensional problems, establish theoretical guarantees on the reliability and trustworthiness of these methods, and apply them in the physical and data sciences.
Much of my current research involves the design and analysis of efficient machine learning algorithms that are tailor-made for scientific and other types of continuum data. I study ways to achieve better accuracy with fewer training data and develop principled uncertainty quantification techniques for operator learning. My work is motivated by scientific computing tasks that involve complex physical systems or inverse problems, where the data are often heterogeneous, noisy, incomplete, and scarce. I deploy the methodologies arising from my research in several application areas, including medical imaging, climate modeling, and materials science. Please refer to my curriculum vitae and publications page to learn more about my background and research experience.
I am fortunate to be supported by the Amazon/Caltech AI4Science Fellows Program and was formerly supported by an NSF Graduate Research Fellowship (2018-2023). In 2020, I obtained my M.Sc. from Caltech, and before starting doctoral study in the fall of 2018, I worked on Lagrangian particle methods as a summer research intern in the Center for Computing Research at Sandia National Laboratories. I obtained my B.Sc. (Mathematics), B.S.M.E., and B.S.A.E. degrees from Oklahoma State University in 2018.
nnelsen [at] caltech [dot] edu
Recent News
2024/02 (new): My new preprint provides "An operator learning perspective on parameter-to-observable maps" (with Daniel Z. Huang and Margaret Trautner). This work introduces and implements Fourier Neural Mappings, a principled extension of FNOs to maps with finite-dimensional inputs and/or outputs. For the task of predicting finite-dimensional quantities of interest (QoIs), a theoretical analysis explores the relative difficulty of full-field operator learning versus end-to-end learning of the QoIs. The accompanying code is publicly available here.
2024/02 (new): I am presenting my spotlight work on function-valued random features in MS146: Learning High-Dimensional Functions: Approximation, Sampling, and Algorithms at the SIAM Conference on Uncertainty Quantification (UQ24) in Trieste, Italy. There, I am also co-organizing a minisymposium on Recent Advances in Scalable Active Learning and Optimal Experimental Design.
2024/02: I presented my work on the "Foundations of Data-Efficient and Uncertainty-Aware Scientific Machine Learning" at the Joint ASE/Oden Institute Seminar at UT Austin and the MAE Colloquium at Cornell University.
2024/01: I gave a talk at the Cornell Scientific Computing and Numerics (SCAN) seminar.