In 2026, I will join the faculty of UT Austin's Oden Institute and Department of Aerospace Engineering and Engineering Mechanics (ASE-EM). Prospective Ph.D. students are welcome to contact me by email.
Klarman Fellow
Department of Mathematics
Cornell University
I am joining UT Austin as an Assistant Professor in Fall 2026
Welcome! I am a Klarman Fellow in the Department of Mathematics at Cornell University, where I am hosted by Prof. Alex Townsend and Prof. Yunan Yang. Broadly, my research interests lie at the intersection of computational mathematics and statistics. Using rigorous analysis and domain-specific insight, I develop novel data-driven machine learning methods for high- and infinite-dimensional problems, establish theoretical guarantees on the reliability and trustworthiness of these methods, and apply them in the physical and information sciences. My work blends operator learning with ideas from inverse problems, generative modeling, and uncertainty quantification. A current focus of my research is data science tasks formulated in the space of probability distributions.
Previously, I was an NSF Postdoc in the Department of Mathematics at MIT. I received my Ph.D. from Caltech in 2024, where I was fortunate to be advised by Prof. Andrew M. Stuart and supported by the Amazon AI4Science Fellows Program and an NSF Graduate Research Fellowship. My doctoral dissertation was awarded two "best thesis" prizes, one in applied mathematics and another in engineering. I obtained my M.Sc. from Caltech in 2020 and my B.Sc. (Mathematics), B.S.M.E., and B.S.A.E. degrees from Oklahoma State University in 2018.
nnelsen [at] cornell [dot] edu
2025/07 (new): I am excited to begin an appointment as a Klarman Fellow in Cornell University's College of Arts & Sciences, where I am hosted by the Department of Mathematics.
2025/06: I finished a productive year as an NSF Postdoc (MSPRF) in MIT Math. Many thanks to Philippe Rigollet and Youssef Marzouk for being excellent faculty hosts!
2025/05 (new): What distribution should training data be sampled from to best approximate a target function or operator? Based on theoretical insights, our preprint on Learning Where to Learn: Training Distribution Selection for Provable Out-of-Distribution Performance introduces two adaptive, target-dependent algorithms to answer this question. The optimized training distributions produced by our methods empirically outperform traditional nonadaptive or target-independent data distributions. This is joint work with Nicolas Guerra and Yunan Yang.
2025/02: Our work on hyperparameter optimization for randomized algorithms is now published in Statistics and Computing.
2024/10 [In the media]: I was featured in an Okstate alumni highlight story about my research career and future plans. Thanks for the write-up!