In August 2026, I will join the Oden Institute and ASE/EM faculty at UT. Prospective Ph.D. students and postdocs are welcome to contact me by email.
Incoming Assistant Professor
The University of Texas at Austin
I am joining UT Austin as an Assistant Professor in August 2026. My research group has several openings at the Ph.D. and postdoc levels.
Welcome! I am an incoming Assistant Professor at UT Austin starting August 2026, where I will hold a joint appointment in the Oden Institute and the Department of ASE/EM. Until then, I am a Klarman Fellow in the Department of Mathematics at Cornell University, where I am hosted by Prof. Alex Townsend and Prof. Yunan Yang. Broadly, my research interests lie at the intersection of computational mathematics and statistics. Using rigorous analysis and domain-specific insight, I develop novel artificial intelligence methods for high- or infinite-dimensional problems, establish theoretical guarantees on the reliability and trustworthiness of these methods, and apply them in the physical and information sciences. My work blends operator learning with ideas from inverse problems, generative modeling, and uncertainty quantification. A current focus of my research is machine learning tasks formulated in the space of probability distributions.
Previously, I was an NSF Postdoctoral Fellow in the Department of Mathematics at MIT. I received my Ph.D. from Caltech in 2024, where I worked with Prof. Andrew M. Stuart and was supported by the Amazon AI4Science Fellows Program and an NSF Graduate Research Fellowship. My doctoral dissertation was awarded two "best thesis" prizes, one in applied mathematics and another in engineering. I obtained my M.Sc. from Caltech in 2020 and my B.Sc. (Mathematics), B.S.M.E., and B.S.A.E. degrees from Oklahoma State University in 2018.
nnelsen [at] oden [dot] utexas [dot] edu
2026/04 (new): I am giving two invited talks about my work on operator learning + inverse problems: one in the Laboratory for Applied Mathematics, Numerical Software, and Statistics (LANS) Seminar at Argonne National Laboratory and another in the IMA Data Science Seminar at the University of Minnesota.
2026/03 (new): In a new preprint with Simone Brugiapaglia and Nicola Rares Franco, "A short tour of operator learning theory," we review known sample complexity bounds for trained neural operators, contrast them with minimax statistical limits, and highlight key open problems.
2025/12: Our survey chapter on "Operator learning meets inverse problems" has been accepted for publication in the Handbook of Numerical Analysis, Vol. 27: Machine Learning Solutions for Inverse Problems. I am excited to present this and recent work on approximating the EIT inverse map in my plenary talk at the Inverse Days 2025 conference in Helsinki, Finland.
2025/11: Neural operators are universal, but can they approximate data-to-parameter solution maps of nonlinear inverse problems? In a new preprint on the extension and neural operator approximation of the electrical impedance tomography inverse map, we provide an affirmative answer both theoretically and numerically, even when the measurements are noisy. This is joint work with Maarten de Hoop, Nikola Kovachki, and Matti Lassas.