News Archive
2023
2023/06: I am participating in the INdAM Learning for Inverse Problems workshop in Rome, Italy, and the BIRS workshop on Scientific Machine Learning at the Banff Centre in Alberta, Canada.
2023/05: Our new preprint establishes state-of-the-art Error Bounds for Learning with Vector-Valued Random Features (joint work with Samuel Lanthaler). The theory holds in a general infinite-dimensional setting (applying to operator learning in particular) and is developed with a matrix-free analysis. This yields the sharpest rates known to date (free of log factors) for random feature ridge regression.
2023/05: My paper on linear operator learning was published in the SIAM/ASA Journal on Uncertainty Quantification.
2023/05: I am giving an invited talk in the Level Set Seminar at the UCLA Department of Mathematics.
2023/04: I am speaking at the Oden Institute's inaugural Workshop on Scientific Machine Learning at UT Austin, the Workshop on Establishing Benchmarks for Data-Driven Modeling of Physical Systems at USC, and the Southern California Applied Mathematics Symposium (SoCAMS) at UC Irvine.
2023/03: I have been selected as a 2022-2023 Amazon/Caltech AI4Science Fellow! The program recognizes researchers who have had a remarkable impact in artificial intelligence and machine learning, and in their application to fields beyond computer science.
2023/02: I am giving an invited talk about "Learning the Electrical Impedance Tomography Inversion Operator" in MS46: Goal-Oriented and Context-Aware Scientific Machine Learning, part of SIAM CSE23 in Amsterdam, The Netherlands. There, I am also co-organizing MS370 and MS406: Operator Learning in the Physical and Data Sciences, Parts I & II.
2022
2022/12: I participated in the International Conference on New Trends in Computational and Data Sciences at Caltech.
2022/11: Our paper on the theory of linear operator learning was accepted for publication in the SIAM/ASA Journal on Uncertainty Quantification.
2022/09: Professor Joel Tropp's course lecture notes on "Matrix Analysis" are now publicly available and include Chapter III.8, "Operator-Valued Kernels," which I wrote.
2022/09: I am giving an invited talk about "Scalable Uncertainty Quantification with Random Features" in MS85: Recent Advances in Kernel Methods for Computing and Learning, part of SIAM MDS22 in San Diego, CA. There, I am also co-organizing MS81: Provable Guarantees for Learning Dynamical Systems.
2022/08: I am giving an invited virtual talk about my joint work on operator learning in MS1714: Advances in Scientific Machine Learning for High-Dimensional Many-Query Problems, part of WCCM–APCOM in Yokohama, Japan.
2022/06: An improved version of my work on linear operator learning is now available on arXiv. In it, three fundamental principles identify the types of linear operators, training data, and distribution shift that reduce sample size requirements for supervised learning in infinite dimensions.
2022/06: I am giving an invited talk about our work on learned surrogates for parametric PDEs in MS210: Reduced-Order and Surrogate Models for Mechanics of Porous Media, part of the Engineering Mechanics Institute Conference at Johns Hopkins University, Baltimore, MD.
2022/05: I am giving an invited virtual talk about "Noisy Linear Operator Learning as an Inverse Problem" in WS3: PDE-constrained Bayesian Inverse Problems, part of the Computational Uncertainty Quantification thematic programme at the Erwin Schrödinger Institute in Vienna, Austria.
2022/05: I am giving an invited virtual talk about "Bayesian Posterior Contraction for Linear Operator Learning" at the AMS Spring Western Sectional Meeting Special Session on Mathematical Advances in Bayesian Statistical Inversion and Markov Chain Monte Carlo Sampling Algorithms.
2022/04: I am co-organizing a minisymposium on Operator Learning in PDEs, Inverse Problems, and UQ at SIAM UQ22 in Atlanta, GA, where I will also be speaking about our recent work on "Convergence Rates for Learning Linear Operators from Noisy Data".
2022/01: This year I am co-organizing the CMX Student/Postdoc Seminar in the Caltech Department of Computing and Mathematical Sciences.
2021
2021/11: I am virtually attending the Deep Learning and Partial Differential Equations workshop as a part of the Mathematics of Deep Learning programme at the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK.
2021/10: From October 31st to November 5th, I am virtually attending the Statistical Aspects of Non-Linear Inverse Problems workshop, hosted by the Banff International Research Station for Mathematical Innovation and Discovery (BIRS), as an invited participant.
2021/10: I am giving an invited talk at the Caltech CMX Student Seminar on some of my Ph.D. work on operator regression.
2021/09: My paper on Banach space random feature methods, joint work with A.M. Stuart, was published in the SIAM Journal on Scientific Computing.
2021/09: I am virtually attending the Deep Learning and Inverse Problems workshop as a part of the Mathematics of Deep Learning programme at the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK.
2021/08: My new preprint on "Convergence Rates for Learning Linear Operators from Noisy Data," joint with M.V. de Hoop, N.B. Kovachki, and A.M. Stuart, is now available. In it, we prove that a class of compact, bounded, and even unbounded operators can be stably estimated from noisy input-output pairs.
2021/07: I gave a SIAM AN21 talk on July 19th titled "Function Space Random Feature Methods for Learning Parametric PDE Solution Operators," with particular emphasis on fast learned surrogates for Bayesian inverse problems.
2021/06: I presented my forthcoming joint work on "Learning Unbounded Operators" to the Geo-Mathematical Imaging Group at Rice University on June 15th.
2021/04: I was admitted to candidacy for the Ph.D. degree.
2021/03: At SIAM CSE21, I co-organized (with Nathaniel Trask and Ravi Patel) the virtual minisymposia "Learning Operators from Data" and "Machine Learning for Surrogate Model and Operator Discovery."
2021/01: I was invited to speak at the SIAM Annual Meeting (AN21) virtual minisymposium "Deep Learning for High-Dimensional Parametric PDEs" in July; looking forward to it!
2020
2020/12: I participated in the virtual Workshop on Mathematical Machine Learning and Applications hosted by the CCMA at Penn State.
2020/11: I virtually gave an invited talk about my work on random feature methods for parametric PDEs in the numerical analysis and machine learning reading group seminar at the Courant Institute of Mathematical Sciences, New York University.
2020/09: I virtually gave an invited talk (both live and pre-recorded) in the Kernel Methods session of the Second Symposium on Machine Learning and Dynamical Systems at The Fields Institute, Toronto, Canada.
2020/07: I participated in the virtual GAMM Juniors' Summer School on Applied Mathematics and Mechanics, "Learning Models from Data: Model Reduction, System Identification and Machine Learning," at the Max Planck Institute for Dynamics of Complex Technical Systems in Magdeburg, Germany, where I presented a poster. I also attended the MSML2020 online conference at Princeton University.
2020/04: I virtually attended Workshop II: PDE and Inverse Problem Methods in Machine Learning at the IPAM High-Dimensional Hamilton-Jacobi PDEs long program at UCLA, Los Angeles, CA.
2020/02: I participated in the Inverse Problems: Algorithms, Analysis and Applications workshop at Caltech, through the CMX group in the CMS department.