Simons Hour Talk

Multi-Fidelity Methods
Date
Feb 2, 2023, 10:00 am – 11:00 am
Location
Virtual

Event Description

Title: Leveraging Machine Learning for Large-Scale Multi-Fidelity Uncertainty Quantification
Abstract: Recent advances in computational science and high-performance computing enable the simulation of large-scale real-world problems such as turbulent transport in magnetic confinement devices with ever-increasing realism and accuracy. However, these simulations remain computationally expensive even on large supercomputers, which prevents straightforward approaches to important many-query applications such as uncertainty quantification. In contrast, data-driven machine-learning methods such as those based on deep neural networks provide computationally cheaper low-fidelity models, but typically require large training sets of high-fidelity model evaluations to be predictive, which hampers their straightforward application to large-scale, computationally expensive problems. In this presentation, we demonstrate that data-driven low-fidelity models learned from few data samples can nevertheless be used effectively for large-scale uncertainty quantification: the key is to combine these low-fidelity models with the high-fidelity model in a multi-fidelity fashion.

The first part of this presentation focuses on a multi-fidelity Monte Carlo sampling approach in which a hierarchy of data-driven low-fidelity models is constructed using both the full set of uncertain inputs and subsets comprising only selected, important parameters. We illustrate the proposed method in a plasma micro-turbulence simulation scenario concerning turbulence suppression via energetic particles with 14 stochastic parameters, demonstrating that it is about two orders of magnitude more efficient than standard Monte Carlo methods in terms of single-core performance. This translates into a runtime reduction from around eight days to one hour on 240 cores on parallel machines.
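
To make the multi-fidelity idea concrete, the following is a minimal Python sketch of a two-model control-variate Monte Carlo estimator, with toy stand-ins for the high-fidelity simulation and a data-driven low-fidelity surrogate; the model functions, sample sizes, and input distribution are illustrative assumptions, not quantities from the talk.

    import numpy as np

    def f_hi(z):
        # Toy stand-in for an expensive high-fidelity simulation (assumption).
        return np.sin(z).sum(axis=1) + 0.1 * np.prod(np.cos(z), axis=1)

    def f_lo(z):
        # Toy stand-in for a cheap data-driven surrogate trained on few samples (assumption).
        return np.sin(z).sum(axis=1)

    rng = np.random.default_rng(0)
    d = 14                      # number of uncertain inputs, as in the scenario above
    n_hi, n_lo = 50, 5000       # few high-fidelity samples, many cheap low-fidelity samples

    # Evaluate both models at the same (few) high-fidelity sample points.
    z_hi = rng.uniform(-1.0, 1.0, size=(n_hi, d))
    y_hi = f_hi(z_hi)
    y_lo_at_hi = f_lo(z_hi)

    # Evaluate only the cheap low-fidelity model at many additional points.
    z_lo = rng.uniform(-1.0, 1.0, size=(n_lo, d))
    y_lo = f_lo(z_lo)

    # Control-variate coefficient estimated from the correlated pilot samples.
    alpha = np.cov(y_hi, y_lo_at_hi)[0, 1] / np.var(y_lo_at_hi, ddof=1)

    # Multi-fidelity estimate of E[f_hi]: plain Monte Carlo mean of the
    # high-fidelity samples plus a low-fidelity correction term.
    mf_estimate = y_hi.mean() + alpha * (y_lo.mean() - y_lo_at_hi.mean())
    print(mf_estimate)

The estimator remains unbiased for any choice of the coefficient; the variance reduction it achieves depends on how strongly the surrogate correlates with the high-fidelity model, which is why even a surrogate trained on few samples can help.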
The second part of this presentation introduces a context-aware multi-fidelity Monte Carlo method that optimally balances the costs of training low-fidelity models with the costs of Monte Carlo sampling. Our theory shows that low-fidelity models can be overtrained, in stark contrast to traditional surrogate modeling and model reduction techniques, which construct low-fidelity models with the primary goal of approximating the high-fidelity model outputs as accurately as possible. Numerical experiments in a plasma micro-turbulence simulation scenario with 12 uncertain inputs show speedups of up to two orders of magnitude compared to standard methods, corresponding to a runtime reduction from 72 days to about four hours on 32 cores on parallel machines.
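
The overtraining effect can be illustrated with a small budget trade-off computation. Under the standard variance formula for a two-model multi-fidelity Monte Carlo estimator with optimal sample allocation, spending more of a fixed budget on surrogate training improves the surrogate's correlation with the high-fidelity model but leaves less budget for sampling, so beyond some point additional training samples increase the overall estimator variance. The sketch below uses an assumed correlation-versus-training-size curve and assumed costs, purely for illustration.

    import numpy as np

    # Illustrative quantities (assumptions, not values from the talk).
    sigma1 = 1.0          # standard deviation of the high-fidelity output
    w1, w2 = 1.0, 1e-3    # cost per high-/low-fidelity model evaluation
    budget = 1e4          # total budget in units of high-fidelity evaluations

    def rho(n):
        # Assumed correlation of the surrogate with the high-fidelity model as a
        # function of the number n of training samples: improves, then saturates.
        return 0.999 * (1.0 - np.exp(-n / 50.0))

    def mf_variance(n):
        # Variance of a two-model multi-fidelity Monte Carlo estimator with
        # optimal sample allocation, after n * w1 of the budget went into training.
        p = budget - n * w1
        if p <= 0:
            return np.inf
        r = rho(n)
        return (sigma1**2 / p) * (np.sqrt(w1 * (1.0 - r**2)) + np.sqrt(w2 * r**2))**2

    ns = np.arange(1, 2000)
    variances = np.array([mf_variance(n) for n in ns])
    print("variance-minimizing number of training samples:", ns[np.argmin(variances)])

In this toy setting the variance first drops as the correlation improves and then rises again once training consumes too much of the budget, which is the qualitative behavior behind the context-aware trade-off described above.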

Talk time in other timezones:
AEDT  2:00 AM Fri 03 Feb,
JST  12:00 AM Fri 03 Feb,
CET   4:00 PM Thu 02 Feb,
GMT   3:00 PM Thu 02 Feb,
UTC   3:00 PM Thu 02 Feb,
EST  10:00 AM Thu 02 Feb,
CST   9:00 AM Thu 02 Feb,
MST   8:00 AM Thu 02 Feb
Sponsor
Simons Foundation