Technical Area: Uncertainty Quantification
Models of physical systems typically involve inputs and parameters that are determined from empirical measurements and therefore carry a degree of uncertainty. Estimating how this uncertainty propagates into computational model output predictions is crucial for model validation, hypothesis testing, model optimization, and decision support. FASTMath researchers are developing probabilistic methods and software for efficient uncertainty quantification (UQ) in computational models. The approaches under development are summarized below.
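To make the notion of forward uncertainty propagation concrete, the minimal Python sketch below samples uncertain inputs and summarizes the induced spread in a model output; the model `f` and the input distributions are hypothetical placeholders, not a FASTMath application.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(k, q):
    """Hypothetical computational model: output for conductivity k and load q."""
    return q / (4.0 * np.pi * k)

# Uncertain inputs characterized from (hypothetical) measurements.
k_samples = rng.lognormal(mean=0.0, sigma=0.1, size=10_000)   # conductivity
q_samples = rng.normal(loc=100.0, scale=5.0, size=10_000)     # load

# Forward propagation by plain Monte Carlo sampling.
y = f(k_samples, q_samples)
print(f"mean = {y.mean():.3f}, std = {y.std():.3f}, "
      f"95% interval = [{np.quantile(y, 0.025):.3f}, {np.quantile(y, 0.975):.3f}]")
```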
Surrogate models:
We are developing tensor-network surrogates to address the challenges posed by sparse training data, nonlinear models, and high dimensionality; a rough sketch of the underlying idea follows.
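The Python sketch below illustrates the kind of low-rank separated (tensor-format) surrogate this refers to: a rank-R canonical-format expansion fitted to scattered training data by alternating least squares. The rank, basis size, and placeholder model are illustrative assumptions, not our production algorithm.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

# Hypothetical setup: sparse training data for a 5-dimensional model.
D, R, P, N = 5, 2, 4, 60          # dimensions, separation rank, basis size, samples
X = rng.uniform(-1.0, 1.0, size=(N, D))
y = np.exp(-np.sum(X**2, axis=1)) # placeholder "expensive model" outputs

def basis(x):
    """First P Legendre polynomials evaluated at points x: shape (N,) -> (N, P)."""
    return np.stack([legendre.legval(x, np.eye(P)[j]) for j in range(P)], axis=1)

Phi = np.stack([basis(X[:, d]) for d in range(D)])        # (D, N, P)
C = rng.normal(scale=0.1, size=(D, R, P))                 # separated coefficients

def factors(C):
    """Per-sample factor values u[d, n, r] = sum_j C[d, r, j] * Phi[d, n, j]."""
    return np.einsum('drj,dnj->dnr', C, Phi)

# Alternating least squares: sweep over dimensions, each subproblem is linear.
for sweep in range(20):
    U = factors(C)
    for d in range(D):
        W = np.prod(np.delete(U, d, axis=0), axis=0)      # product of other factors, (N, R)
        A = (W[:, :, None] * Phi[d][:, None, :]).reshape(N, R * P)
        C[d] = np.linalg.lstsq(A, y, rcond=None)[0].reshape(R, P)
        U = factors(C)

y_hat = np.sum(np.prod(factors(C), axis=0), axis=1)
print("relative training error:", np.linalg.norm(y_hat - y) / np.linalg.norm(y))
```

Because the surrogate is a sum of products of univariate expansions, its storage and evaluation cost grow linearly in the dimension, which is what makes tensor formats attractive when data are sparse and the dimension is high.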
Manifold learning:
We are adapting diffusion-map distance metrics to the training data and the underlying physics, yielding manifolds that better represent prior knowledge. We are targeting the use of these manifolds for Bayesian learning in nonlinear and chaotic dynamical systems.
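For readers unfamiliar with diffusion maps, the minimal Python sketch below computes a diffusion-map embedding from pairwise distances; the `metric` hook marks where a data- or physics-adapted distance would be plugged in. Both the hook and the noisy-circle example are illustrative assumptions, not our specific construction.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2, t=1, metric=None):
    """Minimal diffusion-map embedding of points X (shape (N, d)).

    `metric(a, b)` is an optional problem-specific distance; the default is
    Euclidean. Adapting this metric is where problem structure enters.
    """
    N = X.shape[0]
    if metric is None:
        dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    else:
        dists = np.array([[metric(a, b) for b in X] for a in X])
    K = np.exp(-dists**2 / eps)                  # Gaussian kernel
    d = K.sum(axis=1)
    # The row-normalized Markov matrix K / d shares eigenvalues with this
    # symmetric conjugate, which is cheaper and more stable to diagonalize.
    S = K / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1]
    vals, vecs = vals[idx], vecs[:, idx]
    psi = vecs / np.sqrt(d)[:, None]             # right eigenvectors of the Markov matrix
    # Skip the trivial constant eigenvector; scale by eigenvalue^t.
    return psi[:, 1:n_coords + 1] * (vals[1:n_coords + 1] ** t)

# Hypothetical example: points on a noisy circle embedded in 3-D.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 300)
X = np.stack([np.cos(theta), np.sin(theta), 0.05 * rng.normal(size=300)], axis=1)
coords = diffusion_map(X, eps=0.5)
print(coords.shape)   # (300, 2) diffusion coordinates
```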
Bayesian methods:
We are developing methods that employ global and local surrogates for use in Bayesian inference. We emphasize scalability to high-dimensional parameter spaces and fast convergence on modern computing architectures. We are also developing a unified Bayesian framework for model structural error estimation and model comparison/selection, and working on methods and software for Bayesian optimization and optimal experimental design.
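The simplest form of the surrogate-accelerated inference idea is sketched below: a cheap global surrogate replaces the expensive forward model inside a random-walk Metropolis sampler. The one-parameter problem, polynomial surrogate, and noise level are hypothetical stand-ins, not our methods or software.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical inverse problem: infer theta from noisy observations of an
# expensive model. A cheap surrogate stands in for it inside the MCMC loop.
theta_true = 0.7
data = np.sin(3.0 * theta_true) + 0.05 * rng.normal(size=20)

def g_expensive(theta):          # placeholder for a costly simulation
    return np.sin(3.0 * theta)

# Build a global polynomial surrogate from a handful of expensive runs.
t_train = np.linspace(-1.0, 1.0, 9)
coeffs = np.polyfit(t_train, g_expensive(t_train), deg=5)
g_surr = lambda theta: np.polyval(coeffs, theta)

def log_post(theta, sigma=0.05):
    if not (-1.0 <= theta <= 1.0):           # uniform prior on [-1, 1]
        return -np.inf
    resid = data - g_surr(theta)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis using only surrogate evaluations.
theta, lp = 0.0, log_post(0.0)
chain = []
for _ in range(5000):
    prop = theta + 0.1 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain[1000:])               # discard burn-in
print(f"posterior mean {chain.mean():.3f} +/- {chain.std():.3f}")
```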
Multifidelity UQ:
We are developing non-hierarchical multifidelity UQ approaches. Our methods leverage an ensemble of models of varying cost and accuracy, and produce uncertainty estimates with certifiable accuracy at a fraction of the cost of methods that rely on a single high-fidelity model.
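To make the control-variate idea behind multifidelity estimation concrete, the sketch below combines a few high-fidelity evaluations with many low-fidelity ones to estimate a mean; the two models and sample sizes are hypothetical, and our actual estimators generalize this to non-hierarchical ensembles of models.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical high- and low-fidelity models of the same quantity of interest.
def f_hi(x):   # "expensive" model
    return np.sin(x) + 0.1 * x**2

def f_lo(x):   # cheaper, biased approximation
    return np.sin(x)

def sample(n):                       # shared uncertain input
    return rng.normal(0.0, 1.0, size=n)

N, M = 100, 100_000                  # few HF runs, many cheap LF runs
x_paired = sample(N)
y_hi, y_lo = f_hi(x_paired), f_lo(x_paired)
y_lo_big = f_lo(sample(M))

# Control-variate weight estimated from the paired samples.
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)

mu_hf_only = y_hi.mean()
mu_mf = y_hi.mean() + alpha * (y_lo_big.mean() - y_lo.mean())
print(f"HF-only estimate: {mu_hf_only:.4f}   multifidelity estimate: {mu_mf:.4f}")
```

The low-fidelity model never replaces the high-fidelity one; it only reduces the variance of the estimator, so the accuracy guarantee of the high-fidelity statistics is preserved.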
UQ software:
Our UQ tools and methods are made available to the broader scientific user community through two UQ software products, UQTk and Dakota. Our UQTk plans involve incorporating recent algorithmic developments, and coupling with Dakota and the MIT UQ library (MUQ). For Dakota, we target: a) deployment of new inference capabilities from MUQ; b) exploitation of dimension reduction within multifidelity methods; and c) deployment of goal-oriented multifidelity methods that support a range of optimization under uncertainty targets.