Overview
Neural inferential methods have marked practical appeal, as their implementation is only loosely connected to the statistical or physical model being considered. The workflow when using the package `NeuralEstimators` is as follows (a minimal end-to-end sketch of these steps is given after the list):
- Sample parameters from the prior, $\pi(\boldsymbol{\theta})$, to form training/validation/test parameter sets. Alternatively, define a function to sample parameters dynamically during training. Parameters are stored as $d \times K$ matrices, with $d$ the dimensionality of the parameter vector and $K$ the number of parameter vectors in the given parameter set.
- Simulate data from the model conditional on the above parameter sets, to form training/validation/test data sets. Alternatively, define a function to simulate data dynamically during training. Data are stored as objects of type `Vector{A}`, where each element of the vector is associated with one parameter vector, and the subtype `A` depends on the multivariate structure of the data (e.g., a `Matrix` for unstructured multivariate data, a multidimensional `Array` for gridded data, or a `GNNGraph` for graphical or irregular spatial data).
- If constructing a neural posterior estimator, choose an approximate posterior distribution $q(\boldsymbol{\theta}; \boldsymbol{\kappa})$.
- Design and initialise a suitable neural network. The architecture class (e.g., MLP, CNN, GNN) should align with the multivariate structure of the data (e.g., unstructured, grid, graph). The specific input and output spaces depend on the chosen inferential method:
  - For neural Bayes estimators, the neural network is a mapping $\mathcal{Z}\to\Theta$, where $\mathcal{Z}$ denotes the sample space and $\Theta$ denotes the parameter space.
  - For neural posterior estimators, the neural network is a mapping $\mathcal{Z}\to\mathcal{K}$, where $\mathcal{K}$ denotes the space of the approximate-distribution parameters $\boldsymbol{\kappa}$.
  - For neural ratio estimators, the neural network is a mapping $\mathcal{Z}\times\Theta\to\mathbb{R}$.

  `DeepSet` serves as a convenient wrapper for embedding standard neural networks (e.g., MLPs, CNNs, GNNs) in a framework for making inference with an arbitrary number of independent replicates, and it comes with pre-defined methods for handling the transformations from a $K$-dimensional vector of data to a matrix output.
- Wrap the neural network (and possibly the approximate distribution) in a subtype of `NeuralEstimator` corresponding to the intended inferential method:
  - For neural Bayes estimators under general, user-defined loss functions, use `PointEstimator`;
  - For neural posterior estimators, use `PosteriorEstimator`;
  - For neural ratio estimators, use `RatioEstimator`.
- Train the `NeuralEstimator` using `train()` and the training set, monitoring performance and convergence using the validation set. For generic neural Bayes estimators, specify a loss function.
- Assess the `NeuralEstimator` using `assess()` and the test set.
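
To make these steps concrete, below is a minimal end-to-end sketch for a toy model in which each data set comprises $m$ independent replicates of a univariate Gaussian random variable with unknown mean and standard deviation. The model, the prior, the helper functions `sample_prior()` and `simulate()`, and the network widths are illustrative assumptions, not part of the package; `DeepSet`, `PointEstimator`, `train()`, and `assess()` are the package components described above.

```julia
using NeuralEstimators, Flux

d = 2    # dimension of the parameter vector θ = (μ, σ)
m = 100  # number of independent replicates in each data set

# Sample parameters from the prior, stored as a d × K matrix
# (illustratively, μ ~ N(0, 1) and σ ~ U(0, 1))
sample_prior(K) = vcat(randn(Float32, 1, K), rand(Float32, 1, K))

# Simulate data conditional on the parameters: each data set is a 1 × m
# Matrix, so each collection of data sets is a Vector{Matrix}
simulate(θ, m) = [ϑ[1] .+ ϑ[2] .* randn(Float32, 1, m) for ϑ in eachcol(θ)]

K = 10_000
θ_train, θ_val = sample_prior(K), sample_prior(K ÷ 5)
Z_train, Z_val = simulate(θ_train, m), simulate(θ_val, m)

# Design the architecture: DeepSet pairs an inner network ψ, applied to each
# replicate, with an outer network ϕ, applied to the aggregated summaries
w = 64  # width of the hidden layers (an illustrative choice)
ψ = Chain(Dense(1, w, relu), Dense(w, w, relu))
ϕ = Chain(Dense(w, w, relu), Dense(w, d))
network = DeepSet(ψ, ϕ)

# Wrap the network in the subtype matching the intended inferential method
# (a point estimator requires no approximate distribution)
θ̂ = PointEstimator(network)

# Train, monitoring performance and convergence with the validation set
θ̂ = train(θ̂, θ_train, θ_val, Z_train, Z_val)

# Assess the trained estimator using a test set
θ_test = sample_prior(1000)
Z_test = simulate(θ_test, m)
assessment = assess(θ̂, θ_test, Z_test)
```

The same pattern applies to `PosteriorEstimator` and `RatioEstimator`, with the network's input and output spaces adjusted as described above.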
Once the `NeuralEstimator` has passed these assessments and is deemed to be well calibrated, it may be used to make inference with observed data.
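For instance, continuing the sketch above, the trained point estimator can be applied directly to observed data (here `Z_observed` is a hypothetical matrix of $m$ observations, stored with the same structure used during training):

```julia
# Hypothetical observed data: one data set of m univariate replicates
Z_observed = randn(Float32, 1, m)

# Estimators are callable; applying one to a vector of data sets returns a
# d × 1 matrix of estimates, with one column per data set
θ̂([Z_observed])
```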
Next, see the Examples and, once familiar with the basic workflow, see Advanced usage for further practical considerations on how to most effectively construct neural estimators.