Approximate distributions
When constructing a PosteriorEstimator, one must specify a parametric family of probability distributions used to approximate the posterior distribution. These families of distributions are implemented as subtypes of the abstract type ApproximateDistribution.
Distributions
NeuralEstimators.ApproximateDistribution Type
ApproximateDistribution
An abstract supertype for approximate posterior distributions used in conjunction with a PosteriorEstimator.
Subtypes A <: ApproximateDistribution must implement the following methods:
_logdensity(q::A, θ::AbstractMatrix, t::AbstractMatrix)
Used during training and therefore must support automatic differentiation.
θ is a d × K matrix of parameter vectors. t is a dstar × K matrix of learned summary statistics obtained by applying the neural network in the PosteriorEstimator to a collection of K data sets.
Should return a 1 × K matrix, where each entry is the log density log q(θₖ | tₖ) for the k-th data set, evaluated at the k-th parameter vector θ[:, k].
sampleposterior(q::A, t::AbstractMatrix, N::Integer)
Used during inference and therefore does not need to be differentiable.
Should return a Vector of length K, where each element is a d × N matrix containing N samples from the approximate posterior q(θ | tₖ) for the k-th data set.
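To make this interface concrete, the following is a minimal sketch of a custom subtype: a standard Gaussian that ignores the summary statistics entirely. The type name FixedGaussian is hypothetical and the distribution is of no practical use, but the two method definitions satisfy the contract described above.

```julia
using NeuralEstimators

# Hypothetical approximate distribution: a standard Gaussian that ignores t
struct FixedGaussian <: ApproximateDistribution
    d::Int  # dimension of the parameter vector
end

# Used during training; written with plain array operations so that it
# supports automatic differentiation. Returns a 1 × K matrix of log densities.
function NeuralEstimators._logdensity(q::FixedGaussian, θ::AbstractMatrix, t::AbstractMatrix)
    -0.5 * q.d * log(2π) .- 0.5 * sum(abs2, θ; dims = 1)
end

# Used during inference; need not be differentiable. Returns a K-vector of
# d × N matrices of samples, one matrix per data set.
function NeuralEstimators.sampleposterior(q::FixedGaussian, t::AbstractMatrix, N::Integer)
    [randn(q.d, N) for _ in 1:size(t, 2)]
end
```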
NeuralEstimators.Gaussian Type
Gaussian <: ApproximateDistribution
Gaussian(d::Integer, num_summaries::Integer; kwargs...)
A Gaussian distribution for amortised inference with a PosteriorEstimator, where d is the dimension of the parameter vector and num_summaries is the dimension of the summary statistics for the data.
The density of the distribution is

q(θ | t) = (2π)^(−d/2) |Σ|^(−1/2) exp{−½ (θ − μ)′ Σ⁻¹ (θ − μ)},

where the parameters, namely the d-dimensional mean vector μ and the d × d covariance matrix Σ, are functions of the summary statistics t.
When using a Gaussian distribution as the approximate distribution of a PosteriorEstimator, the (learned) summary statistics are mapped to the distribution parameters using a multilayer perceptron (MLP) with appropriately chosen output activation functions (e.g., identity for the mean parameters, and a transformation that guarantees a valid covariance matrix for Σ).
Keyword arguments
kwargs: additional keyword arguments passed to MLP.
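As a usage sketch, the following constructs a Gaussian approximate distribution and pairs it with a summary network. The network architecture is arbitrary, and the PosteriorEstimator constructor is assumed here to accept the approximate distribution and the summary network as its two arguments.

```julia
using NeuralEstimators, Flux

d = 2      # dimension of the parameter vector
dstar = 64 # dimension of the learned summary statistics
q = Gaussian(d, dstar)

# Hypothetical summary network mapping 100-dimensional data sets to
# dstar-dimensional summary statistics
network = Chain(Dense(100 => 128, relu), Dense(128 => dstar))

estimator = PosteriorEstimator(q, network)
```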
NeuralEstimators.GaussianMixture Type
GaussianMixture <: ApproximateDistribution
GaussianMixture(d::Integer, num_summaries::Integer; num_components::Integer = 10, kwargs...)
A mixture of Gaussian distributions for amortised inference with a PosteriorEstimator, where d is the dimension of the parameter vector and num_summaries is the dimension of the summary statistics for the data.
The density of the distribution is

q(θ | t) = Σⱼ₌₁ᴶ πⱼ 𝒩(θ; μⱼ, diag(σⱼ²)),

where J is the number of components (num_components), and where the parameters, namely the mixture weights πⱼ, the mean vectors μⱼ, and the variance parameters σⱼ², j = 1, …, J, are functions of the summary statistics t.
When using a GaussianMixture as the approximate distribution of a PosteriorEstimator, the (learned) summary statistics are mapped to the mixture parameters using a multilayer perceptron (MLP) with appropriately chosen output activation functions (e.g., softmax for the mixture weights, softplus for the variance parameters).
Keyword arguments
num_components::Integer = 10: number of components in the mixture.
kwargs: additional keyword arguments passed to MLP.
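For example, a five-component mixture may be constructed as follows (a sketch; the dimensions are arbitrary):

```julia
using NeuralEstimators

d = 2      # dimension of the parameter vector
dstar = 64 # dimension of the summary statistics
q = GaussianMixture(d, dstar; num_components = 5)
```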
NeuralEstimators.NormalisingFlow Type
NormalisingFlow <: ApproximateDistribution
NormalisingFlow(d::Integer, num_summaries::Integer; num_coupling_layers = 6, backend = nothing, kwargs...)
A normalising flow for amortised posterior inference (e.g., Ardizzone et al., 2019; Radev et al., 2022), where d is the dimension of the parameter vector and num_summaries is the dimension of the summary statistics for the data.
Normalising flows are diffeomorphisms (i.e., invertible, differentiable transformations with differentiable inverses) that map a simple base distribution (e.g., standard Gaussian) to a more complex target distribution (e.g., the posterior). They achieve this by applying a sequence of learned transformations, the forms of which are chosen to be invertible and allow for tractable density computation via the change of variables formula. This allows for efficient density evaluation during the training stage, and efficient sampling during the inference stage. For further details, see the reviews by Kobyzev et al. (2020) and Papamakarios et al. (2021).
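To make the density computation explicit: writing f(⋅; t) for the flow that maps the parameter space to the base space and p for the base density (generic symbols used here for exposition, not package identifiers), the change of variables formula gives

log q(θ | t) = log p(f(θ; t)) + log |det ∂f(θ; t)/∂θ|,

where, for coupling-based flows, the Jacobian is triangular and its determinant is therefore cheap to evaluate.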
NormalisingFlow uses affine coupling blocks (see AffineCouplingBlock), with optional activation normalisation (ActNorm; Kingma and Dhariwal, 2018) and permutations applied between each block via CouplingLayer. The base distribution is taken to be a standard multivariate Gaussian distribution.
When using a NormalisingFlow as the approximate distribution of a PosteriorEstimator, the (learned) summary statistics are used to condition the affine coupling blocks at each layer.
Note
NormalisingFlow is currently only implemented for the Flux backend.
Keyword arguments
num_coupling_layers::Integer = 6: number of coupling layers.
backend: the neural network backend to use (e.g., Flux or Lux). If nothing, the backend is inferred automatically.
kwargs: additional keyword arguments passed to CouplingLayer and AffineCouplingBlock.
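A construction sketch (the dimensions are arbitrary):

```julia
using NeuralEstimators

d = 4       # dimension of the parameter vector
dstar = 128 # dimension of the summary statistics
q = NormalisingFlow(d, dstar; num_coupling_layers = 4)
```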
Methods
NeuralEstimators.numdistributionalparams Function
numdistributionalparams(q::ApproximateDistribution)
numdistributionalparams(estimator::PosteriorEstimator)
The number of distributional parameters (i.e., the dimension of the space of parameters that index the approximate distribution).
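For instance (a sketch; the returned count depends on how each distribution is parameterised):

```julia
using NeuralEstimators

numdistributionalparams(Gaussian(2, 64))                            # mean and covariance parameters
numdistributionalparams(GaussianMixture(2, 64; num_components = 5)) # weights, means, and variances
```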
Building blocks
NeuralEstimators.CouplingLayer Type
CouplingLayer(d, num_summaries; use_act_norm = true, use_permutation = true, kwargs...)
A single coupling layer used in a NormalisingFlow, combining two AffineCouplingBlocks with optional activation normalisation and permutation.
The layer splits its d-dimensional input into two halves of dimensions d₁ = ⌊d/2⌋ and d₂ = ⌈d/2⌉, passes them through a two-in-one affine coupling block (so that all components are transformed in a single forward pass), and optionally applies activation normalisation (ActNorm) and a random Permutation to decorrelate the inputs across layers.
The argument num_summaries is the dimension of the conditioning summary statistics (see PosteriorEstimator), and kwargs are passed to AffineCouplingBlock.
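A construction sketch using the signature above (assuming CouplingLayer is accessible from the package namespace; with d = 5, the input is split into sub-vectors of dimension 2 and 3):

```julia
using NeuralEstimators

d = 5      # input dimension; split into halves of dimension 2 and 3
dstar = 32 # dimension of the conditioning summary statistics
layer = CouplingLayer(d, dstar)

# The optional activation normalisation and permutation can be disabled:
plain = CouplingLayer(d, dstar; use_act_norm = false, use_permutation = false)
```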
NeuralEstimators.AffineCouplingBlock Type
AffineCouplingBlock(κ₁::MLP, κ₂::MLP)
AffineCouplingBlock(d₁::Integer, num_summaries::Integer, d₂; kwargs...)
An affine coupling block used in a NormalisingFlow.
An affine coupling block splits its input θ into two sub-vectors, θ₁ and θ₂, of dimensions d₁ and d₂, and transforms each by an elementwise affine map,

θ̃₁ = θ₁ ⊙ exp(s₁) + b₁,  θ̃₂ = θ₂ ⊙ exp(s₂) + b₂,

where ⊙ denotes elementwise multiplication, and where the scale and shift pairs (s₁, b₁) and (s₂, b₂) are computed by the neural networks κ₁ and κ₂ from the opposite sub-vector together with the conditioning summary statistics t (see PosteriorEstimator). Because each sub-vector enters the other's transformation only through κ₁ and κ₂, the map is invertible and its Jacobian is triangular, so the log determinant is cheap to compute.
To prevent numerical overflows and stabilise the training of the model, the scaling factors s₁ and s₂ are smoothly clamped before exponentiation, for example via

s ↦ (2c/π) arctan(s/c),

where c > 0 is a fixed clamping threshold that bounds the magnitude of the clamped value.
Additional keyword arguments kwargs are passed to the MLP constructor when creating κ₁ and κ₂.
NeuralEstimators.ActNorm Type
ActNorm(scale, bias)
ActNorm(d::Integer)
Activation normalisation layer (Kingma and Dhariwal, 2018) for an input of dimension d, applying a learnable elementwise affine transformation, x ↦ scale ⊙ x + bias.
NeuralEstimators.Permutation Type
Permutation(in::Integer)
A layer that permutes the inputs (of dimension in) entering a coupling block.
Variables need to be permuted between coupling blocks in order for all input components to (eventually) be transformed. Note also that permutations are always invertible with absolute Jacobian determinant equal to 1.