API Reference

MarginalLogDensities.Cubature — Type
Cubature([; solver=LBFGS(), adtype=AutoForwardDiff(), upper=nothing, lower=nothing, nσ=6, opt_func_kwargs...])

Construct a Cubature marginalizer to integrate out marginal variables via numerical integration (a.k.a. cubature).

If explicit upper and lower integration bounds are not supplied, this marginalizer attempts to choose good ones automatically: it first optimizes the marginal variables, performs a Laplace approximation at their mode, and then extends nσ standard deviations to either side, assuming approximate normality (see the sketch following the argument list).

The integration is performed using hcubature from Cubature.jl.

Arguments

  • solver=LBFGS() : Algorithm to use when performing the inner optimization to find the mode of the marginalized variables. Can be any algorithm defined in Optim.jl.

  • adtype=AutoForwardDiff() : Automatic differentiation type to use for the inner optimization. AutoForwardDiff() is robust and fast for small problems; for larger ones, AutoReverseDiff() or AutoZygote() are likely better.

  • upper, lower : Optional upper and lower bounds for the numerical integration. If supplied, they must be numeric vectors the same length as the marginal variables.

  • nσ=6.0 : If upper and lower are not supplied, integrate this many standard deviations away from the mode, based on a Laplace approximation to the curvature at that point.

  • opt_func_kwargs : Optional keyword arguments passed on to Optimization.OptimizationFunction.
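
As a minimal construction sketch (the two-variable dimension and the bound values here are illustrative assumptions, not package defaults):

julia> using MarginalLogDensities

julia> marg = Cubature(; lower=[-10.0, -10.0], upper=[10.0, 10.0])  # explicit bounds: no Laplace-based bound selection needed

julia> marg_auto = Cubature(; nσ=8)  # automatic bounds, widened to 8 standard deviations around the mode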

MarginalLogDensities.LaplaceApprox — Type
LaplaceApprox([solver=LBFGS() [; adtype=AutoForwardDiff(), opt_func_kwargs...]])

Construct a LaplaceApprox marginalizer to integrate out marginal variables via the Laplace approximation. This method will usually be faster than Cubature, especially in high dimensions, though it may not be as accurate.

Arguments

  • solver=LBFGS() : Algorithm to use when performing the inner optimization to find the mode of the marginalized variables. Can be any algorithm defined in Optim.jl.

  • adtype=AutoForwardDiff() : Automatic differentiation type to use for the inner optimization, specified via the ADTypes.jl interface. AutoForwardDiff() is robust and fast for small problems; for larger ones, AutoReverseDiff() or AutoZygote() are likely better. (A configuration sketch follows this list.)

  • opt_func_kwargs : Optional keyword arguments passed on to Optimization.OptimizationFunction.
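
A construction sketch, with Newton() from Optim.jl as an assumed alternative inner solver:

julia> using MarginalLogDensities, Optim, ADTypes

julia> method = LaplaceApprox()  # default settings: LBFGS with forward-mode AD

julia> method_newton = LaplaceApprox(Newton(); adtype=AutoForwardDiff())  # second-order inner solver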

MarginalLogDensities.MarginalLogDensity — Type
MarginalLogDensity(logdensity, u, iw, data, [method=LaplaceApprox(); 
[hess_adtype=nothing, sparsity_detector=DenseSparsityDetector(method.adtype, atol=cbrt(eps())),
coloring_algorithm=GreedyColoringAlgorithm()]])

Construct a callable object which wraps the function logdensity and integrates over a subset of its arguments.

The resulting MarginalLogDensity object mld can then be called like a function as mld(v, data), where v is the subset of the full parameter vector u which is not indexed by iw. If length(u) == n and length(iw) == m, then length(v) == n-m.

Arguments

  • logdensity : Function with signature (u, data), returning a log-probability (e.g. a log-pdf, log-likelihood, or log-posterior), not its negative. In this function, u is a vector of variable parameters and data is an object (Array, NamedTuple, or whatever) that contains data and/or fixed parameters.

  • u : Vector of initial values for the parameter vector.
  • iw : Vector of indices indicating which elements of u should be marginalized.
  • data=() : Optional object containing data and/or fixed parameters, passed through to logdensity.
  • method : How to perform the marginalization. Defaults to LaplaceApprox(); Cubature() is also available.

  • hess_adtype = nothing : Specifies how to calculate the Hessian of the marginalized variables. If not specified, defaults to a sparse second-order method using forward AD over the AD type given in the method (AutoForwardDiff() is the default). Other backends can be set by loading the appropriate AD package and using the ADTypes.jl interface.

  • sparsity_detector = DenseSparsityDetector(method.adtype, atol=cbrt(eps())) : How to perform the sparsity detection. Detecting sparsity takes some time and may not pay off for small problems, but for larger ones the savings can be substantial. The default DenseSparsityDetector is the most robust choice, but if it is too slow, or if you are running out of memory on a large problem, try the tracing-based detectors from SparseConnectivityTracer.jl (see the sketch after this list).

  • coloring_algorithm = GreedyColoringAlgorithm() : How to determine the matrix "colors" used to compress the sparse Hessian.
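
A sketch of overriding the sparsity detection, using TracerSparsityDetector from SparseConnectivityTracer.jl; the toy density and problem dimensions are assumptions for illustration:

julia> using MarginalLogDensities, SparseConnectivityTracer

julia> logdens(u, data) = -sum(abs2, u) / 2  # toy separable density: independent standard normals, up to a constant

julia> u0 = zeros(100); iw = collect(1:90)  # marginalize the first 90 of 100 components

julia> mld = MarginalLogDensity(logdens, u0, iw, (), LaplaceApprox();
                                sparsity_detector=TracerSparsityDetector())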

Examples

julia> using MarginalLogDensities, Distributions, LinearAlgebra

julia> N = 4

julia> dist = MvNormal(zeros(N), I)  # N-dimensional standard normal

julia> data = (N=N, dist=dist)

julia> function logdensity(u, data) # arbitrary simple density function
           return logpdf(data.dist, u)
       end

julia> u0 = rand(N)

julia> mld = MarginalLogDensity(logdensity, u0, [1, 3], data)

julia> mld(rand(2), data)
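
Because the integrand here is Gaussian, the default Laplace approximation is exact: marginalizing components 1 and 3 of a standard normal leaves a standard bivariate normal over the remaining two components, which gives a direct check (up to optimizer tolerance):

julia> v = rand(2)

julia> mld(v, data) ≈ logpdf(MvNormal(zeros(2), I), v)  # expect true: Laplace is exact for Gaussians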
MarginalLogDensities.cached_params — Method

Get the value of the cached parameter vector u. This includes the latest values given for the non-marginalized variables v, as well as the modal values of the marginalized variables w conditional on v.
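A usage sketch, assuming the method is called on the MarginalLogDensity object itself (continuing the example above):

julia> mld(rand(2), data)  # evaluating the object updates its internal cache

julia> u_cached = cached_params(mld)  # full-length u: latest v merged with the modal w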

MarginalLogDensities.split_parameters — Method

Split the vector of all parameters u into its estimated (fixed) components v and marginalized (random) components w, based on their indices iv and iw.
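
A sketch of the call pattern implied by the description; the argument order and return order are assumptions:

julia> u = collect(1.0:4.0); iv = [2, 4]; iw = [1, 3]

julia> v, w = split_parameters(u, iv, iw)  # assumed to give v == u[iv], w == u[iw]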
