Nested sampling algorithm

From Wikipedia, the free encyclopedia

The nested sampling algorithm is a computational approach to the Bayesian statistics problems of comparing models and generating samples from posterior distributions. It was developed in 2004 by physicist John Skilling.[1]

Background


Bayes' theorem can be used for model selection, where one has a pair of competing models M1 and M2 for data D, one of which may be true (though which one is unknown) but both of which cannot be true simultaneously. Bayesian model selection provides a method for assessing the Bayes factor, which gives the relative merit of each model.

The posterior probability for M1 may be calculated as:

P(M_1 \mid D) = \frac{P(D \mid M_1)\,P(M_1)}{P(D)} = \frac{P(D \mid M_1)\,P(M_1)}{P(D \mid M_1)\,P(M_1) + P(D \mid M_2)\,P(M_2)} = \frac{1}{1 + \frac{P(D \mid M_2)}{P(D \mid M_1)}\,\frac{P(M_2)}{P(M_1)}}
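For concreteness, the chain of equalities above can be checked numerically; the evidence values and model priors below are made-up numbers for illustration, not from the source:

```python
# Hypothetical marginal likelihoods (evidences) and model priors.
Z1, Z2 = 2.0e-5, 0.5e-5   # P(D|M1), P(D|M2)
pM1, pM2 = 0.5, 0.5       # P(M1), P(M2)

bayes_factor = Z2 / Z1    # P(D|M2) / P(D|M1)

# The last form of the identity above:
post_M1 = 1.0 / (1.0 + bayes_factor * pM2 / pM1)

# It agrees with the expanded form P(D|M1)P(M1) / [P(D|M1)P(M1) + P(D|M2)P(M2)]:
post_M1_expanded = Z1 * pM1 / (Z1 * pM1 + Z2 * pM2)
```

With these numbers both forms give P(M1|D) = 0.8: only the ratio of the evidences matters, not their absolute scale.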

The prior probabilities P(M1) and P(M2) are already known, as they are chosen by the researcher ahead of time. However, the remaining Bayes factor P(D|M2)/P(D|M1) is not so easy to evaluate, since in general it requires marginalizing over nuisance parameters. Generally, M1 has a set of parameters that can be grouped together and called θ, and M2 has its own parameter vector, possibly of different dimensionality but still termed θ. The marginalization for M1 is

P(D \mid M_1) = \int P(D \mid \theta, M_1)\,P(\theta \mid M_1)\,d\theta

and likewise for M2. This integral is often analytically intractable, and in these cases it is necessary to employ a numerical algorithm to find an approximation. The nested sampling algorithm was developed by John Skilling specifically to approximate these marginalization integrals, and it has the added benefit of generating samples from the posterior distribution P(θ|D,M1).[2] It is an alternative to methods from the Bayesian literature[3] such as bridge sampling and defensive importance sampling.
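When the model is simple enough, this marginalization can be approximated directly by quadrature. The one-dimensional model below (a single datum with a unit-variance Gaussian likelihood and a uniform prior on [-5, 5]) is an illustrative assumption, not from the source, but it gives a known answer of Z ≈ 0.1 to compare sampling-based estimates against:

```python
import math

# Illustrative 1-D model: datum d = 0, likelihood N(d; theta, 1),
# uniform prior on [-5, 5] (density 1/10).
d = 0.0

def likelihood(theta):
    return math.exp(-0.5 * (d - theta) ** 2) / math.sqrt(2 * math.pi)

def prior(theta):
    return 0.1  # uniform density on [-5, 5]

# Trapezoidal rule for Z = ∫ P(D|θ, M) P(θ|M) dθ over the prior support.
n, lo, hi = 10001, -5.0, 5.0
h = (hi - lo) / (n - 1)
Z = 0.0
for k in range(n):
    theta = lo + k * h
    w = 0.5 if k in (0, n - 1) else 1.0  # half-weight at the endpoints
    Z += w * likelihood(theta) * prior(theta)
Z *= h
# Nearly all of the Gaussian's mass lies inside the prior support, so Z ≈ 0.1.
```

This brute-force grid approach works in one dimension but scales exponentially with the number of parameters, which is exactly why algorithms like nested sampling are needed.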

Here is a simple version of the nested sampling algorithm, followed by a description of how it computes the marginal probability density Z = P(D|M), where M is M1 or M2:

Start with N points θ1, …, θN sampled from the prior.
for i = 1 to j do        % The number of iterations j is chosen by guesswork.
    Li := min(current likelihood values of the points);
    Xi := exp(-i/N);
    wi := X(i-1) - Xi;        % prior-mass width of the i-th shell, with X0 = 1
    Z := Z + Li*wi;
    Save the point with least likelihood as a sample point with weight wi.
    Update the point with least likelihood by sampling from the prior restricted to likelihoods above Li, for example with Markov chain Monte Carlo.
end
return Z;

At each iteration, Xi is an estimate of the amount of prior mass covered by the hypervolume in parameter space of all points with likelihood greater than Li. The weight factor wi is an estimate of the amount of prior mass that lies between the two nested hypersurfaces {θ : P(D|θ, M) = L(i-1)} and {θ : P(D|θ, M) = Li}. The update step Z := Z + Li*wi sums Li*wi over i to numerically approximate the integral

P(D \mid M) = \int P(D \mid \theta, M)\,P(\theta \mid M)\,d\theta = \int P(D \mid \theta, M)\,dP(\theta \mid M)

In the limit j → ∞, this estimator has a positive bias of order 1/N,[4] which can be removed by using (1 − 1/N) in place of the exp(−1/N) shrinkage factor in the above algorithm.
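The size of that bias can be illustrated numerically; the quick comparison below is my own check of the two shrinkage factors, not from the source:

```python
import math

N = 100  # number of live points

# exp(-i/N) always exceeds (1 - 1/N)**i, so the exponential shrinkage
# assigns slightly too much prior mass to every shell, inflating Z.
for i in (1, 10, 100, 1000):
    assert math.exp(-i / N) > (1.0 - 1.0 / N) ** i

# After N iterations the two prior-mass estimates differ by a
# factor of approximately 1 + 1/(2N).
ratio = math.exp(-1.0) / (1.0 - 1.0 / N) ** N
```

The per-shell discrepancy is tiny, but it is systematic, which is why it survives in the limit of many iterations as an order-1/N bias rather than averaging out.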

The idea is to subdivide the range of f(θ) = P(D|θ, M) and estimate, for each interval [f(θ(i-1)), f(θi)], how likely it is a priori that a randomly chosen θ would map to this interval. This can be thought of as a Bayesian's way to numerically implement Lebesgue integration.[5][6]
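The pseudocode above can be turned into a short runnable sketch. The toy model (uniform prior on [-5, 5], Gaussian likelihood around a single datum, for which the evidence is analytically Z ≈ 0.1), the iteration count, and the use of plain rejection sampling for the replacement step are all illustrative choices, not prescriptions from the source:

```python
import math
import random

random.seed(0)

# Toy model (illustrative): datum d = 0, likelihood N(d; theta, 1),
# uniform prior on [-5, 5].  Analytically, Z ≈ 0.1 for this setup.
d = 0.0

def loglike(theta):
    return -0.5 * (d - theta) ** 2 - 0.5 * math.log(2 * math.pi)

def sample_prior():
    return random.uniform(-5.0, 5.0)

N = 100   # number of live points
J = 600   # number of iterations, chosen by guesswork as in the pseudocode

live = [sample_prior() for _ in range(N)]
logls = [loglike(t) for t in live]

Z, X_prev = 0.0, 1.0
for i in range(1, J + 1):
    worst = min(range(N), key=lambda k: logls[k])
    L_i = math.exp(logls[worst])
    X_i = math.exp(-i / N)        # estimated remaining prior mass
    Z += L_i * (X_prev - X_i)     # accumulate L_i * w_i
    X_prev = X_i
    # Replace the worst point with a prior draw of higher likelihood.
    # Plain rejection from the prior is used here for simplicity; real
    # implementations use MCMC or region-based sampling instead.
    while True:
        t = sample_prior()
        if loglike(t) > logls[worst]:
            live[worst], logls[worst] = t, loglike(t)
            break

# Account for the prior mass still enclosed by the final live points.
Z += X_prev * sum(math.exp(l) for l in logls) / N
```

The estimate carries a statistical scatter of roughly sqrt(H/N) in log Z (where H is the information gained from prior to posterior), so with N = 100 live points the result lands within a few percent of 0.1 on a typical run.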

Likelihood-restricted prior sampling algorithms


The point with least likelihood can be updated with some Markov chain Monte Carlo steps according to the prior, accepting only steps that keep the likelihood above Li. The original procedure outlined by Skilling (given above in pseudocode) does not specify what specific algorithm should be used to choose new points with better likelihood, but several algorithms have been developed.[7]

Skilling's own code examples (such as one in Sivia and Skilling (2006),[8] available on Skilling's website) choose a random existing point and select a nearby point at a random distance from it; if the new point's likelihood is better, it is accepted, otherwise it is rejected and the process is repeated. Subsequently, a variety of MCMC algorithms tailored for nested sampling have been developed, including slice sampling,[5] popularized by PolyChord, and constrained Hamiltonian Monte Carlo.[9]

An alternative line of algorithms is based on rejection sampling. Mukherjee et al. (2006)[10] found higher acceptance rates by selecting points randomly within an ellipsoid drawn around the existing points; this idea was refined by the MultiNest algorithm[11] which handles multimodal posteriors better using multiple ellipsoids built from clustering of the live points. Rejection methods can be efficient up to 20-30 dimensions.[7]
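A single-ellipsoid version of this idea can be sketched as follows. This simplified sketch is mine, not MultiNest's actual procedure (MultiNest clusters the live points and fits multiple ellipsoids); it fits one covariance-based ellipsoid around the live points, enlarges it, and draws uniformly from its interior:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_in_ellipsoid(live_points, enlarge=1.5):
    """Draw one point uniformly from an ellipsoid fitted to the live points.

    Minimal single-ellipsoid sketch of rejection-style proposal generation;
    the caller still rejects draws whose likelihood is below the current
    threshold L_i.
    """
    pts = np.asarray(live_points)            # shape (n, d)
    d = pts.shape[1]
    mean = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    L = np.linalg.cholesky(cov)
    # Whiten the live points and find the most distant one, so the
    # ellipsoid covers them all before enlargement.
    y = np.linalg.solve(L, (pts - mean).T)   # shape (d, n)
    r_max = np.sqrt((y ** 2).sum(axis=0)).max()
    # Uniform draw from the unit d-ball: random direction, radius ~ u^(1/d).
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    r = rng.uniform() ** (1.0 / d)
    return mean + enlarge * r_max * (L @ (r * u))
```

The enlargement factor hedges against the ellipsoid underestimating the true iso-likelihood region; too small a factor biases the evidence, while too large a factor only costs extra rejected draws.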

Implementations


Example implementations demonstrating the nested sampling algorithm are publicly available for download, written in several programming languages.

  • Simple examples in C, R, or Python are on John Skilling's website.
  • A Haskell port of the above simple codes is on Hackage.
  • An example in R originally designed for fitting spectra is described on Bojan Nikolic's website and is available on GitHub.
  • A NestedSampler is part of the Python toolbox BayesicFitting[12] for generic model fitting and evidence calculation. It is available on GitHub.
  • An implementation in C++, named DIAMONDS, is on GitHub.
  • A highly modular, parallel Python example for statistical physics and condensed matter physics applications is on GitHub.
  • pymatnest is a package designed for exploring the energy landscape of different materials, calculating thermodynamic variables at arbitrary temperatures, and locating phase transitions; it is available on GitHub.
  • The MultiNest software package is capable of performing nested sampling on multi-modal posterior distributions.[11][13] It has interfaces for C++, Fortran and Python inputs, and is available on GitHub.
  • PolyChord is another nested sampling software package available on GitHub. PolyChord's computational efficiency scales better with an increase in the number of parameters than MultiNest, meaning PolyChord can be more efficient for high dimensional problems.[14] It has interfaces to likelihood functions written in Python, Fortran, C, or C++. PolyChord can be used jointly with Cobaya,[15] a Python-based code for Bayesian analysis of hierarchical physical models. Cobaya facilitates exploration of posteriors using various Monte Carlo samplers, allows for maximization and importance-reweighting of samples, and includes interfaces to cosmological theory codes and likelihoods.
  • NestedSamplers.jl, a Julia package for implementing single- and multi-ellipsoidal nested sampling algorithms is on GitHub.
  • Korali is a high-performance framework for uncertainty quantification, optimization, and deep reinforcement learning, which also implements nested sampling.
  • The UltraNest software package implements a fast, MPI-capable, generalized multi-ellipsoidal dynamic nested sampling algorithm; the user can also choose slice sampling algorithms. Written in Python, it has interfaces for Python, C, Fortran, R, and Julia, and is available on GitHub.

Applications


Since nested sampling was proposed in 2004, it has been used in many areas of science,[6] particularly in astronomy. One paper suggested using nested sampling for cosmological model selection and object detection, as it "uniquely combines accuracy, general applicability and computational feasibility."[10] A refinement of the algorithm to handle multimodal posteriors has been suggested as a means to detect astronomical objects in extant datasets.[13] Nested sampling has also been applied to finite element model updating, where the algorithm is used to select an optimal finite element model; this approach has been applied to structural dynamics.[16] The method has likewise been used in materials modeling, where it can be used to learn the partition function from statistical mechanics and derive thermodynamic properties.[17]

Diagnostics


Dedicated diagnostics have been developed for verifying that a nested sampling run is performing well. These include a U test checking that the rank of the replacement point's likelihood is uniformly distributed among the live points,[18][7] the Markov chain Monte Carlo jump distance,[19] and comparison of several independent nested sampling runs for consistency,[20][21] including reruns with an increased number of MCMC steps. The computation can also be checked with generically applicable techniques such as simulation-based calibration.[22]

Dynamic nested sampling


Dynamic nested sampling is a generalisation of the nested sampling algorithm in which the number of samples taken in different regions of the parameter space is dynamically adjusted to maximise calculation accuracy.[23] This can lead to improvements in accuracy and computational efficiency when compared to the original nested sampling algorithm, in which the allocation of samples cannot be changed and often many samples are taken in regions which have little effect on calculation accuracy.

Publicly available dynamic nested sampling software packages include:

  • dynesty – a Python implementation of dynamic nested sampling which can be downloaded from GitHub.[24]
  • dyPolyChord: a software package which can be used with Python, C++ and Fortran likelihood and prior distributions.[25] dyPolyChord is available on GitHub.
  • UltraNest (see above).

Dynamic nested sampling has been applied to a variety of scientific problems, including analysis of gravitational waves,[26] mapping distances in space[27] and exoplanet detection.[28]

References
