polychord sampler

PolyChord is an advanced nested sampler that uses slice sampling to sample within the nested iso-likelihood contours. The use of slice sampling instead of rejection sampling makes PolyChord especially well suited for high-dimensional parameter spaces, and allows it to exploit speed hierarchies in the parameter space. PolyChord can also explore multi-modal distributions efficiently.

PolyChord is an external sampler, not installed by default (Cobaya includes only a wrapper for it). You need to install it yourself, following the installation instructions below.


To use PolyChord, you just need to mention it in the sampler block:

    sampler:
      polychord:
        # polychord options ...

Just copy the options that you wish to modify from the defaults file below:

# Default arguments for the PolyChord sampler

# Path to the PolyChord installation folder, if not installed with cobaya-install,
# or 'global' to force global-scope import
path:  # use only if installed manually!
# Optional parameters for the sampler:
# ('Xd' means 'X steps per dimension')
# -------------------------------------------------
# Number of live points
nlive: 25d
# Number of slice-sampling steps per new sample.
# Helps with curved degeneracies and thin "corners".
# Recommended: '5d' if accurate evidences are needed or the posterior has unusual shapes.
# You may want to use just 'd' for exploratory runs.
num_repeats: 2d
# Number of prior samples drawn before starting compression
# Can be in units of nlive (but not dimension) as Xnlive
nprior: 10nlive
# Number of failed spawns before stopping nested sampling.
nfail: nlive
# Whether to check for and explore multi-modality on the posterior
do_clustering: True
# Stopping criterion: fraction of the total evidence contained in the live points
precision_criterion: 0.001
# Stopping criterion (alt): maximum number of nested sampling iterations
max_ndead: .inf
# How often to print progress, update output files and do clustering
# -- increase for more frequent updates (1 gives an update and callback per dead point)
compression_factor: 0.36787944117144233  # = exp(-1)
# Callback function -- see documentation
# Numerical value of log(0) sent to PolyChord's Fortran code
# If null: `numpy.nan_to_num(-numpy.inf)`
# NB: smaller values tend to produce errors when using `ifort`
logzero: -1e30
# Increase number of posterior samples
boost_posterior: 0  # increase up to `num_repeats`
# Verbosity during the sampling process. Set to one of [0,1,2,3]
feedback:  # default: Same as global `verbosity`
# Parallelise with synchronous workers, rather than asynchronous ones.
# This can be set to False if the likelihood speed is known to be
# approximately constant across the parameter space. Synchronous
# parallelisation is less effective than asynchronous by a factor ~O(1)
# for large parallelisation.
synchronous: True
# Variable number of live points option. This dictionary is a mapping
# between loglike contours and nlive.
# You should still set nlive to be a sensible number, as this indicates
# how often to update the clustering, and to define the default value.
nlives: {}
# Perform maximisation at the end of the run to find the maximum
# likelihood point and value
maximise: False
# Exploiting speed hierarchy
# --------------------------
# whether to measure actual speeds for your machine/threading at starting rather
# than using stored values
measure_speeds: True
# Amount of oversampling of each parameter block, relative to their speeds
# Value from 0 (no oversampling) to 1 (spend the same amount of time in all blocks)
# Can be larger than 1 if extra oversampling of fast blocks required.
oversample_power: 0.4
# Manual speed blocking
# ---------------------
# To take full advantage of speed-blocking, sort the blocks by ascending speeds
# (unless not possible due to the architecture of your likelihood)
#  - [speed_1, [params_1]]
#  - etc.
# Treatment of unbounded parameters: confidence level to use
# ----------------------------------------------------------
# (Use with care if there are likelihood modes close to the edge of the prior)
confidence_for_unbounded: 0.9999995  # 5 sigmas of the prior
# Seeding runs
# ------------
seed:  # positive integer
# Raw output of PolyChord (no need to change them, normally)
# ----------------------------------------------------------
posteriors: True
equals: True
cluster_posteriors: True
write_resume: True
read_resume: True
write_stats: True
write_live: True
write_dead: True
write_prior: True


If you want to sample with PolyChord, your priors need to be bounded. This is because PolyChord samples uniformly from a bounded hypercube, defined by a non-trivial transformation for general unbounded priors.

The option confidence_for_unbounded will automatically bound the priors at the 5-sigma c.l., but this may cause problems with likelihood modes at the edge of the prior. In those cases, check for stability with respect to increasing that parameter. Of course, if confidence_for_unbounded is much smaller than unity, the resulting evidence may be biased towards a lower value.
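For instance, a bounded prior can be specified explicitly, so that no automatic binding is needed (a minimal sketch; the parameter names and bounds below are made up):

    params:
      a:
        prior:
          min: -5
          max: 5
      b:
        prior:  # unbounded: would be cut at confidence_for_unbounded
          dist: norm
          loc: 0
          scale: 1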

The main output is the Monte Carlo sample of sequentially discarded live points, saved in the standard sample format together with the input.yaml and full.yaml files (see Output). The raw PolyChord products are saved in a subfolder of the output folder (determined by the option base_dir – default: [your_output_prefix]_polychord_raw). Since PolyChord is a nested sampler and integrator, the log-evidence and its standard deviation are also returned.

If the posterior was found to be multi-modal, the output will include separate samples and evidences for each of the modes.

Computing Bayes ratios

If you are using only internal priors, your full prior is correctly normalized, so you can directly use the output evidence of PolyChord, e.g. in a Bayes ratio.

If you are using external priors (as described here), the full prior is not normalized by default, so the resulting evidence, \(\mathcal{Z}_\mathrm{u}\), needs to be divided by the prior volume. To compute the prior volume \(\mathcal{Z}_\pi\), substitute your likelihoods by the unit likelihood one. The normalized evidence and its propagated error are in this case:

\[\begin{split}\log\mathcal{Z} &= \log\mathcal{Z}_\mathrm{u} - \log\mathcal{Z}_\pi\\ \sigma(\log\mathcal{Z}) &= \sigma(\log\mathcal{Z}_\mathrm{u}) + \sigma(\log\mathcal{Z}_\pi)\end{split}\]
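As a concrete sketch of combining the two runs (the numerical values below are made up for illustration):

```python
# Sketch: normalizing an evidence computed with an unnormalized external prior.
# The numbers are placeholders for illustration only.
logZ_u, sigma_u = -15.3, 0.2     # log-evidence of the run with your likelihoods
logZ_pi, sigma_pi = -4.1, 0.05   # log prior volume: same run with the unit likelihood

logZ = logZ_u - logZ_pi          # normalized log-evidence
sigma = sigma_u + sigma_pi       # conservative propagated error

print("logZ = %g +/- %g" % (logZ, sigma))
```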


If any of the priors specified in the prior block or any of the likelihoods has large unphysical regions, i.e. regions of null prior or likelihood, you may want to increase the nprior parameter of PolyChord to a higher multiple of nlive (default 10nlive), depending on the complexity of the unphysical region.

Why? The unphysical fraction of the parameter space, which is automatically subtracted from the raw PolyChord result, is estimated from a prior sample of size nprior, so the larger that sample, the smaller the bias introduced by its misestimation.

Increasing nprior will make your run slower to initialize, but will not significantly affect the total duration.
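For example, nprior can be raised in the sampler block (the value below is illustrative; tune it to the complexity of your unphysical region):

    sampler:
      polychord:
        nprior: 50nlive  # default: 10nlive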

Notice that this behaviour differs from the stock versions of popular nested samplers (MultiNest and PolyChord), which simply ignore unphysical points. We have chosen to take them into account to keep the value of the prior density meaningful: if unphysical points were simply ignored, doubling the prior size (and thus halving its density) beyond the physical limits would have no effect on the evidence.

Taking advantage of a speed hierarchy

Cobaya’s PolyChord wrapper automatically sorts parameters optimally, and chooses the number of repeats per likelihood according to the value of the oversample_power property. You can also specify the blocking and oversampling by hand using the blocking option. For more thorough documentation of these options, check the corresponding section in the MCMC sampler page (just ignore the references to drag, which do not apply here).
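A manual blocking specification might look like this (a sketch only: the parameter names and speeds, in evaluations per second, are made up):

    sampler:
      polychord:
        oversample_power: 0.4
        blocking:
          - [1, [a_slow, b_slow]]
          - [100, [c_fast]]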

Callback functions

A callback function can be specified through the callback_function option. It must be a function of a single argument, which at runtime is the current instance of the polychord sampler. You can access its attributes and methods inside your function, including the collection of live and dead points, the current calculation of the evidence, and the model (of which prior and likelihood are attributes). For example, the following callback function would print the number of points added to the chain since the last callback, the current evidence estimate and the maximum likelihood point:

def callback(sampler):
    print("There are %d dead points. The last %d were added since the last callback." % (
        len(sampler.dead), len(sampler.dead) - sampler.last_point_callback))
    print("Current logZ = %g +/- %g" % (sampler.logZ, sampler.logZstd))
    # Maximum likelihood: since we sample over the posterior, it may be "dead"!
    min_chi2_dead = sampler.dead[sampler.dead["chi2"].values.argmin()]
    # At the end of the run, the list of live points is empty
    if len(sampler.live):
        min_chi2_live = sampler.live[sampler.live["chi2"].values.argmin()]
        min_chi2_point = (min_chi2_live if min_chi2_live["chi2"] < min_chi2_dead["chi2"]
                          else min_chi2_dead)
    else:
        min_chi2_point = min_chi2_dead
    print("The maximum likelihood (min chi^2) point reached is\n%r" % min_chi2_point)

The frequency of calls of the callback function is given by the compression_factor (see contents of polychord.yaml above).
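When running from a Python script, the callback can be passed directly as the function object in the input dict (a sketch: the likelihood and parameter below are placeholders, not from the original text):

```python
# Sketch: attaching a callback function via a Python input dict.
# The likelihood and parameter are hypothetical placeholders.

def callback(sampler):
    # Print the current evidence estimate at every callback.
    print("Current logZ = %g +/- %g" % (sampler.logZ, sampler.logZstd))

info = {
    "likelihood": {"gaussian": "lambda x: -0.5 * x ** 2"},
    "params": {"x": {"prior": {"min": -5, "max": 5}}},
    "sampler": {"polychord": {"callback_function": callback}},
}
# from cobaya.run import run; run(info)  # would call `callback` periodically
```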


Errors produced inside the callback function will be reported, but they will not stop PolyChord.


If you are getting an error whose cause is not immediately obvious, try replacing polychord with the dummy sampler evaluate.

If still in doubt, run with debug output and check what the prior and likelihood are getting and producing: either set debug to True for on-screen debug output, or to a file name to send the debug output to that file (and print only normal progress on screen). Alternatively, add a --debug flag to cobaya-run and redirect the output to a file with cobaya-run [input.yaml] --debug > file.

If everything seems to be working fine, but PolyChord is taking too long to converge, reduce the number of live points nlive to e.g. 10 per dimension, and the precision_criterion to 0.1 or so, and check that you get reasonable (but low-precision) samples and evidences.
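A quick exploratory setup along those lines could be:

    sampler:
      polychord:
        nlive: 10d
        precision_criterion: 0.1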

See also Using PolyChord in cosmological runs.


Simply run cobaya-install polychord --packages-path [/path/to/packages] (or, instead of polychord after cobaya-install, mention an input file that uses polychord).

If PolyChord has been installed this way, it is not necessary to specify a path option for it.


To run PolyChord with MPI (highly recommended!) you need to make sure that MPI+Python is working on your system, see MPI parallelization (optional but encouraged!).

In addition, you need an MPI-wrapped Fortran compiler. You should have an MPI implementation installed if you followed the instructions to install mpi4py. On top of that, you need the Fortran compiler (we recommend the GNU one) and the development package of MPI. Use your system’s package manager to install them (sudo apt install gfortran libopenmpi-dev on Ubuntu/Debian systems), or contact your local IT service. If everything is correctly installed, you should be able to type mpif90 in the shell and not get a Command not found error.


PolyChord for Mac users:

To get PolyChord working, install gcc (e.g. using Homebrew), and check that gcc-[X] works in the terminal, where [X] is the version that you have just installed.

Now install PolyChord, either by hand or using cobaya’s automatic installer, go to the newly created PolyChord folder, and compile it with

$ make pypolychord MPI= CC=gcc-[X] CXX=g++-[X]
$ python setup.py build

If you want to use PolyChord with MPI on a Mac, you need to have compiled OpenMPI with the same gcc version with which you are compiling PolyChord. To do that, prepend OpenMPI’s make command with CC=gcc-[X], where [X] is your gcc version. Then follow the instructions above to compile PolyChord, but with MPI=1 instead when you do make pypolychord.

Thanks to Guadalupe Cañas Herrera for these instructions!

PolyChord for Windows users:

PolyChord currently does not support Windows. You would have to run it in a Linux virtual machine or in a Docker container.

Manual installation of PolyChord

If you prefer to install PolyChord manually, assuming you want to install it at /path/to/polychord, simply do

$ cd /path/to/polychord
$ git clone https://github.com/PolyChord/PolyChordLite.git
$ cd PolyChordLite
$ make pypolychord MPI=1
$ python setup.py build

After this, mention the path in your input file as

    path: /path/to/polychord/PolyChordLite

PolyChord class


Interface for the PolyChord nested sampler


Will Handley, Mike Hobson and Anthony Lasenby (for PolyChord), Jesus Torrado (for the cobaya wrapper only)

class samplers.polychord.polychord(info_sampler, model, output=None, packages_path=None, name=None)

PolyChord sampler [Handley:2015fda, 2015MNRAS.453.4384H], a nested sampler tailored for high-dimensional parameter spaces with a speed hierarchy.


Imports the PolyChord sampler and prepares its arguments.

dumper(live_points, dead_points, logweights, logZ, logZstd)

Preprocesses the output for the callback function and calls it, if present.


Prepares the prior and likelihood functions, calls PolyChord’s run, and processes its output.


Loads the sample of live points from PolyChord’s raw output and writes it (if txt output requested).

samples(combined=False, skip_samples=0, to_getdist=False)

Returns the sample of the posterior built out of dead points.

  • combined (bool, default: False) – If True, returns the same single posterior for all processes. Otherwise, it is only returned for the root process (this behaviour is kept for compatibility with the equivalent function for MCMC).

  • skip_samples (int or float, default: 0) – No effect (skipping initial samples from a sorted nested sampling sample would bias it). Raises a warning if greater than 0.

  • to_getdist (bool, default: False) – If True, returns a single getdist.MCSamples instance, containing all samples, for all MPI processes (combined is ignored).


The posterior sample.

Return type:

SampleCollection, getdist.MCSamples


Returns the samples corresponding to all clusters, if doing clustering, or None otherwise.


to_getdist (bool, default: False) – If True, returns the cluster samples as getdist.MCSamples.


The cluster posterior samples.

Return type:

None, Dict[int, Union[SampleCollection, MCSamples, None]]

products(combined=False, skip_samples=0, to_getdist=False)

Returns the products of the sampling process.

  • combined (bool, default: False) – If True, returns the same single posterior for all processes. Otherwise, it is only returned for the root process (this behaviour is kept for compatibility with the equivalent function for MCMC).

  • skip_samples (int or float, default: 0) – No effect (skipping initial samples from a sorted nested sampling sample would bias it). Raises a warning if greater than 0.

  • to_getdist (bool, default: False) – If True, returns getdist.MCSamples instances for the full posterior sample and the clusters, for all MPI processes (combined is ignored).


A dictionary containing the cobaya.collection.SampleCollection of accepted steps under "sample", the log-evidence and its uncertainty under "logZ" and "logZstd" respectively, and the same for the individual clusters, if present, under the "clusters" key.

Return type:

dict, None


If either combined or to_getdist are True, the same products dict is returned for all processes. Otherwise, None is returned for processes of rank larger than 0.

classmethod output_files_regexps(output, info=None, minimal=False)

Returns a list of tuples (regexp, root) of output files potentially produced. If root in the tuple is None, output.folder is used.

If minimal=True, returns regexps for the files that should really not be there when we are not resuming.

classmethod get_version()

Get version information for this component.


string or dict of values or None