Using the model wrapper¶
In the last section we saw how to run basic samples. Before we get to creating cosmological likelihoods, let us take a look at the model wrapper. It creates a Python object that lets you interact with the different parts of your model: theory code, prior and likelihood. It can do everything the evaluate sampler does, and much more, in an interactive way.
You can use it to test your modifications, to evaluate the cosmological observables and likelihoods at particular points, and also to interface cosmological codes and likelihoods with external tools, such as a different sampler or a machine-learning framework.
Models are created from the same input as the run()
function: a dictionary containing the same blocks, except for the sampler
block (it can be included, but will be ignored).
So let us create a simple one using the input generator: Planck 2018 polarized CMB and lensing with CLASS as a theory code. Let us copy the Python version (you can also copy the yaml version and load it with yaml_load()).
info_txt = r"""
likelihood:
  planck_2018_lowl.TT:
  planck_2018_lowl.EE:
  planck_2018_highl_plik.TTTEEE:
  planck_2018_lensing.clik:
theory:
  classy:
    extra_args: {N_ur: 2.0328, N_ncdm: 1}
params:
  logA:
    prior: {min: 2, max: 4}
    ref: {dist: norm, loc: 3.05, scale: 0.001}
    proposal: 0.001
    latex: \log(10^{10} A_\mathrm{s})
    drop: true
  A_s: {value: 'lambda logA: 1e-10*np.exp(logA)', latex: 'A_\mathrm{s}'}
  n_s:
    prior: {min: 0.8, max: 1.2}
    ref: {dist: norm, loc: 0.96, scale: 0.004}
    proposal: 0.002
    latex: n_\mathrm{s}
  H0:
    prior: {min: 40, max: 100}
    ref: {dist: norm, loc: 70, scale: 2}
    proposal: 2
    latex: H_0
  omega_b:
    prior: {min: 0.005, max: 0.1}
    ref: {dist: norm, loc: 0.0221, scale: 0.0001}
    proposal: 0.0001
    latex: \Omega_\mathrm{b} h^2
  omega_cdm:
    prior: {min: 0.001, max: 0.99}
    ref: {dist: norm, loc: 0.12, scale: 0.001}
    proposal: 0.0005
    latex: \Omega_\mathrm{c} h^2
  m_ncdm: {renames: mnu, value: 0.06}
  Omega_Lambda: {latex: \Omega_\Lambda}
  YHe: {latex: 'Y_\mathrm{P}'}
  tau_reio:
    prior: {min: 0.01, max: 0.8}
    ref: {dist: norm, loc: 0.06, scale: 0.01}
    proposal: 0.005
    latex: \tau_\mathrm{reio}
"""
from cobaya.yaml import yaml_load
info = yaml_load(info_txt)
# Add your external packages installation folder
info['packages_path'] = '/path/to/packages'
Now let’s build a model (we will need the path to your external packages’ installation):
from cobaya.model import get_model
model = get_model(info)
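As a quick, Cobaya-independent sanity check of the parameterization above: the derived amplitude follows A_s = 1e-10 * exp(logA), so the reference value logA ≈ 3.05 should give A_s ≈ 2.1e-9, consistent with standard Planck values (a minimal sketch; the helper name is our own):

```python
import math

# Illustrative check of the derived-parameter definition in the 'params'
# block above: A_s = 1e-10 * exp(logA)
def A_s_from_logA(logA):
    return 1e-10 * math.exp(logA)

print(A_s_from_logA(3.05))  # close to 2.1e-9
```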
To get (log) probabilities and derived parameters for particular parameter values, we can use the different methods of the model.Model
(see below), to which we pass a dictionary of sampled parameter values.
Note
Notice that we can only fix sampled parameters: in the example above, the primordial log-amplitude logA is sampled (it has a prior), but the amplitude A_s defined from it is not (it just acts as an interface between the sampler and CLASS), so we cannot pass it as an argument to the methods of model.Model.
To get a list of the sampled parameters in use:
print(list(model.parameterization.sampled_params()))
Since there are lots of nuisance parameters (most of them coming from planck_2018_highl_plik.TTTEEE), let us be a little lazy and give them random values, which will help us illustrate how to use the model to sample from the prior. To do that, we will use the method sample() of the class prior.Prior, which is a member of the Model. If we check out its documentation, we'll notice that:

- It returns an array of samples (hence the [0] after the call below).
- Samples are returned as arrays, not dictionaries, so we need to add the parameter names, whose order corresponds to the output of model.parameterization.sampled_params().
- There is an external prior: a Gaussian on a linear combination of SZ parameters inherited from the likelihood planck_2018_highl_plik.TTTEEE; thus, since we don't know how to sample from it, we need to add the ignore_external=True keyword (mind that the returned samples are not samples from the full prior, but from the separable 1-dimensional prior described in the params block).
We overwrite our prior sample with some cosmological parameter values of interest, and compute the log-posterior (check out the documentation of logposterior() below):
point = dict(zip(model.parameterization.sampled_params(),
                 model.prior.sample(ignore_external=True)[0]))
point.update({'omega_b': 0.0223, 'omega_cdm': 0.120, 'H0': 67.01712,
              'logA': 3.06, 'n_s': 0.966, 'tau_reio': 0.065})
logposterior = model.logposterior(point)
print('Full logposterior:')
print(' logposterior: %g' % logposterior.logpost)
print(' logpriors: %r' % dict(zip(list(model.prior), logposterior.logpriors)))
print(' loglikelihoods: %r' % dict(zip(list(model.likelihood), logposterior.loglikes)))
print(' derived params: %r' % dict(zip(list(model.parameterization.derived_params()), logposterior.derived)))
And this will print something like
Full logposterior:
   logposterior: -16826.4
   logpriors: {'0': -21.068028542859743, 'SZ': -4.4079090809077135}
   loglikelihoods: {'planck_2018_lowl.TT': -11.659342468489626, 'planck_2018_lowl.EE': -199.91123947242667, 'planck_2018_highl_plik.TTTEEE': -16584.946305616675, 'planck_2018_lensing.clik': -4.390708962961917}
   derived params: {'A_s': 2.1327557162026904e-09, 'Omega_Lambda': 0.68165001855872, 'YHe': 0.2453671312631958}
Note
0 is the name of the 1-dimensional prior specified in the params block.
Note
Notice that the log-probability methods of Model can take, as well as a dictionary, an array of parameter values in the correct order. This may be useful when using these methods to interact with external codes.
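For instance, turning a dictionary point into a correctly ordered array is a one-liner (a sketch; here param_names is a stand-in for the output of model.parameterization.sampled_params()):

```python
import numpy as np

# 'param_names' is a hypothetical stand-in for
# list(model.parameterization.sampled_params())
param_names = ["logA", "n_s", "H0"]
point = {"H0": 67.0, "logA": 3.06, "n_s": 0.966}

# Build the array in the order the Model's methods expect
point_array = np.array([point[p] for p in param_names])
```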
Note
If we try to evaluate the posterior outside the prior bounds, logposterior() will return an empty list for the likelihood values: for the sake of efficiency, likelihoods and derived parameters are only computed if the prior is non-null. If you would like to evaluate the likelihood for such a point, call loglikes() instead.
Note
If you want to use any of the wrapper's log-probability methods with an external code, especially with C or Fortran, consider setting the keyword make_finite=True in those methods, which will return the largest (or smallest) machine-representable floating point numbers instead of numpy's infinities.
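Conceptually, this amounts to clipping infinities at the extreme representable floats, along these lines (an illustration of the idea, not Cobaya's actual implementation):

```python
import numpy as np

def finite_logp(logp):
    """Map +/-inf to the largest/smallest representable floats
    (illustrates what make_finite=True achieves; not Cobaya's code)."""
    return np.nan_to_num(logp, posinf=np.finfo(float).max,
                         neginf=np.finfo(float).min)

print(finite_logp(-np.inf))  # ~ -1.8e308, safe to pass to C/Fortran
```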
We can also use the Model
to get the cosmological observables that were computed for the likelihood. To see what has been requested from, e.g., the classy theory code, do
print(model.requested())
Which will print something like
{classy: [{'Cl': {'pp': 2048, 'bb': 29, 'ee': 2508, 'tt': 2508, 'eb': 0, 'te': 2508, 'tb': 0}}]}
If we take a look at the documentation of must_provide()
, we will see that to request the power spectrum we would use the method get_Cl
:
Cls = model.provider.get_Cl(ell_factor=True)
import matplotlib.pyplot as plt
plt.figure(figsize=(8, 6))
plt.plot(Cls["ell"][2:], Cls["tt"][2:])
plt.ylabel(r"$\ell(\ell+1)/(2\pi)\,C_\ell\;(\mu \mathrm{K}^2)$")
plt.xlabel(r"$\ell$")
plt.savefig("cltt.png")
# plt.show()
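The ell_factor=True keyword returns the spectra already multiplied by ℓ(ℓ+1)/(2π) (in μK² for CMB temperature and polarization). If you ever need that conversion by hand on raw C_ℓ's, it is just the following (toy placeholder values, for illustration only):

```python
import numpy as np

# Toy placeholder spectrum; real values would come from
# model.provider.get_Cl(ell_factor=False)
ell = np.arange(2, 2509)
cl_raw = 1.0 / ell ** 2

# D_ell = ell (ell + 1) / (2 pi) * C_ell
dl = ell * (ell + 1) / (2 * np.pi) * cl_raw
```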
Warning
Cosmological observables requested this way always correspond to the last set of parameters with which the likelihood was evaluated.
If we want to request additional observables not already requested by the likelihoods, we can use the method must_provide()
of the theory code (check out its documentation for the syntax).
As a final example, let us request the Hubble parameter for a number of redshifts and plot both it and the power spectrum for a range of values of \(\Omega_\mathrm{CDM}h^2\):
# Request H(z)
import numpy as np
redshifts = np.linspace(0, 2.5, 40)
model.add_requirements({"Hubble": {"z": redshifts}})
omega_cdm = [0.10, 0.11, 0.12, 0.13, 0.14]
f, (ax_cl, ax_h) = plt.subplots(1, 2, figsize=(14, 6))
for o in omega_cdm:
    point["omega_cdm"] = o
    model.logposterior(point)  # to force computation of theory
    Cls = model.provider.get_Cl(ell_factor=True)
    ax_cl.plot(Cls["ell"][2:], Cls["tt"][2:], label=r"$\Omega_\mathrm{CDM}h^2=%g$" % o)
    H = model.provider.get_Hubble(redshifts)
    ax_h.plot(redshifts, H / (1 + redshifts), label=r"$\Omega_\mathrm{CDM}h^2=%g$" % o)
ax_cl.set_ylabel(r"$\ell(\ell+1)/(2\pi)\,C_\ell\;(\mu \mathrm{K}^2)$")
ax_cl.set_xlabel(r"$\ell$")
ax_cl.legend()
ax_h.set_ylabel(r"$H/(1+z)\;(\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1})$")
ax_h.set_xlabel(r"$z$")
ax_h.legend()
plt.savefig("omegacdm.png")
# plt.show()
If you are creating several models in a single session, call close()
after you are finished with each one, to release the memory it has used.
If you had set timing=True
in the input info, dump_timing()
would print the average computation time of your likelihoods and theory code (see the bottom of section Creating your own cosmological likelihood for a use example).
Note
If you are not really interested in any likelihood value and just want to compute some cosmological observables (possibly using your modified version of a cosmological code), use the mock 'one' likelihood as the only likelihood, and add the requests for cosmological quantities by hand, as we did above with \(H(z)\).
NB: you will not be able to request some of the derived parameters unless you have requested the computation of some cosmological product.
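A minimal sketch of such an input (the mock 'one' likelihood is real; the parameter values below are placeholders):

```python
# Model input using the mock 'one' likelihood: no data is evaluated, but
# the theory code still computes whatever we request by hand.
# Parameter values are placeholders for illustration.
info_mock = {
    "likelihood": {"one": None},
    "theory": {"classy": None},
    "params": {"H0": 70.0, "omega_b": 0.0221, "omega_cdm": 0.12},
}

# Then, as above:
# model = get_model(info_mock)
# model.add_requirements({"Hubble": {"z": [0.0, 0.5, 1.0]}})
```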
Warning
Unfortunately, not all likelihoods and cosmological codes are instance-safe: e.g. you cannot define two models at the same time, one using the unbinned TT likelihood and the other the TTTEEE one.
Low-level access to the theory code¶
You can access the imported CAMB module or CLASS ‘classy’ instance as, respectively, Model.theory["camb"].camb
and Model.theory["classy"].classy
. But be careful about manually changing their settings: it may unexpectedly influence subsequent cosmological observable computations for the present model instance. If you want to directly access CAMB’s results object, the likelihood can request ‘CAMBdata’ as a requirement and retrieve it from a likelihood using self.provider.get_CAMBdata()
.
Manually passing this model to a sampler¶
Once you have created a model, you can pass it to a sampler without needing to go through cobaya.run
, which would create yet another instance of the same model.
You can define a sampler and an optional output driver in the following way:
# Optional: define an output driver
from cobaya.output import get_output
out = get_output(prefix="chains/my_model", resume=False, force=True)
# Initialise and run the sampler
info_sampler = {"mcmc": {"burn_in": 0, "max_samples": 1}}
from cobaya.sampler import get_sampler
mcmc = get_sampler(info_sampler, model=model, output=out,
                   packages_path=info["packages_path"])
mcmc.run()
# Print results
print(mcmc.products()["sample"])
Model wrapper class¶

class model.Model(info_params: Dict[str, Union[ParamDict, None, str, float, Sequence[float]]], info_likelihood: Dict[str, Union[None, Dict[str, Any], Type[CT_co], Callable]], info_prior: Optional[Dict[str, Union[str, Callable]]] = None, info_theory: Optional[Dict[str, Union[None, Dict[str, Any], Type[CT_co]]]] = None, packages_path=None, timing=None, allow_renames=True, stop_at_error=False, post=False, skip_unused_theories=False, dropped_theory_params: Optional[Iterable[str]] = None)¶
Class containing all the information necessary to compute the unnormalized posterior.
Allows for lowlevel interaction with the theory code, prior and likelihood.
NB: do not initialize this class directly; use
get_model()
instead, with some info as input.
info() → Dict[str, Any]¶
Returns a copy of the information used to create the model, including defaults and some new values that are only available after initialisation.

logpriors(params_values, make_finite=False) → numpy.ndarray¶
Takes an array or dictionary of sampled parameter values. If the argument is an array, parameters must have the same order as in the input. When in doubt, you can get the correct order as list([your_model].parameterization.sampled_params()).
Returns the log-values of the priors, in the same order as returned by list([your_model].prior). The first one, named 0, corresponds to the product of the 1-dimensional priors specified in the params block, and it's normalized (in general, the external prior densities aren't).
If make_finite=True, it will try to represent infinities as the largest real numbers allowed by machine precision.

logprior(params_values, make_finite=False)¶
Takes an array or dictionary of sampled parameter values. If the argument is an array, parameters must have the same order as in the input. When in doubt, you can get the correct order as list([your_model].parameterization.sampled_params()).
Returns the log-value of the prior (in general, unnormalized, unless the only priors specified are the 1-dimensional ones in the params block).
If make_finite=True, it will try to represent infinities as the largest real numbers allowed by machine precision.

loglikes(params_values=None, return_derived=True, make_finite=False, cached=True)¶
Takes an array or dictionary of sampled parameter values. If the argument is an array, parameters must have the same order as in the input. When in doubt, you can get the correct order as list([your_model].parameterization.sampled_params()).
Returns a tuple (loglikes, derived_params), where loglikes are the log-values of the likelihoods (unnormalized, in general) in the same order as returned by list([your_model].likelihood), and derived_params are the values of the derived parameters in the order given by list([your_model].parameterization.derived_params()).
To return just the list of log-likelihood values, make return_derived=False.
If make_finite=True, it will try to represent infinities as the largest real numbers allowed by machine precision.
If cached=False (default: True), it ignores previously computed results that could be reused.

loglike(params_values=None, return_derived=True, make_finite=False, cached=True)¶
Takes an array or dictionary of sampled parameter values. If the argument is an array, parameters must have the same order as in the input. When in doubt, you can get the correct order as list([your_model].parameterization.sampled_params()).
Returns a tuple (loglike, derived_params), where loglike is the log-value of the likelihood (unnormalized, in general), and derived_params are the values of the derived parameters in the order given by list([your_model].parameterization.derived_params()). If the model contains multiple likelihoods, the sum of the loglikes is returned.
To return just the log-likelihood value, make return_derived=False.
If make_finite=True, it will try to represent infinities as the largest real numbers allowed by machine precision.
If cached=False (default: True), it ignores previously computed results that could be reused.

logposterior(params_values, return_derived=True, make_finite=False, cached=True, _no_check=False) → model.LogPosterior¶
Takes an array or dictionary of sampled parameter values. If the argument is an array, parameters must have the same order as in the input. When in doubt, you can get the correct order as list([your_model].parameterization.sampled_params()).
Returns a LogPosterior NamedTuple, with the following fields:

- logpost: log-value of the posterior.
- logpriors: log-values of the priors, in the same order as in list([your_model].prior). The first one corresponds to the product of the 1-dimensional priors specified in the params block. Except for the first one, the priors are unnormalized.
- loglikes: log-values of the likelihoods (unnormalized, in general), in the same order as in list([your_model].likelihood).
- derived: values of the derived parameters in the order given by list([your_model].parameterization.derived_params()).

Only computes the log-likelihood and the derived parameters if the prior is non-null (otherwise the fields loglikes and derived are empty lists).
To ignore the derived parameters, make return_derived=False.
If make_finite=True, it will try to represent infinities as the largest real numbers allowed by machine precision.
If cached=False (default: True), it ignores previously computed results that could be reused.

logpost(params_values, make_finite=False, cached=True) → float¶
Takes an array or dictionary of sampled parameter values. If the argument is an array, parameters must have the same order as in the input. When in doubt, you can get the correct order as list([your_model].parameterization.sampled_params()).
Returns the log-value of the posterior.
If make_finite=True, it will try to represent infinities as the largest real numbers allowed by machine precision.
If cached=False (default: True), it ignores previously computed results that could be reused.

get_valid_point(max_tries, ignore_fixed_ref=False, random_state=None)¶
Finds a point with finite posterior, sampled from the reference pdf.
It will fail if no valid point is found after max_tries.
If ignore_fixed_ref=True (default: False), fixed reference values will be ignored in favor of the full prior, ensuring some randomness for all parameters (useful e.g. to prevent caching when measuring speeds).
Returns (point, LogPosterior(logpost, logpriors, loglikes, derived)).

dump_timing()¶
Prints the average computation time of the theory code and likelihoods.
It’s more reliable the more times the likelihood has been evaluated.

add_requirements(requirements)¶
Adds quantities to be computed by the pipeline, for testing purposes.

requested()¶
Get all the requested requirements that will be computed.
Returns: dictionary giving the list of requirements calculated by each component name.

get_param_blocking_for_sampler(split_fast_slow=False, oversample_power=0)¶
Separates the sampled parameters into blocks according to the component(s) re-evaluation(s) that changing each one of them involves. Then sorts these blocks in an optimal way using the speed (i.e. inverse evaluation time in seconds) of each component.
Returns tuples of (params_in_block), (oversample_factor), sorted by descending variation cost-per-parameter.
Set oversample_power to some value between 0 and 1 to control the amount of oversampling (default: 0 – no oversampling). If close enough to 1, it chooses oversampling factors such that the same amount of time is spent in each block.
If split_fast_slow=True, it separates blocks into two sets, only the second one having an oversampling factor > 1. In that case, the oversampling factor should be understood as the total one for all the fast blocks.
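As a toy illustration of the oversample_power knob (an assumed power-law rule for intuition only, not Cobaya's actual algorithm): if factors scale like a power of the relative block speeds, power 0 gives no oversampling, while power 1 equalizes the time spent in each block:

```python
# Toy per-evaluation costs of three parameter blocks, slowest first
costs = [1.0, 0.1, 0.01]

def oversample_factors(costs, power):
    """Assumed toy rule: factor ~ (slowest_cost / cost)**power.
    At power=0 all factors are 1; at power=1, factor * cost is
    constant, i.e. equal time is spent in every block."""
    return [(costs[0] / c) ** power for c in costs]

factors0 = oversample_factors(costs, 0)  # no oversampling
factors1 = oversample_factors(costs, 1)  # time-equalizing
```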

check_blocking(blocking)¶
Checks the correct formatting of the given parameter blocking and oversampling: that it consists of tuples (oversampling_factor, (param1, param2, etc)), with growing oversampling factors, and containing all parameters.
Returns the input, once checked, as (blocks), (oversampling_factors).
If draggable=True (default: False), checks that the oversampling factors are compatible with dragging.

set_cache_size(n_states)¶
Sets the number of different parameter points to cache for all theories and likelihoods.
Parameters: n_states – number of cached points

get_auto_covmat(params_info=None, random_state=None)¶
Tries to get an automatic covariance matrix for the current model and data.
params_info should include the set of parameters for which the covmat will be searched (default: None, meaning all sampled parameters).

measure_and_set_speeds(n=None, discard=1, max_tries=inf, random_state=None)¶
Measures the speeds of the different components (theories and likelihoods). To do that, it evaluates the posterior at n points (default: 1 per MPI process, or 3 if single process), discarding discard points (default: 1) to mitigate possible internal caching.
Stops after encountering max_tries points (default: inf) with non-finite posterior.
