Creating your own cosmological likelihood

Creating your own cosmological likelihood with cobaya is quite simple:

  1. Define your likelihood as a function that takes some parameters (experimental errors, foregrounds, etc., but not theory parameters) and returns a log-likelihood.
  2. Take note of the observables and other cosmological quantities that you will need to request from the theory code. Look them up in the needs() (clickable!) method documentation. [If you cannot find the observable that you need, do let us know!]
  3. Add to your likelihood function definition a keyword argument named _theory, and assign to it all the cosmological requirements as a dictionary (e.g. _theory={'Cl': {'tt': 2500}}).
  4. Now you can get theory products inside your function by calling the get_[...] methods of the cosmo interface (clickable!), e.g. _theory.get_Cl(). The theory products returned correspond to the current values of the theory parameters, as generated by the sampler.
  5. To include your likelihood in the input info or yaml file, or to add derived parameters, do it as for general external likelihoods (example here).
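The steps above can be sketched as a minimal toy likelihood. This is just an illustration of the function signature, not a working example: the parameter name calib, the derived parameter Cl_tt_at_100 and the dummy log-likelihood expression are all hypothetical.

```python
import numpy as np

# Minimal sketch of steps 1-4. 'calib' and 'Cl_tt_at_100' are
# hypothetical names, not part of cobaya's API.
def my_toy_like(
        calib=1.0,  # an experimental (non-theory) parameter -- step 1
        # theory requirements, as for the needs() method -- steps 2-3
        _theory={'Cl': {'tt': 1000}},
        # derived parameters made available by this likelihood
        _derived={'Cl_tt_at_100': None}):
    # retrieve the requested theory product -- step 4
    cl_tt = _theory.get_Cl(ell_factor=False)['tt']
    # fill in the derived parameter
    _derived['Cl_tt_at_100'] = cl_tt[100]
    # return a (dummy) log-likelihood
    return -0.5 * calib * np.sum(cl_tt[2:]**2)
```

When run by cobaya, _theory is replaced by the actual theory code instance, and _derived by a dictionary to be filled in, as explained in the example below.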

Example: your own CMB experiment!

To illustrate how to create a cosmological likelihood in cobaya, we apply the procedure above to a fictitious WMAP-era full-sky CMB TT experiment.

First of all, we will need to simulate the fictitious power spectrum of the fictitious sky that we would measure with our fictitious experiment, once we have accounted for noise and beam corrections. To do that, we choose a set of true cosmological parameters in the sky, and then use a model to compute the corresponding power spectrum, up to some reasonable \(\ell_\mathrm{max}\) (see Using the model wrapper).

fiducial_params = {
    'ombh2': 0.022, 'omch2': 0.12, 'H0': 68, 'tau': 0.07,
    'As': 2.2e-9, 'ns': 0.96,
    'mnu': 0.06, 'nnu': 3.046, 'num_massive_neutrinos': 1}

l_max = 1000

modules_path = '/path/to/your/modules'

info_fiducial = {
    'params': fiducial_params,
    'likelihood': {'one': None},
    'theory': {'camb': None},
    'modules': modules_path}

from cobaya.model import get_model
model_fiducial = get_model(info_fiducial)

# Declare our desired theory product
# (there is no cosmological likelihood doing it for us)
model_fiducial.likelihood.theory.needs(Cl={'tt': l_max})

# Compute and extract the CMB power spectrum
# (in muK^2, without the l(l+1)/(2pi) factor)
# notice the empty dictionary below: all parameters are fixed,
# and we just need to force the computation of the theory products
model_fiducial.logposterior({})
Cls = model_fiducial.likelihood.theory.get_Cl(ell_factor=False)

# Our fiducial power spectrum
Cl_est = Cls['tt'][:l_max+1]
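If you plan to share the likelihood later (see the end of this page), the fiducial spectrum can be saved as a table at this point and reloaded from the likelihood file, instead of being recomputed with a fiducial model every time. A minimal sketch, where the file name cl_est.txt and the helper function names are just examples:

```python
import numpy as np

def save_fiducial_cl(Cl_est, filename='cl_est.txt'):
    # Two-column table: ell, C_ell (in muK^2, no l(l+1)/(2pi) factor)
    ells = np.arange(len(Cl_est))
    np.savetxt(filename, np.column_stack([ells, Cl_est]),
               header='ell Cl_tt [muK^2]')

def load_fiducial_cl(filename='cl_est.txt'):
    # Return the C_ell column only
    return np.loadtxt(filename)[:, 1]
```

E.g. call save_fiducial_cl(Cl_est) once after the computation above, and use load_fiducial_cl() wherever the likelihood is defined.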

Now, let us define the likelihood. The arguments of the likelihood function will be the parameters that we may want to vary (arguments not mentioned in the input info will be left at their default values, e.g. beam_FWHM=0.25). As mentioned above, there are two special keyword arguments:

  • _theory:
    • In the function declaration, use it to name the theory products that you will need. To do that, set it to a dictionary containing the relevant arguments of the theory method needs() (clickable!).
    • Inside the function, assume it’s an instance of your theory code, and simply call its methods as needed to get observables and parameters.
  • _derived:
    • In the function declaration, set it to a dictionary whose keys are the ‘available’ derived params.
    • Inside the function, assume that you have been passed an existing dictionary and add to it the derived parameters with their value.
import numpy as np
import matplotlib.pyplot as plt

def my_like(
        # Parameters that we may sample over (or not)
        noise_std_pixel=20,  # muK
        beam_FWHM=0.25,  # deg
        # Declaration of our theory requirements
        _theory={'Cl': {'tt': l_max}},
        # Declaration of available derived parameters
        _derived={'Map_Cl_at_500': None}):
    # Noise spectrum, beam-corrected
    healpix_Nside = 512  # assumed healpix resolution of the fictitious map
    pixel_area_rad = np.pi/(3*healpix_Nside**2)
    weight_per_solid_angle = (noise_std_pixel**2 * pixel_area_rad)**-1
    beam_sigma_rad = beam_FWHM / np.sqrt(8*np.log(2)) * np.pi/180.
    ells = np.arange(l_max+1)
    Nl = np.exp((ells*beam_sigma_rad)**2)/weight_per_solid_angle
    # Cl of the map: data + noise
    Cl_map = Cl_est + Nl
    # Cl from theory: treat '_theory' as a theory code instance
    Cl_theo = _theory.get_Cl(ell_factor=False)['tt'][:l_max+1]  # muK^2
    Cl_map_theo = Cl_theo + Nl
    # Set our derived parameter, assuming '_derived' is a dictionary
    _derived['Map_Cl_at_500'] = Cl_map[500]
    # Auxiliary plot
    #ell_factor = ells*(ells+1)/(2*np.pi)
    #plt.plot(ells[2:], (Cl_theo*ell_factor)[2:], label=r'Theory $C_\ell$')
    #plt.plot(ells[2:], (Cl_est*ell_factor)[2:], label=r'Estimated $C_\ell$')
    #plt.plot(ells[2:], (Cl_map*ell_factor)[2:], label=r'Map $C_\ell$')
    #plt.plot(ells[2:], (Nl*ell_factor)[2:], label='Noise')
    #plt.ylim([0, 6000])
    # ----------------
    # Compute the log-likelihood
    V = Cl_map[2:]/Cl_map_theo[2:]
    return np.sum((2*ells[2:]+1)*(-V/2 + np.log(V)/2))
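As a sanity check of the expression in the last line: each per-\(\ell\) term \(-V/2 + \log(V)/2\), with \(V = C_\ell^\mathrm{map}/C_\ell^\mathrm{map,\,theo}\), is maximised at \(V = 1\), i.e. when the theory-plus-noise spectrum matches the map spectrum. A quick numerical check:

```python
import numpy as np

# Per-ell term of the full-sky exact log-likelihood above,
# as a function of V = Cl_map / Cl_map_theo
def per_ell_term(V):
    return -V/2 + np.log(V)/2

# The derivative -1/2 + 1/(2V) vanishes at V = 1, so the term
# should peak there; check on a grid around V = 1
V_grid = np.linspace(0.5, 1.5, 1001)
V_best = V_grid[np.argmax(per_ell_term(V_grid))]
```
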

Now we are ready to do something with it. Since our imaginary experiment isn’t very powerful, we will refrain from trying to estimate the full \(\Lambda\mathrm{CDM}\) parameter set. Instead, we will focus on the primordial power spectrum parameters \(A_s\) and \(n_s\), and assume that we magically have accurate values for the rest of the cosmological parameters.

To illustrate the use of likelihood parameters, we will try to marginalise over some uncertainty on the noise standard deviation. We will also get the derived parameter defined in the likelihood: map power spectrum at \(\ell=500\).

info = {
    'params': {
        # Fixed
        'ombh2': 0.022, 'omch2': 0.12, 'H0': 68, 'tau': 0.07,
        'mnu': 0.06, 'nnu': 3.046, 'num_massive_neutrinos': 1,
        # Sampled
        'As': {'prior': {'min': 1e-9, 'max': 4e-9}, 'latex': 'A_s'},
        'ns': {'prior': {'min': 0.9, 'max': 1.1}, 'latex': 'n_s'},
        'noise_std_pixel': {
            'prior': {'dist': 'norm', 'loc': 20, 'scale': 5},
            'latex': r'\sigma_\mathrm{pix}'},
        # Derived
        'Map_Cl_at_500': {'latex': r'C_{500,\,\mathrm{map}}'}},
    'likelihood': {'my_cl_like': my_like},
    'theory': {'camb': {'stop_at_error': True}},
    'sampler': {'mcmc': None},  # or polychord...
    'modules': modules_path,
    'output': 'chains/my_imaginary_cmb'}

But first of all, we will test it using a Model:

# Activate timing (we will use it later)
info['timing'] = True

from cobaya.model import get_model
model = get_model(info)

And now we can e.g. plot a slice of the log-likelihood for different \(A_s\) values:

As = np.linspace(1e-9, 4e-9, 10)
likes = [model.loglike({'As': A, 'ns': 0.96, 'noise_std_pixel': 20})[0] for A in As]

plt.plot(As, likes)



If you are not getting the expected value for the likelihood, here are a couple of things that you can try:

  • Set debug: True in the input, which will cause cobaya to print much more information, e.g. the parameter values that are passed to the prior, the theory code and the likelihood.
  • If the likelihood evaluates to -inf (but the prior is finite), it probably means that either the theory code or the likelihood is failing. To display the error information of the theory code, add the stop_at_error: True option to it, as shown in the example input above; do the same for the likelihood, if it is likely to throw errors.

Before we start sampling, it is a good idea to characterize the speed of your likelihood, so that the sampler can behave more efficiently. To do that, set timing: True in the input before initialising your model (as we did above), evaluate the likelihood a couple of times (as we did for the log-likelihood plot above), and close the model with model.close(). This will print the evaluation time (in seconds) of the theory code and the likelihoods. Now, redefine the likelihood in the input to add its speed, which is the inverse of the evaluation time in seconds, e.g. if that was \(2\,\mathrm{ms}\):

info['likelihood']['my_cl_like'] = {
    'external': my_like,
    'speed': 500}
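The speed value above is just the inverse of the measured evaluation time, assuming the \(2\,\mathrm{ms}\) quoted above:

```python
# Assumed evaluation time, as reported by model.close() with timing enabled
evaluation_time_s = 0.002  # 2 ms
speed = 1 / evaluation_time_s  # evaluations per second -> 500.0
```
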

Now we can start sampling. To do that, you can save all the definitions above in a .py file that you run directly with python. You will need to add the following lines at the end:

from cobaya.run import run
updated_info, products = run(info)

Alternatively, especially if you are planning to share your likelihood, you can put its definition (including the fiducial spectrum, maybe saved as a separate table) in a separate file, say my_like_file.py. In this case, to use it, set the likelihood's external value to import_module('[your_file_without_extension]').your_function, here:

# Contents of some .yaml input file
likelihood:
    my_cl_like:
        external: import_module('my_like_file').my_like
        speed: 500