# Basic cosmology runs¶

Sampling from a cosmological posterior works the same way as the examples at the beginning of the documentation, except that one usually needs to add a theory code, and possibly some of the cosmological likelihoods presented later.

You can sample or track any parameter that is understood by the theory code in use (or any dynamical redefinition of those). You **do not need to modify Cobaya’s source** to use new parameters that you have created by modifying CLASS or CAMB, or to create a new cosmological likelihood and track its parameters.

Creating *from scratch* the input for a realistic cosmological case is quite a bit of work. To make it simpler, we have created an automatic **input generator** that you can run from the shell as:

```
$ cobaya-cosmo-generator
```

Note

If `PySide2` is not installed, this will fail. To fix it:

```
$ python -m pip install pyqt5 pyside2
```

**Anaconda** users should instead do:

```
$ conda install -c conda-forge pyside2
```

Start by choosing a preset, maybe modify some aspects using the options provided, and copy or save the generated input to a file, either in `yaml` form or as a Python dictionary.

The parameter combinations and options included in the input generator are in general well-tested, but they are only suggestions: **you can add by hand any parameter that your theory code or likelihood can understand, or modify any setting**.

Don’t forget to add the installation path for the cosmological requisites, `packages_path: '/path/to/packages'`, and an `output` prefix if you wish.
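For example, the top of the generated input might then look as follows (both paths are placeholders to adapt to your setup):

```
packages_path: '/path/to/packages'
output: chains/planck
```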

As an example, here is the input for Planck 2018 base \(\Lambda\mathrm{CDM}\), first for CAMB and then for CLASS:

```
theory:
  camb:
    extra_args:
      halofit_version: mead
      bbn_predictor: PArthENoPE_880.2_standard.dat
      lens_potential_accuracy: 1
      num_massive_neutrinos: 1
      nnu: 3.046
      theta_H0_range:
      - 20
      - 100
likelihood:
  planck_2018_lowl.TT: null
  planck_2018_lowl.EE: null
  planck_2018_highl_plik.TTTEEE: null
  planck_2018_lensing.clik: null
params:
  logA:
    prior:
      min: 1.61
      max: 3.91
    ref:
      dist: norm
      loc: 3.05
      scale: 0.001
    proposal: 0.001
    latex: \log(10^{10} A_\mathrm{s})
    drop: true
  As:
    value: 'lambda logA: 1e-10*np.exp(logA)'
    latex: A_\mathrm{s}
  ns:
    prior:
      min: 0.8
      max: 1.2
    ref:
      dist: norm
      loc: 0.965
      scale: 0.004
    proposal: 0.002
    latex: n_\mathrm{s}
  theta_MC_100:
    prior:
      min: 0.5
      max: 10
    ref:
      dist: norm
      loc: 1.04109
      scale: 0.0004
    proposal: 0.0002
    latex: 100\theta_\mathrm{MC}
    drop: true
    renames: theta
  cosmomc_theta:
    value: 'lambda theta_MC_100: 1.e-2*theta_MC_100'
    derived: false
  H0:
    latex: H_0
    min: 20
    max: 100
  ombh2:
    prior:
      min: 0.005
      max: 0.1
    ref:
      dist: norm
      loc: 0.0224
      scale: 0.0001
    proposal: 0.0001
    latex: \Omega_\mathrm{b} h^2
  omch2:
    prior:
      min: 0.001
      max: 0.99
    ref:
      dist: norm
      loc: 0.12
      scale: 0.001
    proposal: 0.0005
    latex: \Omega_\mathrm{c} h^2
  omegam:
    latex: \Omega_\mathrm{m}
  omegamh2:
    derived: 'lambda omegam, H0: omegam*(H0/100)**2'
    latex: \Omega_\mathrm{m} h^2
  mnu: 0.06
  omega_de:
    latex: \Omega_\Lambda
  YHe:
    latex: Y_\mathrm{P}
  Y_p:
    latex: Y_P^\mathrm{BBN}
  DHBBN:
    derived: 'lambda DH: 10**5*DH'
    latex: 10^5 \mathrm{D}/\mathrm{H}
  tau:
    prior:
      min: 0.01
      max: 0.8
    ref:
      dist: norm
      loc: 0.055
      scale: 0.006
    proposal: 0.003
    latex: \tau_\mathrm{reio}
  zre:
    latex: z_\mathrm{re}
  sigma8:
    latex: \sigma_8
  s8h5:
    derived: 'lambda sigma8, H0: sigma8*(H0*1e-2)**(-0.5)'
    latex: \sigma_8/h^{0.5}
  s8omegamp5:
    derived: 'lambda sigma8, omegam: sigma8*omegam**0.5'
    latex: \sigma_8 \Omega_\mathrm{m}^{0.5}
  s8omegamp25:
    derived: 'lambda sigma8, omegam: sigma8*omegam**0.25'
    latex: \sigma_8 \Omega_\mathrm{m}^{0.25}
  A:
    derived: 'lambda As: 1e9*As'
    latex: 10^9 A_\mathrm{s}
  clamp:
    derived: 'lambda As, tau: 1e9*As*np.exp(-2*tau)'
    latex: 10^9 A_\mathrm{s} e^{-2\tau}
  age:
    latex: '{\rm{Age}}/\mathrm{Gyr}'
  rdrag:
    latex: r_\mathrm{drag}
sampler:
  mcmc:
    covmat: auto
    drag: true
    oversample_power: 0.4
    proposal_scale: 1.9
```

```
theory:
  classy:
    extra_args:
      non linear: hmcode
      hmcode_min_k_max: 20
      N_ncdm: 1
      N_ur: 2.0328
likelihood:
  planck_2018_lowl.TT: null
  planck_2018_lowl.EE: null
  planck_2018_highl_plik.TTTEEE: null
  planck_2018_lensing.clik: null
params:
  logA:
    prior:
      min: 1.61
      max: 3.91
    ref:
      dist: norm
      loc: 3.05
      scale: 0.001
    proposal: 0.001
    latex: \log(10^{10} A_\mathrm{s})
    drop: true
  A_s:
    value: 'lambda logA: 1e-10*np.exp(logA)'
    latex: A_\mathrm{s}
  n_s:
    prior:
      min: 0.8
      max: 1.2
    ref:
      dist: norm
      loc: 0.965
      scale: 0.004
    proposal: 0.002
    latex: n_\mathrm{s}
  theta_s_1e2:
    prior:
      min: 0.5
      max: 10
    ref:
      dist: norm
      loc: 1.0416
      scale: 0.0004
    proposal: 0.0002
    latex: 100\theta_\mathrm{s}
    drop: true
  100*theta_s:
    value: 'lambda theta_s_1e2: theta_s_1e2'
    derived: false
  H0:
    latex: H_0
  omega_b:
    prior:
      min: 0.005
      max: 0.1
    ref:
      dist: norm
      loc: 0.0224
      scale: 0.0001
    proposal: 0.0001
    latex: \Omega_\mathrm{b} h^2
  omega_cdm:
    prior:
      min: 0.001
      max: 0.99
    ref:
      dist: norm
      loc: 0.12
      scale: 0.001
    proposal: 0.0005
    latex: \Omega_\mathrm{c} h^2
  Omega_m:
    latex: \Omega_\mathrm{m}
  omegamh2:
    derived: 'lambda Omega_m, H0: Omega_m*(H0/100)**2'
    latex: \Omega_\mathrm{m} h^2
  m_ncdm:
    value: 0.06
    renames: mnu
  Omega_Lambda:
    latex: \Omega_\Lambda
  YHe:
    latex: Y_\mathrm{P}
  tau_reio:
    prior:
      min: 0.01
      max: 0.8
    ref:
      dist: norm
      loc: 0.055
      scale: 0.006
    proposal: 0.003
    latex: \tau_\mathrm{reio}
  z_reio:
    latex: z_\mathrm{re}
  sigma8:
    latex: \sigma_8
  s8h5:
    derived: 'lambda sigma8, H0: sigma8*(H0*1e-2)**(-0.5)'
    latex: \sigma_8/h^{0.5}
  s8omegamp5:
    derived: 'lambda sigma8, Omega_m: sigma8*Omega_m**0.5'
    latex: \sigma_8 \Omega_\mathrm{m}^{0.5}
  s8omegamp25:
    derived: 'lambda sigma8, Omega_m: sigma8*Omega_m**0.25'
    latex: \sigma_8 \Omega_\mathrm{m}^{0.25}
  A:
    derived: 'lambda A_s: 1e9*A_s'
    latex: 10^9 A_\mathrm{s}
  clamp:
    derived: 'lambda A_s, tau_reio: 1e9*A_s*np.exp(-2*tau_reio)'
    latex: 10^9 A_\mathrm{s} e^{-2\tau}
  age:
    latex: '{\rm{Age}}/\mathrm{Gyr}'
  rs_drag:
    latex: r_\mathrm{drag}
sampler:
  mcmc:
    covmat: auto
    drag: true
    oversample_power: 0.4
    proposal_scale: 1.9
```
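The quoted `'lambda ...'` strings in the inputs above are how dynamical parameters are defined: each string is interpreted as a Python function of other parameters, with NumPy available as `np`. As a rough illustration of the mechanism (a sketch of the idea, not Cobaya's actual internals), this is how such strings map sampled parameters to the values the theory code and derived parameters receive:

```python
import numpy as np

def make_dynamical(expr):
    """Turn a quoted 'lambda ...' string, as in the inputs above,
    into a callable, with np available as NumPy."""
    return eval(expr, {"np": np})

# Input parameter passed to the theory code, computed from the sampled one:
As = make_dynamical('lambda logA: 1e-10*np.exp(logA)')
# Derived parameter computed from other parameters:
clamp = make_dynamical('lambda As, tau: 1e9*As*np.exp(-2*tau)')

logA = 3.05
print(As(logA))                 # ≈ 2.1115e-09
print(clamp(As(logA), 0.055))   # ≈ 1.8914
```

Note how `logA` carries `drop: true` in the inputs above: the sampled parameter is dropped before reaching the theory code, which only understands the transformed one.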

Note

Note that Planck likelihood parameters (or *nuisance parameters*) do not appear in the input: they are included automatically at run time. The same goes for all *internal* likelihoods (i.e. those listed below in the table of contents).

You can still add them to the input if you want to redefine any of their properties (prior, label, etc.). See Changing and redefining parameters; inheritance.

Save the generated input to a file and run it with `cobaya-run [your_input_file_name.yaml]`. This will create output files as explained here, and, after some time, you should be able to run `getdist-gui` to generate some plots.

Note

You may want to start with a *test run* by adding `--test` to `cobaya-run`. It will initialise all components (cosmological theory code, likelihoods and sampler) and exit.

Typical running times for MCMC when using computationally heavy likelihoods (e.g. those involving \(C_\ell\), or non-linear \(P(k,z)\) at several redshifts) are ~10 hours running 4 MPI processes with 4 OpenMP threads per process, provided that the initial covariance matrix is a good approximation to that of the real posterior (Cobaya tries to select one automatically from a database; check the `[mcmc]` output towards the top to see whether it succeeded), or a few hours on top of that if it is not.

It is much harder to provide typical PolyChord running times. We recommend starting with a low number of live points and a low convergence tolerance, and building up from there towards PolyChord’s default settings (or higher, if needed).

## Post-processing cosmological samples¶

Let’s suppose that we want to importance-reweight a Planck sample, in particular the one we just generated with the input above, with some late-time LSS data from BAO. To do that, we `add` the new BAO likelihoods. We would also like to increase the theory code’s precision with some extra arguments, so we need to re-`add` it and set the new precision parameter under `extra_args` (the old `extra_args` will be inherited, unless specifically redefined). We do not need to recompute the CMB likelihoods, since they are not much affected by the new precision parameter. On top of that, let us add a derived parameter.

Assuming we saved the sample at `chains/planck`, we need to define the following input file, which we can run with `$ cobaya-run`:

```
# Path of the original sample
output: chains/planck
# Post-processing information
post:
  suffix: BAO  # the new sample will be called "chains/planck_post_BAO*"
  # Skip the first third of each chain and take 1 of every 3 samples
  skip: 0.3
  thin: 3
  # Now let's add the BAO likelihoods,
  # increase the precision (remember to repeat the extra_args)
  # and add the new derived parameter
  add:
    likelihood:
      sixdf_2011_bao:
      sdss_dr7_mgs:
      sdss_dr12_consensus_bao:
    theory:
      # Use *only* the theory corresponding to the original sample
      classy:
        extra_args:
          # New precision parameter
          # [option]: [value]
      camb:
        extra_args:
          # New precision parameter
          # [option]: [value]
    params:
      # h = H0/100. (nothing to add: CLASS/CAMB knows it)
      h:
      # A dynamical derived parameter: sum of BAO chi-squared's
      chi2__BAO:
        derived: 'lambda chi2__sixdf_2011_bao, chi2__sdss_dr7_mgs, chi2__sdss_dr12_consensus_bao:
                  sum([chi2__sixdf_2011_bao, chi2__sdss_dr7_mgs, chi2__sdss_dr12_consensus_bao])'
        latex: \chi^2_\mathrm{BAO}
```
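The effect of `skip` and `thin` above can be sketched as follows (a simplified illustration of burn-in removal and thinning on a single chain; the real `post` implementation also handles sample weights, and a `skip` value above 1 is taken as a number of rows rather than a fraction):

```python
def skip_and_thin(samples, skip=0.3, thin=3):
    """Drop the first `skip` fraction of a chain (burn-in),
    then keep 1 of every `thin` remaining samples."""
    start = int(round(skip * len(samples))) if skip < 1 else int(skip)
    return samples[start::thin]

chain = list(range(30))      # 30 dummy samples
kept = skip_and_thin(chain)  # skip the first 9, keep every 3rd
print(kept)                  # [9, 12, 15, 18, 21, 24, 27]
print(len(kept))             # 7
```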

Warning

In the current implementation, likelihood recomputation does not automatically trigger recomputation of partial chi-squared sums such as the `chi2__cmb` of the basic Planck examples above. If you are recomputing a likelihood that is part of a partial sum, you need to redefine the sum inside the `add` block.

## Getting help and bibliography for a component¶

If you want to get the available options with their default values for a given component, use

```
$ cobaya-doc [component_name]
```

If the component name is not unique (i.e. there is more than one component with the same name but different kinds), use the option `--kind [component_kind]` to specify its kind: `sampler`, `theory` or `likelihood`.

Call `$ cobaya-doc` with a kind instead of a component name (e.g. `$ cobaya-doc likelihood`) to get a list of components of that kind. Call it with no arguments to get all available components of all kinds.

If you would like to cite the results of a run in a paper, you will need citations for all the different parts of the process: in the example above, this very sampling framework, the MCMC sampler, the CAMB or CLASS cosmological code, and the Planck 2018 likelihoods.

The `bibtex` for those citations, along with a short text snippet for each element, can be easily obtained and saved to some `output_file.tex` with

```
$ cobaya-bib [your_input_file_name.yaml] > output_file.tex
```

You can pass multiple input files this way, or even a (list of) component name(s), as for `cobaya-doc`.

You can also do this interactively, by passing your input info, as a Python dictionary, to the function `get_bib_info()`:

```
from cobaya.bib import get_bib_info
get_bib_info(info)
```

Note

Both defaults and bibliography are available in the **GUI** (menu `Show defaults and bibliography for a component ...`).

Bibliography for *preset* input files is displayed in the `bibliography` tab.