
Commit

doc edits; trivial, plus mostly mcmc
cmbant committed Sep 17, 2018
1 parent f1d2187 commit 6495e00
Showing 23 changed files with 79 additions and 56 deletions.
2 changes: 1 addition & 1 deletion cobaya/collection.py
Original file line number Diff line number Diff line change
@@ -182,7 +182,7 @@ def __getitem__(self, *args):
check_end(args[0].stop, self._n)
return self.data.iloc[args[0]]
else:
raise ValueError("Index type not recognised: use column names or slices.")
raise ValueError("Index type not recognized: use column names or slices.")

# Statistical computations
def mean(self, first=None, last=None, derived=False):
4 changes: 2 additions & 2 deletions cobaya/likelihood.py
@@ -326,7 +326,7 @@ def logps(self, input_params, _derived=None, cached=True):
"""
Computes observables and returns the (log) likelihoods *separately*.
It takes a list of **input** parameter values, in the same order as they appear
in the `OrderedDictionary` of the :class:LikelihoodCollection.
in the `OrderedDictionary` of the :class:`LikelihoodCollection`.
To compute the derived parameters, it takes an optional keyword `_derived` as an
empty list, which is then populated with the derived parameter values.
"""
@@ -363,7 +363,7 @@ def logp(self, input_params, _derived=None, cached=True):
"""
Computes observables and returns the (log) likelihood.
It takes a list of **sampled** parameter values, in the same order as they appear
in the `OrderedDictionary` of the :class:LikelihoodCollection.
in the `OrderedDictionary` of the :class:`LikelihoodCollection`.
To compute the derived parameters, it takes an optional keyword `_derived` as an
empty list, which is then populated with the derived parameter values.
"""
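The `_derived` keyword pattern described in these docstrings can be sketched with a toy example (a hypothetical Gaussian likelihood function, not cobaya's actual class): the caller passes in an empty list, and the function populates it with the derived parameter values.

```python
# Toy sketch of the `_derived` pattern: the caller supplies an empty list,
# and the function appends derived parameter values to it as a side effect.
def gaussian_logp(input_params, _derived=None):
    x, y = input_params
    if _derived is not None:
        _derived.append(x + y)  # a hypothetical derived parameter: the sum
    return -0.5 * (x ** 2 + y ** 2)

derived = []
logp = gaussian_logp([1.0, 2.0], _derived=derived)
print(logp, derived)  # -2.5 [3.0]
```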
@@ -1,4 +1,4 @@
Lensing likelihood of Planck's 2015 data release based on temperature+polarisation map-based lensing reconstruction \cite{Ade:2015zua}.
Lensing likelihood of Planck's 2015 data release based on temperature+polarization map-based lensing reconstruction \cite{Ade:2015zua}.
@article{Ade:2015zua,
@@ -1,4 +1,4 @@
Lensing likelihood of Planck's 2015 data release based on temperature+polarisation map-based lensing reconstruction \cite{Ade:2015zua}.
Lensing likelihood of Planck's 2015 data release based on temperature+polarization map-based lensing reconstruction \cite{Ade:2015zua}.
@article{Ade:2015zua,
2 changes: 1 addition & 1 deletion cobaya/likelihoods/sn_jla/sn_jla.yaml
@@ -1,5 +1,5 @@
# Settings for JLA supernova sample (joint SNLS/SDSS SN Ia data)
# (For the marginalised version, use 'sn_jla_lite')
# (For the marginalized version, use 'sn_jla_lite')

likelihood:
sn_jla:
2 changes: 1 addition & 1 deletion cobaya/likelihoods/sn_jla_lite/sn_jla_lite.bibtex
@@ -1,4 +1,4 @@
Likelihood (marginalised over nuisance parameters) of the JLA type Ia supernova sample \cite{Betoule:2014frx}, based on observations obtained by the SDSS-II and SNLS collaborations.
Likelihood (marginalized over nuisance parameters) of the JLA type Ia supernova sample \cite{Betoule:2014frx}, based on observations obtained by the SDSS-II and SNLS collaborations.
@article{Betoule:2014frx,
4 changes: 2 additions & 2 deletions cobaya/likelihoods/sn_jla_lite/sn_jla_lite.yaml
@@ -1,6 +1,6 @@
# Settings for JLA supernova sample (joint SNLS/SDSS SN Ia data)
# Marginalised version (useful e.g. for importance sampling)
# NB: different chi2 normalisation from the non-normalised version
# Marginalized version (useful e.g. for importance sampling)
# NB: different chi2 normalization from the non-normalized version

likelihood:
sn_jla_lite:
2 changes: 1 addition & 1 deletion cobaya/model.py
@@ -1,7 +1,7 @@
"""
.. module:: model
:Synopsis: Wrapper for models: parametrisation+prior+likelihood+theory
:Synopsis: Wrapper for models: parameterization+prior+likelihood+theory
:Author: Jesus Torrado
"""
8 changes: 4 additions & 4 deletions cobaya/prior.py
@@ -163,9 +163,9 @@
The one-dimensional priors defined within the ``params`` block are automatically
normalized, so any sampler that computes the evidence will produce the right results as
long as no external priors have been defined, whose normalisation is unknown.
long as no external priors have been defined, whose normalization is unknown.
To get the prior normalisation if using external functions as priors, you can substitute
To get the prior normalization if using external functions as priors, you can substitute
your likelihood by the :doc:`dummy unit likelihood <likelihood_one>`, and make an initial
run with :doc:`PolyChord <sampler_polychord>` to get the prior volume
(see section :ref:`polychord_bayes_ratios`).
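For illustration, a minimal input for such a prior-volume run might look like the sketch below (the parameter ``x`` and the external prior ``extra`` are hypothetical examples; ``one`` is the dummy unit likelihood the text refers to):

```yaml
likelihood:
  one:   # dummy unit likelihood: the posterior reduces to the prior
params:
  x:
    prior: {min: 0, max: 1}
prior:
  # external prior with unknown normalization (hypothetical example)
  extra: "lambda x: -x**2"
sampler:
  polychord:   # computes the evidence, i.e. the prior volume here
```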
@@ -196,7 +196,7 @@
:ref:`the end of the example <example_advanced_rtheta>`.
Let us discuss the general case here.
To enble this, **cobaya** creates a `re-parametrisation` layer between the `sampled`
To enable this, **cobaya** creates a `re-parameterization` layer between the `sampled`
parameters, and the `input` parameters of the likelihood. E.g. if we want to **sample**
from the logarithm of an **input** parameter of the likelihood, we would do:
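For example, such a block might look like the following sketch (the parameter names ``x``/``logx`` are hypothetical; the ``drop`` and ``value`` keywords follow cobaya's parameter conventions):

```yaml
params:
  logx:
    prior: {min: -1, max: 1}
    drop: True                        # sampled, but not passed to the likelihood
  x:
    value: "lambda logx: 10**logx"    # input parameter, computed from logx
```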
@@ -231,7 +231,7 @@
.. note::
**Dynamical derived** parameters can also be functions of yet-undefined parameters.
In that case, those parameters will be automatically requested to the likelihood (or
In that case, those parameters will be automatically requested from the likelihood (or
theory code) that understands them.
.. note::
8 changes: 4 additions & 4 deletions cobaya/sampler.py
@@ -4,10 +4,10 @@
:Synopsis: Prototype sampler class and sampler loader
:Author: Jesus Torrado
cobaya includes by default an
:doc:`advanced Monte Carlo Markov Chain (MCMC) sampler <sampler_mcmc>`
(a direct translation from `CosmoMC <https://cosmologist.info/cosmomc/>`_ and a dummy
:doc:`evaluate <sampler_evaluate>` sampler, that simply evaluates the posterior at a given
cobaya includes by default a
:doc:`Monte Carlo Markov Chain (MCMC) sampler <sampler_mcmc>`
(a direct translation from `CosmoMC <https://cosmologist.info/cosmomc/>`_) and a dummy
:doc:`evaluate <sampler_evaluate>` sampler that simply evaluates the posterior at a given
(or sampled) reference point. It also includes an interface to the
:doc:`PolyChord sampler <sampler_polychord>` (needs to be installed separately).
2 changes: 1 addition & 1 deletion cobaya/samplers/minimize/minimize.bibtex
@@ -1 +1 @@
The posterior has been maximised using \href{https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html}{\texttt{scipy.optimize.minimize}}
The posterior has been maximized using \href{https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html}{\texttt{scipy.optimize.minimize}}
2 changes: 1 addition & 1 deletion cobaya/samplers/minimize/minimize.yaml
@@ -5,7 +5,7 @@ sampler:
# Minimizes the full posterior (False) or just the likelihood (True)
ignore_prior: False
# Tolerance for termination. Default: 0.001
# --> expected >0.01 deviation, reasonable value for Chi^2 minimisation
# --> expected >0.01 deviation, reasonable value for Chi^2 minimization
tol: 1e-3
# Maximum number of iterations. Default: practically infinite
maxiter: 1e10
2 changes: 1 addition & 1 deletion cobaya/theories/_cosmo/_cosmo.py
@@ -137,7 +137,7 @@ def get_fsigma8(self, z):

def get_auto_covmat(self, params_info, likes_info):
"""
Tries to get an automatic covariance matrix for the current model and data.
Tries to find a match in a database of existing covariance matrix files for the current model and data.
``params_info`` should contain preferably the slow parameters only.
"""
9 changes: 6 additions & 3 deletions cobaya/theories/camb/camb.py
@@ -47,8 +47,8 @@
^^^^^^^^^^^^^^
**cobaya** calls CAMB using its Python interface, which requires that you compile CAMB
using the GNU gfortran compiler version 4.9 or later. To check if you fulfil that
requisite, type ``gfortran --version`` in the shell, and the first line should look like
using Intel's ifort compiler or the GNU gfortran compiler version 4.9 or later. To check if you have the latter,
type ``gfortran --version`` in the shell, and the first line should look like
.. code::
@@ -57,6 +57,9 @@
Check that ``[gfortran's version]`` is at least 4.9. If you get an error instead, you need
to install gfortran (contact your local IT service).
CAMB comes with binaries pre-built for Windows, so if you don't need to modify the CAMB source code, no Fortran compiler is
needed.
Automatic installation
^^^^^^^^^^^^^^^^^^^^^^
@@ -325,7 +328,7 @@ def set(self, params_values_dict, i_state):
setattr(cambparams, attr, value)
else:
self.log.error("Some of the attributes to be set manually were not "
"recognised: %s=%s", attr, value)
"recognized: %s=%s", attr, value)
raise HandledException
return cambparams
except CAMBParamRangeError:
8 changes: 4 additions & 4 deletions cobaya/theories/classy/classy.py
@@ -20,7 +20,7 @@
.. note::
CLASS is renamed ``classy`` for most purposes within cobaya, due to CLASS' name being
CLASS is renamed ``classy`` for most purposes within cobaya, due to CLASS's name being
a python keyword.
Usage
@@ -59,7 +59,7 @@
If you are planning to modify CLASS or use an already modified version,
you should not use the automatic installation script. Use the method below instead.
CLASS' python interface utilizes the ``cython`` compiler. If typing ``cython`` in the
CLASS's python interface utilizes the ``cython`` compiler. If typing ``cython`` in the
shell produces an error, install it with ``pip install cython --user``.
.. note::
@@ -93,7 +93,7 @@
If you modify CLASS and add new variables, you don't need to let cobaya know, but make
sure that the variables you create are exposed in the Python
interface (contact CLASS' developers if you need help with that).
interface (contact CLASS's developers if you need help with that).
"""
# Python 2/3 compatibility
@@ -380,7 +380,7 @@ def get_derived_all(self, derived_requested=True):
[p for p,v in requested_and_extra.items() if v is None]))
# Separate the parameters before returning
# Remember: self.output_params is in sampler nomenclature,
# but self.derived_extra is in CLASS'
# but self.derived_extra is in CLASS
derived = {
p:requested_and_extra[self.translate_param(p)] for p in self.output_params}
derived_extra = {p:requested_and_extra[p] for p in self.derived_extra}
2 changes: 1 addition & 1 deletion docs/cosmo_external_likelihood.rst
@@ -166,7 +166,7 @@ And now we can e.g. plot a slice of the log likelihood along different :math:`A_
- If the likelihood evaluates to ``-inf`` (but the prior is finite) it probably means that the theory code is failing; to display the error information of the theory code, add to it the ``stop_at_error: True`` option, as shown in the example input above.


Before we start sampling, it is a good idea to characterise the speed of your likelihood, so that the sampler can behave more efficiently. To do that, set ``timing: True`` in the input before initialising your model (as we did above), evaluate the likelihood a couple of times (as we did for the log-likelihood plot above), and *close* the model as ``model.close()``. This will print the evaluation time (in seconds) of the theory code and the likelihoods. Now, redefine the likelihood in the input to add the speed, which is the inverse of the evaluation time in seconds, e.g. if that was :math:`2\,\mathrm{ms}`:
Before we start sampling, it is a good idea to characterize the speed of your likelihood, so that the sampler can behave more efficiently. To do that, set ``timing: True`` in the input before initializing your model (as we did above), evaluate the likelihood a couple of times (as we did for the log-likelihood plot above), and *close* the model as ``model.close()``. This will print the evaluation time (in seconds) of the theory code and the likelihoods. Now, redefine the likelihood in the input to add the speed, which is the inverse of the evaluation time in seconds, e.g. if that was :math:`2\,\mathrm{ms}`:

.. code:: python
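As a standalone illustration of that bookkeeping (toy code with an assumed 2 ms cost, not cobaya's API): time a few evaluations and take the inverse of the mean time to get the speed, roughly 500 evaluations per second here.

```python
import time

def slow_loglike():
    # stand-in for an expensive likelihood evaluation (~2 ms, assumed)
    time.sleep(0.002)
    return -1.0

# time a few evaluations, as the text suggests
n = 5
t0 = time.perf_counter()
for _ in range(n):
    slow_loglike()
mean_time = (time.perf_counter() - t0) / n

# speed = evaluations per second = inverse of the mean evaluation time
speed = 1.0 / mean_time
```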
4 changes: 2 additions & 2 deletions docs/cosmo_model.rst
@@ -7,7 +7,7 @@ You can use it to test your modifications, to evaluate the cosmological observab

Models are created from the same input as the :func:`~run.run` function: a dictionary containing the same blocks, except for the ``sampler`` block (it can be included, but will be ignored).

So let us create a simple one using the input-generator [LINK!!!!!]: Planck 2015 polarised CMB and lensing with CLASS as a theory code. Let us copy the ``python`` version (you can also copy the ``yaml`` version and load it with :func:`~yaml.yaml_load`).
So let us create a simple one using the :doc:`input generator <cosmo_basic_runs>`: Planck 2015 polarized CMB and lensing with CLASS as a theory code. Let us copy the ``python`` version (you can also copy the ``yaml`` version and load it with :func:`~yaml.yaml_load`).

.. literalinclude:: ./src_examples/cosmo_model/1.py
:language: python
Expand All @@ -17,7 +17,7 @@ Now let's build a model (we will need the path to your modules' installation):
.. literalinclude:: ./src_examples/cosmo_model/2.py
:language: python

To get (log)probabilities and derived parameters for particular parameter values, we can use the different methods of the :class:`model.Model` (see below), to which we pass a dictionary of **sampled** parameter values.
To get (log) probabilities and derived parameters for particular parameter values, we can use the different methods of the :class:`model.Model` (see below), to which we pass a dictionary of **sampled** parameter values.

.. note::

4 changes: 2 additions & 2 deletions docs/cosmo_theories_likes.rst
@@ -6,15 +6,15 @@ Models in Cosmology are usually split in two: :math:`\mathcal{M}=\mathcal{T}+\ma
* :math:`\mathcal{T}`, the *theoretical* model, is used to compute observable quantities :math:`\mathcal{O}`
* :math:`\mathcal{E}`, the *experimental* model, accounts for instrumental errors, foregrounds... when comparing the theoretical observable with some data :math:`\mathcal{D}`.

In practice the theoretical model is encapsulated in a **theory code** (:doc:`CLASS <theory_class>`, :doc:`CAMB <theory_camb>`...) and the experimental model in a **likelihood**, which gives the probability of the data being a realisation of the given observable in the context of the experiment:
In practice the theoretical model is encapsulated in a **theory code** (:doc:`CLASS <theory_class>`, :doc:`CAMB <theory_camb>`...) and the experimental model in a **likelihood**, which gives the probability of the data being a realization of the given observable in the context of the experiment:

.. math::
\mathcal{L}\left[\mathcal{D}\,|\,\mathcal{M}\right] =
\mathcal{L}\left[\mathcal{D}\,|\,\mathcal{O},\mathcal{E}\right]
Each iteration of a sampler reproduces that model along the following steps:
Each iteration of a sampler reproduces the model using the following steps:

#. A new set of theory+experimental parameters is proposed by the sampler.
#. The theory parameters are passed to the theory code, which computes one or more observables.
2 changes: 2 additions & 0 deletions docs/installation.rst
@@ -176,6 +176,8 @@ The recommended way is to get a `GitHub <https://github.com>`_ user and `fork th
$ git clone https://[email protected]/YOUR_USERNAME/cobaya.git
$ pip install --editable cobaya[test] --upgrade
(add the --user option if you don't have write access to the default pip installation location).

Alternatively, you can clone from the official **cobaya** repo (but this way you won't be able to upload your changes!).

.. code:: bash
2 changes: 1 addition & 1 deletion docs/likelihood_gaussian.rst
@@ -1,7 +1,7 @@
``gaussian`` likelihood
=======================

A simple (multi-modal if required) Gaussian likelihood, aimed at testing. The pdf is normalized to 1 when integrated over an infinite domain, regardless of the number of modes.
A simple (multi-modal if required) Gaussian mixture likelihood, aimed at testing. The pdf is normalized to 1 when integrated over an infinite domain, regardless of the number of modes.
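That normalization claim can be checked numerically for a toy two-mode 1D mixture (a standalone sketch, not the likelihood's internal implementation): each equal-weight normal component integrates to 1, so the mixture does too.

```python
import math

def mixture_pdf(x, modes, sigma=1.0):
    # equal-weight mixture of 1D normal pdfs; normalized because each
    # component integrates to 1 and the weights sum to 1
    w = 1.0 / len(modes)
    return sum(w * math.exp(-0.5 * ((x - m) / sigma) ** 2)
               / (sigma * math.sqrt(2 * math.pi)) for m in modes)

# trapezoidal integration over a wide interval approximates the
# infinite-domain integral
h = 0.01
xs = [-20 + h * i for i in range(4001)]
ys = [mixture_pdf(x, modes=[-3.0, 3.0]) for x in xs]
integral = sum(h * 0.5 * (a + b) for a, b in zip(ys, ys[1:]))
# integral is ~1.0, independently of the number of modes
```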

Usage
-----