diff --git a/THANKS.txt b/THANKS.txt
index b208054dc..a1e3bb03f 100644
--- a/THANKS.txt
+++ b/THANKS.txt
@@ -1,24 +1,40 @@
-Many people have contributed to lmfit.
-
-Matthew Newville wrote the original version and maintains the project.
-Till Stensitzki wrote the improved estimates of confidence intervals, and
- contributed many tests, bug fixes, and documentation.
-Daniel B. Allan wrote much of the high level Model code, and many
- improvements to the testing and documentation.
-Antonino Ingargiola wrote much of the high level Model code and provided
- many bug fixes.
-J. J. Helmus wrote the MINUT bounds for leastsq, originally in
- leastsqbounds.py, and ported to lmfit.
-E. O. Le Bigot wrote the uncertainties package, a version of which is used
- by lmfit.
-Michal Rawlik added plotting capabilities for Models.
-A. R. J. Nelson added differential_evolution, emcee, and greatly improved the
- code in the docstrings.
-
-Additional patches, bug fixes, and suggestions have come from Christoph
- Deil, Francois Boulogne, Thomas Caswell, Colin Brosseau, nmearl,
- Gustavo Pasquevich, Clemens Prescher, LiCode, and Ben Gamari.
-
-The lmfit code obviously depends on, and owes a very large debt to the code
-in scipy.optimize. Several discussions on the scipy-user and lmfit mailing
-lists have also led to improvements in this code.
+Many people have contributed to lmfit. The attribution of credit in a project such as
+this is very difficult to get perfect, and there are no doubt important contributions
+missing or under-represented here. Please consider this file as part of the documentation
+that may have bugs that need fixing.
+
+Some of the largest and most important contributions (approximately in order of
+size of contribution to the existing code) are from:
+
+ Matthew Newville wrote the original version and maintains the project.
+
+ Till Stensitzki wrote the improved estimates of confidence intervals, and contributed
+ many tests, bug fixes, and documentation.
+
+ A. R. J. Nelson added differential_evolution, emcee, and greatly improved the code,
+ docstrings, and overall project.
+
+ Daniel B. Allan wrote much of the high level Model code, and many improvements to the
+ testing and documentation.
+
+ Antonino Ingargiola wrote much of the high level Model code and has provided many bug
+ fixes and improvements.
+
+ Renee Otten wrote the brute force method, and has improved the code and documentation
+ in many places.
+
+ Michal Rawlik added plotting capabilities for Models.
+
+ J. J. Helmus wrote the MINUIT-style bounds for leastsq, originally in leastsqbounds.py, and
+ ported to lmfit.
+
+ E. O. Le Bigot wrote the uncertainties package, a version of which is used by lmfit.
+
+
+Additional patches, bug fixes, and suggestions have come from Christoph Deil, Francois
+Boulogne, Thomas Caswell, Colin Brosseau, nmearl, Gustavo Pasquevich, Clemens Prescher,
+LiCode, Ben Gamari, Yoav Roam, Alexander Stark, Alexandre Beelen, and many others.
+
+The lmfit code obviously depends on, and owes a very large debt to the code in
+scipy.optimize. Several discussions on the scipy-user and lmfit mailing lists have also
+led to improvements in this code.
diff --git a/doc/_templates/indexsidebar.html b/doc/_templates/indexsidebar.html
index ceb1a92aa..ff3ca9aa7 100644
--- a/doc/_templates/indexsidebar.html
+++ b/doc/_templates/indexsidebar.html
@@ -1,10 +1,8 @@
Getting LMFIT
Current version: {{ release }}
-Download: PyPI (Python.org)
Install: pip install lmfit
-
-
Development version:
- github.com
+
Download: PyPI (Python.org)
+
Develop: github.com
Questions?
@@ -12,13 +10,11 @@ Questions?
Mailing List
Getting Help
-Off-line Documentation
+Static, off-line docs
[PDF
|EPUB
-|HTML(zip)
-]
-
+|HTML(zip)]
diff --git a/doc/builtin_models.rst b/doc/builtin_models.rst
index b3c7d0571..72d299de9 100644
--- a/doc/builtin_models.rst
+++ b/doc/builtin_models.rst
@@ -55,263 +55,82 @@ methods for all of these make a fairly crude guess for the value of
:class:`GaussianModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: GaussianModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model based on a `Gaussian or normal distribution lineshape
-`_. Parameter names:
-``amplitude``, ``center``, and ``sigma``.
-In addition, parameters ``fwhm`` and ``height`` are included as constraints
-to report full width at half maximum and maximum peak height, respectively.
-
-.. math::
-
- f(x; A, \mu, \sigma) = \frac{A}{\sigma\sqrt{2\pi}} e^{[{-{(x-\mu)^2}/{{2\sigma}^2}}]}
-
-where the parameter ``amplitude`` corresponds to :math:`A`, ``center`` to
-:math:`\mu`, and ``sigma`` to :math:`\sigma`. The full width at
-half maximum is :math:`2\sigma\sqrt{2\ln{2}}`, approximately
-:math:`2.3548\sigma`.
-
+.. autoclass:: GaussianModel
:class:`LorentzianModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: LorentzianModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model based on a `Lorentzian or Cauchy-Lorentz distribution function
-`_. Parameter names:
-``amplitude``, ``center``, and ``sigma``.
-In addition, parameters ``fwhm`` and ``height`` are included as constraints
-to report full width at half maximum and maximum peak height, respectively.
-
-.. math::
-
- f(x; A, \mu, \sigma) = \frac{A}{\pi} \big[\frac{\sigma}{(x - \mu)^2 + \sigma^2}\big]
-
-where the parameter ``amplitude`` corresponds to :math:`A`, ``center`` to
-:math:`\mu`, and ``sigma`` to :math:`\sigma`. The full width at
-half maximum is :math:`2\sigma`.
+.. autoclass:: LorentzianModel
:class:`VoigtModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: VoigtModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model based on a `Voigt distribution function
-`_. Parameter names:
-``amplitude``, ``center``, and ``sigma``. A ``gamma`` parameter is also
-available. By default, it is constrained to have value equal to ``sigma``,
-though this can be varied independently. In addition, parameters ``fwhm``
-and ``height`` are included as constraints to report full width at half
-maximum and maximum peak height, respectively. The definition for the
-Voigt function used here is
-
-.. math::
-
- f(x; A, \mu, \sigma, \gamma) = \frac{A \textrm{Re}[w(z)]}{\sigma\sqrt{2 \pi}}
-
-where
-
-.. math::
- :nowrap:
-
- \begin{eqnarray*}
- z &=& \frac{x-\mu +i\gamma}{\sigma\sqrt{2}} \\
- w(z) &=& e^{-z^2}{\operatorname{erfc}}(-iz)
- \end{eqnarray*}
-
-and :func:`erfc` is the complimentary error function. As above,
-``amplitude`` corresponds to :math:`A`, ``center`` to
-:math:`\mu`, and ``sigma`` to :math:`\sigma`. The parameter ``gamma``
-corresponds to :math:`\gamma`.
-If ``gamma`` is kept at the default value (constrained to ``sigma``),
-the full width at half maximum is approximately :math:`3.6013\sigma`.
+.. autoclass:: VoigtModel
:class:`PseudoVoigtModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: PseudoVoigtModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-a model based on a `pseudo-Voigt distribution function
-`_,
-which is a weighted sum of a Gaussian and Lorentzian distribution functions
-with that share values for ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`)
-and full width at half maximum (and so have constrained values of
-``sigma`` (:math:`\sigma`). A parameter ``fraction`` (:math:`\alpha`)
-controls the relative weight of the Gaussian and Lorentzian components,
-giving the full definition of
-
-.. math::
-
- f(x; A, \mu, \sigma, \alpha) = \frac{(1-\alpha)A}{\sigma_g\sqrt{2\pi}} e^{[{-{(x-\mu)^2}/{{2\sigma_g}^2}}]}
- + \frac{\alpha A}{\pi} \big[\frac{\sigma}{(x - \mu)^2 + \sigma^2}\big]
-
-where :math:`\sigma_g = {\sigma}/{\sqrt{2\ln{2}}}` so that the full width
-at half maximum of each component and of the sum is :math:`2\sigma`. The
-:meth:`guess` function always sets the starting value for ``fraction`` at 0.5.
+.. autoclass:: PseudoVoigtModel
:class:`MoffatModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: MoffatModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-a model based on a `Moffat distribution function
-`_, the parameters are
-``amplitude`` (:math:`A`), ``center`` (:math:`\mu`),
-a width parameter ``sigma`` (:math:`\sigma`) and an exponent ``beta`` (:math:`\beta`).
-For (:math:`\beta=1`) the Moffat has a Lorentzian shape.
-
-.. math::
-
- f(x; A, \mu, \sigma, \beta) = A \big[(\frac{x-\mu}{\sigma})^2+1\big]^{-\beta}
-
-the full width have maximum is :math:`2\sigma\sqrt{2^{1/\beta}-1}`.
-:meth:`guess` function always sets the starting value for ``beta`` to 1.
+.. autoclass:: MoffatModel
:class:`Pearson7Model`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: Pearson7Model(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model based on a `Pearson VII distribution
-`_.
-This is a Lorenztian-like distribution function. It has the usual
-parameters ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and
-``sigma`` (:math:`\sigma`), and also an ``exponent`` (:math:`m`) in
-
-.. math::
-
- f(x; A, \mu, \sigma, m) = \frac{A}{\sigma{\beta(m-\frac{1}{2}, \frac{1}{2})}} \bigl[1 + \frac{(x-\mu)^2}{\sigma^2} \bigr]^{-m}
-
-where :math:`\beta` is the beta function (see :scipydoc:`special.beta` in
-:mod:`scipy.special`). The :meth:`guess` function always
-gives a starting value for ``exponent`` of 1.5.
+.. autoclass:: Pearson7Model
:class:`StudentsTModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: StudentsTModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model based on a `Student's t distribution function
-`_, with the usual
-parameters ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and
-``sigma`` (:math:`\sigma`) in
-
-.. math::
-
- f(x; A, \mu, \sigma) = \frac{A \Gamma(\frac{\sigma+1}{2})} {\sqrt{\sigma\pi}\,\Gamma(\frac{\sigma}{2})} \Bigl[1+\frac{(x-\mu)^2}{\sigma}\Bigr]^{-\frac{\sigma+1}{2}}
-
-
-where :math:`\Gamma(x)` is the gamma function.
+.. autoclass:: StudentsTModel
:class:`BreitWignerModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: BreitWignerModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model based on a `Breit-Wigner-Fano function
-`_. It has the usual
-parameters ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and
-``sigma`` (:math:`\sigma`), plus ``q`` (:math:`q`) in
-
-.. math::
-
- f(x; A, \mu, \sigma, q) = \frac{A (q\sigma/2 + x - \mu)^2}{(\sigma/2)^2 + (x - \mu)^2}
+.. autoclass:: BreitWignerModel
:class:`LognormalModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: LognormalModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model based on the `Log-normal distribution function
-`_.
-It has the usual parameters
-``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and ``sigma``
-(:math:`\sigma`) in
-
-.. math::
-
- f(x; A, \mu, \sigma) = \frac{A e^{-(\ln(x) - \mu)/ 2\sigma^2}}{x}
+.. autoclass:: LognormalModel
:class:`DampedOscillatorModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: DampedOcsillatorModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model based on the `Damped Harmonic Oscillator Amplitude
-`_.
-It has the usual parameters ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and
-``sigma`` (:math:`\sigma`) in
+.. autoclass:: DampedOscillatorModel
-.. math::
+:class:`DampedHarmonicOscillatorModel`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- f(x; A, \mu, \sigma) = \frac{A}{\sqrt{ [1 - (x/\mu)^2]^2 + (2\sigma x/\mu)^2}}
+.. autoclass:: DampedHarmonicOscillatorModel
:class:`ExponentialGaussianModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: ExponentialGaussianModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model of an `Exponentially modified Gaussian distribution
-`_.
-It has the usual parameters ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and
-``sigma`` (:math:`\sigma`), and also ``gamma`` (:math:`\gamma`) in
-
-.. math::
+.. autoclass:: ExponentialGaussianModel
- f(x; A, \mu, \sigma, \gamma) = \frac{A\gamma}{2}
- \exp\bigl[\gamma({\mu - x + \gamma\sigma^2/2})\bigr]
- {\operatorname{erfc}}\Bigl(\frac{\mu + \gamma\sigma^2 - x}{\sqrt{2}\sigma}\Bigr)
-
-
-where :func:`erfc` is the complimentary error function.
:class:`SkewedGaussianModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: SkewedGaussianModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A variation of the above model, this is a `Skewed normal distribution
-`_.
-It has the usual parameters ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and
-``sigma`` (:math:`\sigma`), and also ``gamma`` (:math:`\gamma`) in
-
-.. math::
-
- f(x; A, \mu, \sigma, \gamma) = \frac{A}{\sigma\sqrt{2\pi}}
- e^{[{-{(x-\mu)^2}/{{2\sigma}^2}}]} \Bigl\{ 1 +
- {\operatorname{erf}}\bigl[
- \frac{\gamma(x-\mu)}{\sigma\sqrt{2}}
- \bigr] \Bigr\}
-
-
-where :func:`erf` is the error function.
+.. autoclass:: SkewedGaussianModel
:class:`DonaichModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: DonaichModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model of an `Doniach Sunjic asymmetric lineshape
-`_, used in
-photo-emission. With the usual parameters ``amplitude`` (:math:`A`),
-``center`` (:math:`\mu`) and ``sigma`` (:math:`\sigma`), and also ``gamma``
-(:math:`\gamma`) in
-
-.. math::
-
- f(x; A, \mu, \sigma, \gamma) = A\frac{\cos\bigl[\pi\gamma/2 + (1-\gamma)
- \arctan{(x - \mu)}/\sigma\bigr]} {\bigr[1 + (x-\mu)/\sigma\bigl]^{(1-\gamma)/2}}
-
+.. autoclass:: DonaichModel
Linear and Polynomial Models
------------------------------------
@@ -324,67 +143,22 @@ of many components of composite model.
:class:`ConstantModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: ConstantModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
- A class that consists of a single value, ``c``. This is constant in the
- sense of having no dependence on the independent variable ``x``, not in
- the sense of being non-varying. To be clear, ``c`` will be a variable
- Parameter.
+.. autoclass:: ConstantModel
:class:`LinearModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: LinearModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
- A class that gives a linear model:
-
-.. math::
-
- f(x; m, b) = m x + b
-
-with parameters ``slope`` for :math:`m` and ``intercept`` for :math:`b`.
-
+.. autoclass:: LinearModel
:class:`QuadraticModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: QuadraticModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-
- A class that gives a quadratic model:
-
-.. math::
-
- f(x; a, b, c) = a x^2 + b x + c
-
-with parameters ``a``, ``b``, and ``c``.
-
-
-:class:`ParabolicModel`
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. class:: ParabolicModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
- Same as :class:`QuadraticModel`.
-
+.. autoclass:: QuadraticModel
:class:`PolynomialModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. class:: PolynomialModel(degree, missing=None[, prefix=''[, name=None[, **kws]]])
-
- A class that gives a polynomial model up to ``degree`` (with maximum
- value of 7).
-
-.. math::
-
- f(x; c_0, c_1, \ldots, c_7) = \sum_{i=0, 7} c_i x^i
-
-with parameters ``c0``, ``c1``, ..., ``c7``. The supplied ``degree``
-will specify how many of these are actual variable parameters. This uses
-:numpydoc:`polyval` for its calculation of the polynomial.
-
+.. autoclass:: PolynomialModel
Step-like models
@@ -395,55 +169,13 @@ Two models represent step-like functions, and share many characteristics.
:class:`StepModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: StepModel(form='linear'[, missing=None[, prefix=''[, name=None[, **kws]]]])
-
-A model based on a Step function, with four choices for functional form.
-The step function starts with a value 0, and ends with a value of :math:`A`
-(``amplitude``), rising to :math:`A/2` at :math:`\mu` (``center``),
-with :math:`\sigma` (``sigma``) setting the characteristic width. The
-supported functional forms are ``linear`` (the default), ``atan`` or
-``arctan`` for an arc-tangent function, ``erf`` for an error function, or
-``logistic`` for a `logistic function `_.
-The forms are
-
-.. math::
- :nowrap:
-
- \begin{eqnarray*}
- & f(x; A, \mu, \sigma, {\mathrm{form={}'linear{}'}}) & = A \min{[1, \max{(0, \alpha)}]} \\
- & f(x; A, \mu, \sigma, {\mathrm{form={}'arctan{}'}}) & = A [1/2 + \arctan{(\alpha)}/{\pi}] \\
- & f(x; A, \mu, \sigma, {\mathrm{form={}'erf{}'}}) & = A [1 + {\operatorname{erf}}(\alpha)]/2 \\
- & f(x; A, \mu, \sigma, {\mathrm{form={}'logistic{}'}})& = A [1 - \frac{1}{1 + e^{\alpha}} ]
- \end{eqnarray*}
+.. autoclass:: StepModel
-where :math:`\alpha = (x - \mu)/{\sigma}`.
:class:`RectangleModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. class:: RectangleModel(form='linear'[, missing=None[, prefix=''[, name=None[, **kws]]]])
-
-A model based on a Step-up and Step-down function of the same form. The
-same choices for functional form as for :class:`StepModel` are supported,
-with ``linear`` as the default. The function starts with a value 0, and
-ends with a value of :math:`A` (``amplitude``), rising to :math:`A/2` at
-:math:`\mu_1` (``center1``), with :math:`\sigma_1` (``sigma1``) setting the
-characteristic width. It drops to rising to :math:`A/2` at :math:`\mu_2`
-(``center2``), with characteristic width :math:`\sigma_2` (``sigma2``).
-
-.. math::
- :nowrap:
-
- \begin{eqnarray*}
- &f(x; A, \mu, \sigma, {\mathrm{form={}'linear{}'}}) &= A \{ \min{[1, \max{(0, \alpha_1)}]} + \min{[-1, \max{(0, \alpha_2)}]} \} \\
- &f(x; A, \mu, \sigma, {\mathrm{form={}'arctan{}'}}) &= A [\arctan{(\alpha_1)} + \arctan{(\alpha_2)}]/{\pi} \\
- &f(x; A, \mu, \sigma, {\mathrm{form={}'erf{}'}}) &= A [{\operatorname{erf}}(\alpha_1) + {\operatorname{erf}}(\alpha_2)]/2 \\
- &f(x; A, \mu, \sigma, {\mathrm{form={}'logistic{}'}}) &= A [1 - \frac{1}{1 + e^{\alpha_1}} - \frac{1}{1 + e^{\alpha_2}} ]
- \end{eqnarray*}
-
-
-where :math:`\alpha_1 = (x - \mu_1)/{\sigma_1}` and :math:`\alpha_2 = -(x - \mu_2)/{\sigma_2}`.
+.. autoclass:: RectangleModel
Exponential and Power law models
@@ -452,31 +184,12 @@ Exponential and Power law models
:class:`ExponentialModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: ExponentialModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model based on an `exponential decay function
-`_. With parameters named
-``amplitude`` (:math:`A`), and ``decay`` (:math:`\tau`), this has the form:
-
-.. math::
-
- f(x; A, \tau) = A e^{-x/\tau}
-
+.. autoclass:: ExponentialModel
:class:`PowerLawModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: PowerLawModel(missing=None[, prefix=''[, name=None[, **kws]]])
-
-A model based on a `Power Law `_.
-With parameters
-named ``amplitude`` (:math:`A`), and ``exponent`` (:math:`k`), this has the
-form:
-
-.. math::
-
- f(x; A, k) = A x^k
-
+.. autoclass:: PowerLawModel
User-defined Models
----------------------------
@@ -501,19 +214,7 @@ mathematical constraints as discussed in :ref:`constraints_chapter`.
:class:`ExpressionModel`
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. class:: ExpressionModel(expr, independent_vars=None, init_script=None, **kws)
-
- A model using the user-supplied mathematical expression, which can be nearly any valid Python expresion.
-
- :param expr: Expression use to build model.
- :type expr: string
- :param independent_vars: List of argument names in expression that are independent variables.
- :type independent_vars: ``None`` (default) or list of strings for independent variables
- :param init_script: Python script to run before parsing and evaluating expression.
- :type init_script: ``None`` (default) or string
-
-with other parameters passed to :class:`model.Model`, with the notable
-exception that :class:`ExpressionModel` does **not** support the `prefix` argument.
+.. autoclass:: ExpressionModel
Since the point of this model is that an arbitrary expression will be
supplied, the determination of what are the parameter names for the model
@@ -544,7 +245,6 @@ To evaluate this model, you might do the following::
>>> params = mod.make_params(off=0.25, amp=1.0, x0=2.0, phase=0.04)
>>> y = mod.eval(params, x=x)
-
While many custom models can be built with a single line expression
(especially since the names of the lineshapes like `gaussian`, `lorentzian`
and so on, as well as many numpy functions, are available), more complex
diff --git a/doc/conf.py b/doc/conf.py
index 71ce44866..e481295bd 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -15,7 +15,8 @@
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
-sys.path.append(os.path.abspath(os.path.join('..', 'lmfit')))
+sys.path.insert(0, os.path.abspath('../'))
+# sys.path.append(os.path.abspath(os.path.join('..', 'lmfit')))
sys.path.append(os.path.abspath(os.path.join('.', 'sphinx')))
sys.path.append(os.path.abspath(os.path.join('.')))
# -- General configuration -----------------------------------------------------
@@ -24,19 +25,12 @@
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
from extensions import extensions
-extensions = ['sphinx.ext.extlinks',
+ssoextensions = ['sphinx.ext.extlinks',
'sphinx.ext.autodoc',
'sphinx.ext.napoleon',
'sphinx.ext.mathjax']
autoclass_content = 'both'
-#
-# try:
-# import IPython.sphinxext.ipython_directive
-# extensions.extend(['IPython.sphinxext.ipython_directive',
-# 'IPython.sphinxext.ipython_console_highlighting'])
-# except ImportError:
-# pass
intersphinx_mapping = {'py': ('http://docs.python.org/2', None),
'numpy': ('http://docs.scipy.org/doc/numpy/', None),
@@ -68,29 +62,9 @@
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-sys.path.insert(0, os.path.abspath('../'))
-try:
- import lmfit
- release = lmfit.__version__
-# The full version, including alpha/beta/rc tags.
-except ImportError:
- release = 'latest'
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#language = None
-
-# There are two options for replacing |today|: either, you set today to some
-# non-false value, then it is used:
-#today = ''
-# Else, today_fmt is used as the format for a strftime call.
-#today_fmt = '%B %d, %Y'
-
-# List of documents that shouldn't be included in the build.
-#unused_docs = []
+
+import lmfit
+release = lmfit.__version__
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
diff --git a/doc/extensions.py b/doc/extensions.py
index 40de659bf..1bc8b9f96 100644
--- a/doc/extensions.py
+++ b/doc/extensions.py
@@ -1,10 +1,9 @@
# sphinx extensions for mathjax
+
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.intersphinx',
- 'numpydoc']
-mathjax = 'sphinx.ext.mathjax'
-pngmath = 'sphinx.ext.pngmath'
-
-extensions.append(mathjax)
+ 'sphinx.ext.extlinks',
+ 'sphinx.ext.napoleon',
+ 'sphinx.ext.mathjax']
diff --git a/doc/fitting.rst b/doc/fitting.rst
index 442a4be3d..7db5c3156 100644
--- a/doc/fitting.rst
+++ b/doc/fitting.rst
@@ -2,6 +2,7 @@
.. module:: lmfit.minimizer
+
=======================================
Performing Fits, Analyzing Outputs
=======================================
@@ -23,7 +24,6 @@ details on writing the objective.
.. autofunction:: minimize
-
.. _fit-func-label:
Writing a Fitting Function
model. For the other methods, the return value can either be a scalar or an array. If an
array is returned, the sum of squares of the array will be sent to the underlying fitting
method, effectively doing a least-squares optimization of the return values.
-
Since the function will be passed in a dictionary of :class:`Parameters`, it is advisable
to unpack these to get numerical values at the top of the function. A
simple way to do this is with :meth:`Parameters.valuesdict`, as shown below::
@@ -297,7 +296,7 @@ These are calculated as:
\begin{eqnarray*}
{\rm aic} &=& N \ln(\chi^2/N) + 2 N_{\rm varys} \\
- {\rm bic} &=& N \ln(\chi^2/N) + \ln(N) *N_{\rm varys} \\
+ {\rm bic} &=& N \ln(\chi^2/N) + \ln(N) N_{\rm varys} \\
\end{eqnarray*}
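The two formulas above can be checked numerically. The sketch below is an illustration only, not lmfit's implementation; the names ``chisqr``, ``ndata``, and ``nvarys`` are assumed here to mirror the fit-statistics attributes discussed in this chapter:

```python
import numpy as np

def information_criteria(chisqr, ndata, nvarys):
    """Return (aic, bic) from chi-square, number of data points,
    and number of varied parameters, per the formulas above."""
    neg2_log_likelihood = ndata * np.log(chisqr / ndata)
    aic = neg2_log_likelihood + 2 * nvarys
    bic = neg2_log_likelihood + np.log(ndata) * nvarys
    return aic, bic
```

Note that when ``chisqr`` equals ``ndata`` (reduced chi-square of 1), the first term vanishes and the criteria reduce to the parameter-count penalties alone, with bic penalizing extra parameters more heavily once ``ndata`` exceeds about 7.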
@@ -500,24 +499,9 @@ Getting and Printing Fit Reports
.. currentmodule:: lmfit.printfuncs
-.. function:: fit_report(result, modelpars=None, show_correl=True, min_correl=0.1)
-
- Generate and return text of report of best-fit values, uncertainties,
- and correlations from fit.
-
- :param result: :class:`MinimizerResult` object as returned by :func:`minimize`.
- :param modelpars: Parameters with "Known Values" (optional, default None)
- :param show_correl: Whether to show list of sorted correlations [``True``]
- :param min_correl: Smallest correlation absolute value to show [0.1]
-
- If the first argument is a :class:`Parameters` object,
- goodness-of-fit statistics will not be included.
-
-.. function:: report_fit(result, modelpars=None, show_correl=True, min_correl=0.1)
-
- Print text of report from :func:`fit_report`.
+.. autofunction:: fit_report
-An example fit with report would be
+An example using this to write out a fit report would be
.. literalinclude:: ../examples/doc_withreport.py
diff --git a/doc/installation.rst b/doc/installation.rst
index 05b6f6ce9..b9afbc7bb 100644
--- a/doc/installation.rst
+++ b/doc/installation.rst
@@ -3,24 +3,28 @@ Downloading and Installation
====================================
.. _lmfit github repository: http://github.com/lmfit/lmfit-py
-.. _Python Setup Tools: http://pypi.python.org/pypi/setuptools
-.. _pip: https://pip.pypa.io/
-.. _nose: http://nose.readthedocs.org/
+.. _nose: http://nose.readthedocs.org/
+.. _pytest: http://pytest.org/
+.. _emcee: http://dan.iel.fm/emcee/
+.. _pandas: http://pandas.pydata.org/
+.. _jupyter: http://jupyter.org/
+.. _matplotlib: http://matplotlib.org/
Prerequisites
~~~~~~~~~~~~~~~
The lmfit package requires Python, Numpy, and Scipy.
-Lmfit works with Python 2.7, 3.3, 3.4, and 3.5. Support for Python 2.6
+Lmfit works with Python 2.7, 3.3, 3.4, 3.5, and 3.6. Support for Python 2.6
ended with lmfit version 0.9.4. Scipy version 0.15 or higher is required,
with 0.17 or higher recommended to be able to use the latest optimization
features from scipy. Numpy version 1.5 or higher is required.
-In order to run the test suite, the `nose`_ framework is required. Some
-parts of lmfit will be able to make use of IPython (version 4 or higher),
-matplotlib, and pandas if those libraries are installed, but no core
-functionality of lmfit requires these.
+In order to run the test suite, either the `nose`_ or `pytest`_ package is
+required. Some functionality of lmfit requires the `emcee`_ package, and
+some functionality will make use of the `pandas`_, `Jupyter`_, or
+`matplotlib`_ packages if they are available. We highly recommend each of
+these packages.
Downloads
@@ -32,11 +36,11 @@ The latest stable version of lmfit is |release| is available from `PyPi
Installation
~~~~~~~~~~~~~~~~~
-If you have `pip`_ installed, you can install lmfit with::
+With pip now widely available, you can install lmfit with::
pip install lmfit
-or you can download the source kit, unpack it and install with::
+Alternatively, you can download the source kit, unpack it and install with::
python setup.py install
@@ -44,8 +48,6 @@ For Anaconda Python, lmfit is not an official packages, but several
Anaconda channels provide it, allowing installation with (for example)::
conda install -c conda-forge lmfit
- conda install -c newville lmfit
-
Development Version
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -62,14 +64,16 @@ and install using::
Testing
~~~~~~~~~~
-A battery of tests scripts that can be run with the `nose`_ testing
-framework is distributed with lmfit in the ``tests`` folder. These are
-routinely run on the development version. Running ``nosetests`` should run
-all of these tests to completion without errors or failures.
+A battery of test scripts that can be run with either the `nose`_ or
+`pytest`_ testing framework is distributed with lmfit in the ``tests``
+folder. These are automatically run as part of the development process.
+For any release or any master branch from the git repository, running
+``pytest`` or ``nosetests`` should run all of these tests to completion
+without errors or failures.
Many of the examples in this documentation are distributed with lmfit in
-the ``examples`` folder, and should also run for you. Many of these require
-
+the ``examples`` folder, and should also run for you. Some of these
+examples assume `matplotlib`_ has been installed and is working correctly.
Acknowledgements
~~~~~~~~~~~~~~~~~~
diff --git a/doc/model.rst b/doc/model.rst
index 34d4ea471..43e78b95d 100644
--- a/doc/model.rst
+++ b/doc/model.rst
@@ -8,20 +8,24 @@ Modeling Data and Curve Fitting
A common use of least-squares minimization is *curve fitting*, where one
has a parametrized model function meant to explain some phenomena and wants
-to adjust the numerical values for the model to most closely match some
-data. With :mod:`scipy`, such problems are commonly solved with
-:scipydoc:`optimize.curve_fit`, which is a wrapper around
-:scipydoc:`optimize.leastsq`. Since lmfit's :func:`~lmfit.minimizer.minimize` is also
-a high-level wrapper around :scipydoc:`optimize.leastsq` it can be used
-for curve-fitting problems, but requires more effort than using
-:scipydoc:`optimize.curve_fit`.
-
-
-Here we discuss lmfit's :class:`Model` class. This takes a model function
--- a function that calculates a model for some data -- and provides methods
-to create parameters for that model and to fit data using that model
-function. This is closer in spirit to :scipydoc:`optimize.curve_fit`,
-but with the advantages of using :class:`~lmfit.parameter.Parameters` and lmfit.
+to adjust the numerical values for the model so that it most closely
+matches some data. With :mod:`scipy`, such problems are typically solved
+with :scipydoc:`optimize.curve_fit`, which is a wrapper around
+:scipydoc:`optimize.leastsq`. Since lmfit's
+:func:`~lmfit.minimizer.minimize` is also a high-level wrapper around
+:scipydoc:`optimize.leastsq` it can be used for curve-fitting problems.
+While it offers many benefits over :scipydoc:`optimize.leastsq`, using
+:func:`~lmfit.minimizer.minimize` for many curve-fitting problems still
+requires more effort than using :scipydoc:`optimize.curve_fit`.
+
+The :class:`Model` class in lmfit provides a simple and flexible approach
+to curve-fitting problems. Like :scipydoc:`optimize.curve_fit`, a
+:class:`Model` uses a *model function* -- a function that is meant to
+calculate a model for some phenomenon -- and then uses that to best match
+an array of supplied data. Beyond that similarity, its interface is rather
+different from :scipydoc:`optimize.curve_fit`, for example in that it uses
+:class:`~lmfit.parameter.Parameters`, but also offers several other
+important advantages.
In addition to allowing you to turn any model function into a curve-fitting
method, lmfit also provides canonical definitions for many known line shapes
turning Python functions into high-level fitting models with the
:class:`Model` class, and using these to fit data.
-Example: Fit data to Gaussian profile
-================================================
+Motivation and simple example: Fit data to Gaussian profile
+=============================================================
Let's start with a simple and common example of fitting data to a Gaussian
peak. As we will see, there is a built-in :class:`GaussianModel` class that
-provides a model function for a Gaussian profile, but here we'll build our
-own. We start with a simple definition of the model function:
+can help do this, but here we'll build our own. We start with a simple
+definition of the model function:
>>> from numpy import sqrt, pi, exp, linspace
>>>
@@ -48,12 +52,12 @@ own. We start with a simple definition of the model function:
... return amp * exp(-(x-cen)**2 /wid)
...
-We want to fit this objective function to data :math:`y(x)` represented by the
-arrays ``y`` and ``x``. This can be done easily with :scipydoc:`optimize.curve_fit`::
+We want to use this function to fit to data :math:`y(x)` represented by the
+arrays ``y`` and ``x``. With :scipydoc:`optimize.curve_fit`, this would be::
>>> from scipy.optimize import curve_fit
>>>
- >>> x = linspace(-10,10)
+ >>> x = linspace(-10,10, 101)
>>> y = gaussian(x, 2.33, 0.21, 1.51) + np.random.normal(0, 0.2, len(x))
>>>
>>> init_vals = [1, 0, 1] # for [amp, cen, wid]
@@ -61,46 +65,37 @@ arrays ``y`` and ``x``. This can be done easily with :scipydoc:`optimize.curve_
>>> print best_vals
-We sample random data point, make an initial guess of the model
-values, and run :scipydoc:`optimize.curve_fit` with the model function,
-data arrays, and initial guesses. The results returned are the optimal
-values for the parameters and the covariance matrix. It's simple and very
-useful. But it misses the benefits of lmfit.
-
-
-To solve this with lmfit we would have to write an objective function. But
-such a function would be fairly simple (essentially, ``data - model``,
-possibly with some weighting), and we would need to define and use
-appropriately named parameters. Though convenient, it is somewhat of a
-burden to keep the named parameter straight (on the other hand, with
-:scipydoc:`optimize.curve_fit` you are required to remember the parameter
-order). After doing this a few times it appears as a recurring pattern,
-and we can imagine automating this process. That's where the
-:class:`Model` class comes in.
+That is, we create data, make an initial guess of the model values, and run
+:scipydoc:`optimize.curve_fit` with the model function, data arrays, and
+initial guesses. The results returned are the optimal values for the
+parameters and the covariance matrix. It's simple and useful, but it
+misses the benefits of lmfit.
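For reference, the snippets above can be collected into one self-contained script. The explicit ``numpy`` import and the random seed are additions here, so that this sketch is reproducible:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid):
    """Gaussian-shaped model function from the text (not normalized)."""
    return amp * np.exp(-(x - cen)**2 / wid)

rng = np.random.RandomState(42)   # seeded so the example is reproducible
x = np.linspace(-10, 10, 101)
y = gaussian(x, 2.33, 0.21, 1.51) + rng.normal(0, 0.2, len(x))

init_vals = [1, 0, 1]             # initial guesses for [amp, cen, wid]
best_vals, covar = curve_fit(gaussian, x, y, p0=init_vals)
print(best_vals)
```

The printed values should be close to the true ``[2.33, 0.21, 1.51]``, with the covariance matrix giving the parameter uncertainties.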
-:class:`Model` allows us to easily wrap a model function such as the
-``gaussian`` function. This automatically generate the appropriate
-residual function, and determines the corresponding parameter names from
-the function signature itself::
+With lmfit, we create a :class:`Model` that wraps the ``gaussian`` model
+function, which automatically generates the appropriate residual function
+and determines the corresponding parameter names from the function
+signature itself::
>>> from lmfit import Model
- >>> gmod = Model(gaussian)
- >>> gmod.param_names
+ >>> gmodel = Model(gaussian)
+ >>> gmodel.param_names
set(['amp', 'wid', 'cen'])
- >>> gmod.independent_vars)
+    >>> gmodel.independent_vars
['x']
-The Model ``gmod`` knows the names of the parameters and the independent
-variables. By default, the first argument of the function is taken as the
-independent variable, held in :attr:`independent_vars`, and the rest of the
-functions positional arguments (and, in certain cases, keyword arguments --
-see below) are used for Parameter names. Thus, for the ``gaussian``
-function above, the parameters are named ``amp``, ``cen``, and ``wid``, and
-``x`` is the independent variable -- all taken directly from the signature
-of the model function. As we will see below, you can specify what the
-independent variable is, and you can add or alter parameters, too.
-
-The parameters are *not* created when the model is created. The model knows
+As you can see, the Model ``gmodel`` determined the names of the parameters
+and the independent variables. By default, the first argument of the
+function is taken as the independent variable, held in
+:attr:`independent_vars`, and the rest of the function's positional
+arguments (and, in certain cases, keyword arguments -- see below) are used
+for Parameter names. Thus, for the ``gaussian`` function above, the
+independent variable is ``x``, and the parameters are named ``amp``,
+``cen``, and ``wid`` -- all taken directly from the signature of the model
+function. As we will see below, you can modify this default determination
+of the independent variable and of which function arguments should be
+identified as parameter names.
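The introspection described here can be sketched with Python's standard :mod:`inspect` module. This is only an illustration of the idea, not lmfit's actual implementation:

```python
import inspect

def gaussian(x, amp, cen, wid):
    # the body is irrelevant to the introspection; only the signature matters
    return amp * 2.0 ** (-(x - cen)**2 / wid)

sig = inspect.signature(gaussian)
arg_names = list(sig.parameters)
independent_vars = arg_names[:1]   # by default, the first argument
param_names = arg_names[1:]        # remaining arguments become Parameter names
print(independent_vars, param_names)  # ['x'] ['amp', 'cen', 'wid']
```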
+
+The Parameters are *not* created when the model is created. The model knows
what the parameters should be named, but not anything about the scale and
range of your data. You will normally have to make these parameters and
assign initial values and other attributes. To help you do this, each
@@ -109,11 +104,10 @@ the expected names:
>>> params = gmod.make_params()
-This creates the :class:`~lmfit.parameter.Parameters` but doesn't necessarily give them
-initial values -- again, the model has no idea what the scale should be.
-You can set initial values for parameters with keyword arguments to
-:meth:`make_params`:
-
+This creates the :class:`~lmfit.parameter.Parameters` but does not
+automatically give them initial values, since the model has no idea what
+the scale should be. You can set initial values for parameters with
+keyword arguments to :meth:`make_params`:
>>> params = gmod.make_params(cen=5, amp=200, wid=1)
@@ -128,15 +122,24 @@ For example, one could use :meth:`eval` to calculate the predicted
function::
>>> x = linspace(0, 10, 201)
- >>> y = gmod.eval(x=x, amp=10, cen=6.2, wid=0.75)
+ >>> y = gmod.eval(params, x=x)
+
+or with::
+
+ >>> y = gmod.eval(x=x, cen=6.5, amp=100, wid=2.0)
Admittedly, this is a slightly long-winded way to calculate a Gaussian
-function. But now that the model is set up, we can also use its
+function, given that you could have called your ``gaussian`` function
+directly. But now that the model is set up, we can use its
:meth:`fit` method to fit this model to data, as with::
- >>> result = gmod.fit(y, x=x, amp=5, cen=5, wid=1)
+    >>> result = gmod.fit(y, params, x=x)
-Putting everything together, the script to do such a fit (included in the
+or with::
+
+    >>> result = gmod.fit(y, x=x, cen=6.5, amp=100, wid=2.0)
+
+Putting everything together, the script to do such a fit (included in the
``examples`` folder with the source code) is:
.. literalinclude:: ../examples/doc_model1.py
@@ -146,9 +149,9 @@ a :class:`ModelResult` object. As we will see below, this has many
components, including a :meth:`fit_report` method, which will show::
[[Model]]
- gaussian
+ Model(gaussian)
[[Fit Statistics]]
- # function evals = 33
+ # function evals = 31
# data points = 101
# variables = 3
chi-square = 3.409
@@ -156,16 +159,17 @@ components, including a :meth:`fit_report` method, which will show::
Akaike info crit = -336.264
Bayesian info crit = -328.418
[[Variables]]
- amp: 8.88021829 +/- 0.113594 (1.28%) (init= 5)
- cen: 5.65866102 +/- 0.010304 (0.18%) (init= 5)
- wid: 0.69765468 +/- 0.010304 (1.48%) (init= 1)
+ amp: 5.07800631 +/- 0.064957 (1.28%) (init= 5)
+ cen: 5.65866112 +/- 0.010304 (0.18%) (init= 5)
+ wid: 0.97344373 +/- 0.028756 (2.95%) (init= 1)
[[Correlations]] (unreported correlations are < 0.100)
- C(amp, wid) = 0.577
+ C(amp, wid) = -0.577
-The result will also have :attr:`init_fit` for the fit with the initial
-parameter values and a :attr:`best_fit` for the fit with the best fit
-parameter values. These can be used to generate the following plot:
+As the script shows, the result will also have an :attr:`init_fit` for the
+fit with the initial parameter values and a :attr:`best_fit` for the fit
+with the best-fit parameter values. These can be used to generate the
+following plot:
.. image:: _images/model_fit1.png
:target: _images/model_fit1.png
@@ -174,21 +178,18 @@ parameter values. These can be used to generate the following plot:
which shows the data in blue dots, the best fit as a solid red line, and
the initial fit as a dashed black line.
-Note that the model fitting was really performed with 2 lines of code::
+Note that the model fitting was really performed with::
- gmod = Model(gaussian)
- result = gmod.fit(y, x=x, amp=5, cen=5, wid=1)
+ gmodel = Model(gaussian)
+ result = gmodel.fit(y, params, x=x, amp=5, cen=5, wid=1)
These lines clearly express that we want to turn the ``gaussian`` function
into a fitting model, and then fit the :math:`y(x)` data to this model,
-starting with values of 5 for ``amp``, 5 for ``cen`` and 1 for ``wid``.
-This is much more expressive than :scipydoc:`optimize.curve_fit`::
-
- best_vals, covar = curve_fit(gaussian, x, y, p0=[5, 5, 1])
-
-In addition, all the other features of lmfit are included:
-:class:`~lmfit.parameter.Parameters` can have bounds and constraints and the result is a
-rich object that can be reused to explore the model fit in detail.
+starting with values of 5 for ``amp``, 5 for ``cen`` and 1 for ``wid``. In
+addition, all the other features of lmfit are included:
+:class:`~lmfit.parameter.Parameters` can have bounds and constraints and
+the result is a rich object that can be reused to explore the model fit in
+detail.
The :class:`Model` class
@@ -197,137 +198,27 @@ The :class:`Model` class
The :class:`Model` class provides a general way to wrap a pre-defined
function as a fitting model.
-.. class:: Model(func[, independent_vars=None[, param_names=None[, missing=None[, prefix=''[, name=None[, **kws]]]]]])
-
- Create a model based on the user-supplied function. This uses
- introspection to automatically converting argument names of the
- function to Parameter names.
-
- :param func: Model function to be wrapped.
- :type func: callable
- :param independent_vars: List of argument names to ``func`` that are independent variables.
- :type independent_vars: ``None`` (default) or list of strings.
- :param param_names: List of argument names to ``func`` that should be made into Parameters.
- :type param_names: ``None`` (default) or list of strings
- :param missing: How to handle missing values.
- :type missing: one of ``None`` (default), 'none', 'drop', or 'raise'.
- :param prefix: Prefix to add to all parameter names to distinguish components in a :class:`CompositeModel`.
- :type prefix: string
- :param name: Name for the model. When ``None`` (default) the name is the same as the model function (``func``).
- :type name: ``None`` or string.
- :param kws: Additional keyword arguments to pass to model function.
-
-
-Of course, the model function will have to return an array that will be the
-same size as the data being modeled. Generally this is handled by also
-specifying one or more independent variables.
+.. autoclass:: Model
:class:`Model` class Methods
---------------------------------
-.. method:: Model.eval(params=None[, **kws])
-
- Evaluate the model function for a set of parameters and inputs.
-
- :param params: Parameters to use for fit.
- :type params: ``None`` (default) or Parameters
- :param kws: Additional keyword arguments to pass to model function.
- :return: ndarray for model given the parameters and other arguments.
-
- If ``params`` is ``None``, the values for all parameters are expected to
- be provided as keyword arguments. If ``params`` is given, and a keyword
- argument for a parameter value is also given, the keyword argument will
- be used.
-
- Note that all non-parameter arguments for the model function --
- **including all the independent variables!** -- will need to be passed
- in using keyword arguments.
-
-
-.. method:: Model.fit(data[, params=None[, weights=None[, method='leastsq'[, scale_covar=True[, iter_cb=None[, **kws]]]]]])
-
- Perform a fit of the model to the ``data`` array with a set of
- parameters.
-
- :param data: Array of data to be fitted.
- :type data: ndarray-like
- :param params: Parameters to use for fit.
- :type params: ``None`` (default) or Parameters
- :param weights: Weights to use for residual calculation in fit.
- :type weights: ``None`` (default) or ndarray-like.
- :param method: Name of fitting method to use. See :ref:`fit-methods-label` for details.
- :type method: string (default ``leastsq``)
- :param scale_covar: Whether to automatically scale covariance matrix (``leastsq`` only).
- :type scale_covar: bool (default ``True``)
- :param iter_cb: Function to be called at each fit iteration. See :ref:`fit-itercb-label` for details.
- :type iter_cb: callable or ``None``
- :param verbose: Print a message when a new parameter is created due to a *hint*.
- :type verbose: bool (default ``True``)
- :param kws: Additional keyword arguments to pass to model function.
- :return: :class:`ModelResult` object.
-
- If ``params`` is ``None``, the internal ``params`` will be used. If it
- is supplied, these will replace the internal ones. If supplied,
- ``weights`` will be used to weight the calculated residual so that the
- quantity minimized in the least-squares sense is ``weights*(data -
- fit)``. ``weights`` must be an ndarray-like object of same size and
- shape as ``data``.
-
- Note that other arguments for the model function (including all the
- independent variables!) will need to be passed in using keyword
- arguments.
+.. automethod:: Model.eval
+.. automethod:: Model.fit
-.. method:: Model.guess(data, **kws)
+.. automethod:: Model.guess
- Guess starting values for model parameters.
+.. automethod:: Model.make_params
- :param data: Data array used to guess parameter values.
- :type func: ndarray
- :param kws: Additional options to pass to model function.
- :return: :class:`lmfit.parameter.Parameters` with guessed initial values for each parameter.
- by default this is left to raise a ``NotImplementedError``, but may be
- overwritten by subclasses. Generally, this method should take some
- values for ``data`` and use it to construct reasonable starting values for
- the parameters.
-
-
-.. method:: Model.make_params(**kws)
-
- Create a set of parameters for model.
-
- :param kws: Optional keyword/value pairs to set initial values for parameters.
- :return: :class:`lmfit.parameter.Parameters`.
-
- The parameters may or may not have decent initial values for each
- parameter.
-
-
-.. method:: Model.set_param_hint(name, value=None[, min=None[, max=None[, vary=True[, expr=None]]]])
-
- Set *hints* to use when creating parameters with :meth:`Model.make_param` for
- the named parameter. This is especially convenient for setting initial
- values. The ``name`` can include the models ``prefix`` or not.
-
- :param name: Parameter name.
- :type name: string
- :param value: Value for parameter.
- :type value: float
- :param min: Lower bound for parameter value.
- :type min: ``-np.inf`` or float
- :param max: Upper bound for parameter value.
- :type max: ``np.inf`` or float
- :param vary: Whether to vary parameter in fit.
- :type vary: boolean
- :param expr: Mathematical expression for constraint.
- :type expr: string
+.. automethod:: Model.set_param_hint
See :ref:`model_param_hints_section`.
-.. automethod:: lmfit.model.Model.print_param_hints
+.. automethod:: Model.print_param_hints
:class:`Model` class Attributes
@@ -508,7 +399,7 @@ the model will know to map these to the ``amplitude`` argument of ``myfunc``.
Initializing model parameters
------------------------------------------
+--------------------------------
As mentioned above, the parameters created by :meth:`Model.make_params` are
generally created with invalid initial values of ``None``. These values
@@ -595,11 +486,12 @@ can set parameter hints but then change the initial value explicitly with
Using parameter hints
--------------------------------
-
After a model has been created, you can give it hints for how to create
parameters with :meth:`Model.make_params`. This allows you to set not only a
default initial value but also to set other parameter attributes
controlling bounds, whether it is varied in the fit, or a constraint
+
+
expression. To set a parameter hint, you can use :meth:`Model.set_param_hint`,
as with::
@@ -627,7 +519,6 @@ at half maximum of a Gaussian model, one could use a parameter hint of::
>>> mod.set_param_hint('fwhm', expr='2.3548*sigma')
-
The :class:`ModelResult` class
=======================================
@@ -648,204 +539,38 @@ more useful) object that represents a fit with a set of parameters to data
with a model.
-A :class:`ModelResult` has several attributes holding values for fit results,
-and several methods for working with fits. These include statistics
-inherited from :class:`~lmfit.minimizer.Minimizer` useful for comparing different models,
-including `chisqr`, `redchi`, `aic`, and `bic`.
-
-.. class:: ModelResult()
-
- Model fit is intended to be created and returned by :meth:`Model.fit`.
+A :class:`ModelResult` has several attributes holding values for fit
+results, and several methods for working with fits. These include
+statistics inherited from :class:`~lmfit.minimizer.Minimizer` useful for
+comparing different models, including `chisqr`, `redchi`, `aic`, and `bic`.
+.. autoclass:: ModelResult
:class:`ModelResult` methods
---------------------------------
-These methods are all inherited from :class:`~lmfit.minimizer.Minimize` or from
-:class:`Model`.
+.. automethod:: ModelResult.eval
-.. method:: ModelResult.eval(params=None, **kwargs)
- Evaluate the model using parameters supplied (or the best-fit parameters
- if not specified) and supplied independent variables. The ``**kwargs``
- arguments can be used to update parameter values and/or independent
- variables.
+.. automethod:: ModelResult.eval_components
+.. automethod:: ModelResult.fit
-.. method:: ModelResult.eval_components(**kwargs)
- Evaluate each component of a :class:`CompositeModel`, returning an
- ordered dictionary of with the values for each component model. The
- returned dictionary will have keys of the model prefix or (if no prefix
- is given), the model name. The ``**kwargs`` arguments can be used to
- update parameter values and/or independent variables.
+.. automethod:: ModelResult.fit_report
-.. method:: ModelResult.fit(data=None[, params=None[, weights=None[, method=None[, **kwargs]]]])
+.. automethod:: ModelResult.conf_interval
- Fit (or re-fit), optionally changing ``data``, ``params``, ``weights``,
- or ``method``, or changing the independent variable(s) with the
- ``**kwargs`` argument. See :meth:`Model.fit` for argument
- descriptions, and note that any value of ``None`` defaults to the last
- used value.
+.. automethod:: ModelResult.ci_report
-.. method:: ModelResult.fit_report(modelpars=None[, show_correl=True[, min_correl=0.1]])
+.. automethod:: ModelResult.eval_uncertainty
- Return a printable fit report for the fit with fit statistics, best-fit
- values with uncertainties and correlations. As with :func:`fit_report`.
-
- :param modelpars: Parameters with "Known Values" (optional, default None).
- :param show_correl: Whether to show list of sorted correlations [``True``].
- :param min_correl: Smallest correlation absolute value to show [0.1].
-
-
-.. method:: ModelResult.conf_interval(**kwargs)
-
- Calculate the confidence intervals for the variable parameters using
- :func:`confidence.conf_interval() `. All keyword
- arguments are passed to that function. The result is stored in
- :attr:`ci_out`, and so can be accessed without recalculating them.
-
-.. method:: ModelResult.ci_report(with_offset=True)
-
- Return a nicely formatted text report of the confidence intervals, as
- from :func:`ci_report() `.
-
-.. method:: ModelResult.eval_uncertainty(**kwargs)
-
- Evaluate the uncertainty of the *model function* from the
- uncertainties for the best-fit parameters. This can be used
- to give confidence bands for the model.
-
- :param params: Parameters, defaults to :attr:`params`.
- :param sigma: Confidence level, i.e. how many :math:`\sigma` values, default=1.
- :returns: ndarray to be added/subtracted to best-fit to give uncertaintay in the values for the model.
-
- An example using this method::
-
- out = model.fit(data, params, x=x)
- dely = out.eval_uncertainty(x=x)
- plt.plot(x, data)
- plt.plot(x, out.best_fit)
- plt.fill_between(x, out.best_fit-dely, out.best_fit+dely, color='#888888')
-
- This calculation is based on the excellent and clear example from
- https://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html#confidence-and-prediction-intervals
- which references the original work of
- J. Wolberg,Data Analysis Using the Method of Least Squares, 2006, Springer
-
-
-.. method:: ModelResult.plot(datafmt='o', fitfmt='-', initfmt='--', yerr=None, numpoints=None, fig=None, data_kws=None, fit_kws=None, init_kws=None, ax_res_kws=None, ax_fit_kws=None, fig_kws=None)
-
- Plot the fit results and residuals using Matplotlib, if available. The
- plot will include two panels, one showing the fit residual, and the
- other with the data points, the initial fit curve, and the best-fit
- curve. If the fit model included weights or if ``yerr`` is specified,
- errorbars will also be plotted.
-
- :param datafmt: Matplotlib format string for data curve.
- :type datafmt: ``None`` or string
- :param fitfmt: Matplotlib format string for best-fit curve.
- :type fitfmt: ``None`` or string
- :param initfmt: Matplotlib format string for initial curve.
- :type intfmt: ``None`` or string
- :param yerr: Array of uncertainties for data array.
- :type yerr: ``None`` or ndarray
- :param numpoints: Number of points to display
- :type numpoints: ``None`` or integer
- :param fig: Matplotlib Figure to plot on.
- :type fig: ``None`` or matplotlib.figure.Figure
- :param data_kws: Keyword arguments passed to plot for data curve.
- :type data_kws: ``None`` or dictionary
- :param fit_kws: Keyword arguments passed to plot for best-fit curve.
- :type fit_kws: ``None`` or dictionary
- :param init_kws: Keyword arguments passed to plot for initial curve.
- :type init_kws: ``None`` or dictionary
- :param ax_res_kws: Keyword arguments passed to creation of Matplotlib axes for the residual plot.
- :type ax_res_kws: ``None`` or dictionary
- :param ax_fit_kws: Keyword arguments passed to creation of Matplotlib axes for the fit plot.
- :type ax_fit_kws: ``None`` or dictionary
- :param fig_kws: Keyword arguments passed to creation of Matplotlib figure.
- :type fig_kws: ``None`` or dictionary
- :returns: matplotlib.figure.Figure
-
- This combines :meth:`ModelResult.plot_fit` and :meth:`ModelResult.plot_residual`.
-
- If ``yerr`` is specified or if the fit model included weights, then
- matplotlib.axes.Axes.errorbar is used to plot the data. If ``yerr`` is
- not specified and the fit includes weights, ``yerr`` set to ``1/self.weights``
-
- If ``fig`` is None then ``matplotlib.pyplot.figure(**fig_kws)`` is called.
-
-.. method:: ModelResult.plot_fit(ax=None, datafmt='o', fitfmt='-', initfmt='--', yerr=None, numpoints=None, data_kws=None, fit_kws=None, init_kws=None, ax_kws=None)
-
- Plot the fit results using matplotlib, if available. The plot will include
- the data points, the initial fit curve, and the best-fit curve. If the fit
- model included weights or if ``yerr`` is specified, errorbars will also
- be plotted.
-
- :param ax: Matplotlib axes to plot on.
- :type ax: ``None`` or matplotlib.axes.Axes
- :param datafmt: Matplotlib format string for data curve.
- :type datafmt: ``None`` or string
- :param fitfmt: Matplotlib format string for best-fit curve.
- :type fitfmt: ``None`` or string
- :param initfmt: Matplotlib format string for initial curve.
- :type intfmt: ``None`` or string
- :param yerr: Array of uncertainties for data array.
- :type yerr: ``None`` or ndarray
- :param numpoints: Number of points to display.
- :type numpoints: ``None`` or integer
- :param data_kws: Keyword arguments passed to plot for data curve.
- :type data_kws: ``None`` or dictionary
- :param fit_kws: Keyword arguments passed to plot for best-fit curve.
- :type fit_kws: ``None`` or dictionary
- :param init_kws: Keyword arguments passed to plot for initial curve.
- :type init_kws: ``None`` or dictionary
- :param ax_kws: Keyword arguments passed to creation of matplotlib axes.
- :type ax_kws: ``None`` or dictionary
- :returns: matplotlib.axes.Axes
-
- For details about plot format strings and keyword arguments see
- documentation of :func:`matplotlib.axes.Axes.plot`.
-
- If ``yerr`` is specified or if the fit model included weights, then
- matplotlib.axes.Axes.errorbar is used to plot the data. If ``yerr`` is
- not specified and the fit includes weights, ``yerr`` set to ``1/self.weights``
-
- If ``ax`` is None then ``matplotlib.pyplot.gca(**ax_kws)`` is called.
-
-.. method:: ModelResult.plot_residuals(ax=None, datafmt='o', yerr=None, data_kws=None, fit_kws=None, ax_kws=None)
-
- Plot the fit residuals (data - fit) using matplotlib. If ``yerr`` is
- supplied or if the model included weights, errorbars will also be plotted.
-
- :param ax: Matplotlib axes to plot on.
- :type ax: ``None`` or matplotlib.axes.Axes
- :param datafmt: Matplotlib format string for data curve.
- :type datafmt: ``None`` or string
- :param yerr: Array of uncertainties for data array.
- :type yerr: ``None`` or ndarray
- :param numpoints: Number of points to display
- :type numpoints: ``None`` or integer
- :param data_kws: Keyword arguments passed to plot for data curve.
- :type data_kws: ``None`` or dictionary
- :param fit_kws: Keyword arguments passed to plot for best-fit curve.
- :type fit_kws: ``None`` or dictionary
- :param ax_kws: Keyword arguments passed to creation of matplotlib axes.
- :type ax_kws: ``None`` or dictionary
- :returns: matplotlib.axes.Axes
-
- For details about plot format strings and keyword arguments see
- documentation of :func:`matplotlib.axes.Axes.plot`.
-
- If ``yerr`` is specified or if the fit model included weights, then
- matplotlib.axes.Axes.errorbar is used to plot the data. If ``yerr`` is
- not specified and the fit includes weights, ``yerr`` set to ``1/self.weights``
-
- If ``ax`` is None then ``matplotlib.pyplot.gca(**ax_kws)`` is called.
+.. automethod:: ModelResult.plot
+.. automethod:: ModelResult.plot_fit
+.. automethod:: ModelResult.plot_residuals
:class:`ModelResult` attributes
@@ -977,6 +702,34 @@ These methods are all inherited from :class:`~lmfit.minimizer.Minimize` or from
array, so that ``weights*(data - fit)`` is minimized in the
least-squares sense.
+
+Calculating uncertainties in the model function
+-------------------------------------------------
+
+We return to the first example above and ask not only for the
+uncertainties in the fitted parameters but for the range of values that
+those uncertainties mean for the model function itself. We can use the
+:meth:`ModelResult.eval_uncertainty` method of the model result object to
+evaluate the uncertainty in the model with a specified level for
+:math:`\sigma`.
+
+That is, adding::
+
+ dely = result.eval_uncertainty(sigma=3)
+ plt.fill_between(x, result.best_fit-dely, result.best_fit+dely, color="#ABABAB")
+
+to the example fit to the Gaussian at the beginning of this chapter will
+give :math:`3\sigma` bands for the best-fit Gaussian, and produce the
+figure below.
+
+.. _figModel4:
+
+ .. image:: _images/model_fit4.png
+ :target: _images/model_fit4.png
+ :width: 50%
+
+
+
.. index:: Composite models
.. _composite_models_section:
@@ -1009,8 +762,11 @@ and use that with::
But we already had a function for a gaussian function, and maybe we'll
discover that a linear background isn't sufficient which would mean the
-model function would have to be changed. As an alternative we could define
-a linear function::
+model function would have to be changed.
+
+Instead, lmfit allows models to be combined into a :class:`CompositeModel`.
+As an alternative to including a linear background in our model function,
+we could define a linear function::
def line(x, slope, intercept):
"a line"
@@ -1046,7 +802,6 @@ which prints out the results::
C(amp, wid) = 0.666
C(cen, intercept) = 0.129
-
and shows the plot on the left.
.. _figModel2:
@@ -1089,32 +844,20 @@ us to identify which parameter went with which component model. As we will
see in the next chapter, using composite models with the built-in models
provides a simple way to build up complex models.
-.. class:: CompositeModel(left, right, op[, **kws])
+.. autoclass:: CompositeModel(left, right, op[, **kws])
- Create a composite model from two models (`left` and `right` and an
- binary operator (`op`). Additional keywords are passed to
- :class:`Model`.
-
- :param left: Left-hand side Model.
- :type left: :class:`Model`
- :param right: Right-hand side Model.l
- :type right: :class:`Model`
- :param op: Binary operator.
- :type op: callable, and taking 2 arguments (`left` and `right`).
-
-Normally, one does not have to explicitly create a :class:`CompositeModel`,
-as doing::
+Note that when using built-in Python binary operators, a
+:class:`CompositeModel` will automatically be constructed for you. That is,
+doing::
mod = Model(fcn1) + Model(fcn2) * Model(fcn3)
-will automatically create a :class:`CompositeModel`. In this example,
-`mod.left` will be `Model(fcn1)`, `mod.op` will be :meth:`operator.add`,
-and `mod.right` will be another CompositeModel that has a `left` attribute
-of `Model(fcn2)`, an `op` of :meth:`operator.mul`, and a `right` of
-`Model(fcn3)`.
+will create a :class:`CompositeModel`. Here, `left` will be `Model(fcn1)`,
+`op` will be :meth:`operator.add`, and `right` will be another
+CompositeModel that has a `left` attribute of `Model(fcn2)`, an `op` of
+:meth:`operator.mul`, and a `right` of `Model(fcn3)`.
-If you want to use a binary operator other than add, subtract, multiply, or
-divide that are supported through normal Python syntax, you'll need to
+To use a binary operator other than '+', '-', '*', or '/', you can
explicitly create a :class:`CompositeModel` with the appropriate binary
operator. For example, to convolve two models, you could define a simple
convolution function, perhaps as::
@@ -1171,29 +914,3 @@ and shows the plots:
Using composite models with built-in or custom operators allows you to
build complex models from testable sub-components.
-
-
-Calculating uncertainties in the model function
-==============================================================
-
-Finally, we return to the first example above and ask not only for the
-uncertainties in the fitted parameters but for the range of values that
-those uncertainties mean for the model function itself. We can use the
-:meth:`ModelResult.eval_uncertainty` method of the model result object to
-evaluate the uncertainty in the model with a specified level for
-:math:`sigma`.
-
-That is, adding::
-
- dely = result.eval_uncertainty(sigma=3)
- plt.fill_between(x, result.best_fit-dely, result.best_fit+dely, color="#ABABAB")
-
-to the example fit to the Gaussian at the beginning of this chapter will
-give :math:`3-sigma` bands for the best-fit Gaussian, and produce the
-figure below.
-
-.. _figModel4:
-
- .. image:: _images/model_fit4.png
- :target: _images/model_fit4.png
- :width: 50%
diff --git a/doc/parameters.rst b/doc/parameters.rst
index 33a8c5e66..32062275f 100644
--- a/doc/parameters.rst
+++ b/doc/parameters.rst
@@ -6,74 +6,43 @@
:class:`Parameter` and :class:`Parameters`
================================================
-This chapter describes :class:`Parameter` objects which is the key concept
-of lmfit.
+This chapter describes :class:`Parameter` objects, which are a key concept
+of lmfit.
A :class:`Parameter` is the quantity to be optimized in all minimization
problems, replacing the plain floating point number used in the
optimization routines from :mod:`scipy.optimize`. A :class:`Parameter` has
-a value that can be varied in the fit or a fixed value, and can have upper
-and/or lower bounds. It can even have a value that is constrained by an
-algebraic expression of other Parameter values. Since :class:`Parameters`
-live outside the core optimization routines, they can be used in **all**
-optimization routines from :mod:`scipy.optimize`. By using
-:class:`Parameter` objects instead of plain variables, the objective
-function does not have to be modified to reflect every change of what is
-varied in the fit. This simplifies the writing of models, allowing general
-models that describe the phenomenon to be written, and gives the user more
-flexibility in using and testing variations of that model.
+a value that can either be varied in the fit or held at a fixed value, and
+can have upper and/or lower bounds placd on the value. It can even have a
+value that is constrained by an algebraic expression of other Parameter
+values. Since :class:`Parameter` objects live outside the core
+optimization routines, they can be used in **all** optimization routines
+from :mod:`scipy.optimize`. By using :class:`Parameter` objects instead of
+plain variables, the objective function does not have to be modified to
+reflect every change of what is varied in the fit, or whether bounds are
+applied. This simplifies the writing of models, allowing general models
+that describe the phenomenon to be written, and giving the user more
+flexibility in using and testing variations of that model.
Whereas a :class:`Parameter` expands on an individual floating point
-variable, the optimization methods need an ordered group of floating point
-variables. In the :mod:`scipy.optimize` routines this is required to be a
-1-dimensional numpy ndarray. For lmfit, where each :class:`Parameter` has
-a name, this is replaced by a :class:`Parameters` class, which works as an
+variable, the optimization methods still actually need an ordered group of
+floating point variables. In the :mod:`scipy.optimize` routines this is
+required to be a 1-dimensional numpy ndarray. In lmfit, this 1-dimensional
+array is replaced by a :class:`Parameters` object, which works as an
ordered dictionary of :class:`Parameter` objects, with a few additional
features and methods. That is, while the concept of a :class:`Parameter`
is central to lmfit, one normally creates and interacts with a
:class:`Parameters` instance that contains many :class:`Parameter` objects.
-A table of parameter values, bounds and other attributes can be
-printed using :meth:`Parameters.pretty_print`.
-
-Finally, the objective functions you write for lmfit will take an instance of
-:class:`Parameters` as its first argument.
+For example, an objective function you write for lmfit will take an
+instance of :class:`Parameters` as its first argument. A table of
+parameter values, bounds and other attributes can be printed using
+:meth:`Parameters.pretty_print`.
The :class:`Parameter` class
========================================
-.. class:: Parameter(name=None[, value=None[, vary=True[, min=-np.inf[, max=np.inf[, expr=None[, brute_step=None]]]]]])
-
- create a Parameter object.
-
- :param name: Parameter name.
- :type name: ``None`` or string -- will be overwritten during fit if ``None``.
- :param value: The numerical value for the parameter.
- :param vary: Whether to vary the parameter or not.
- :type vary: boolean (``True``/``False``) [default ``True``]
- :param min: Lower bound for value (``-np.inf`` = no lower bound).
- :param max: Upper bound for value (``np.inf`` = no upper bound).
- :param expr: Mathematical expression to use to evaluate value during fit.
- :type expr: ``None`` or string
- :param brute_step: Step size for grid points in brute force method (``0`` = no step size).
-
- Each of these inputs is turned into an attribute of the same name.
-
- After a fit, a Parameter for a fitted variable (that is with ``vary =
- True``) may have its :attr:`value` attribute to hold the best-fit value.
- Depending on the success of the fit and fitting algorithm used, it may also
- have attributes :attr:`stderr` and :attr:`correl`.
-
- .. attribute:: stderr
-
- The estimated standard error for the best-fit value.
-
- .. attribute:: correl
-
- A dictionary of the correlation with the other fitted variables in the
- fit, of the form::
-
- {'decay': 0.404, 'phase': -0.020, 'frequency': 0.102}
+.. autoclass:: Parameter
See :ref:`bounds_chapter` for details on the math used to implement the
bounds with :attr:`min` and :attr:`max`.
@@ -85,146 +54,29 @@ The :class:`Parameter` class
.. index:: Removing a Constraint Expression
- .. method:: set(value=None[, vary=None[, min=None[, max=None[, expr=None[, brute_step=None]]]]])
-
- set or update a Parameter value or other attributes.
-
- :param name: Parameter name.
- :param value: The numerical value for the parameter.
- :param vary: Whether to vary the parameter or not.
- :param min: Lower bound for value.
- :param max: Upper bound for value.
- :param expr: Mathematical expression to use to evaluate value during fit.
- :param brute_step: Step size for grid points in brute force method.
-
- Each argument of :meth:`set` has a default value of ``None``, and will
- be set only if the provided value is not ``None``. You can use this to
- update some Parameter attribute without affecting others, for example::
-
- p1 = Parameter('a', value=2.0)
- p2 = Parameter('b', value=0.0)
- p1.set(min=0)
- p2.set(vary=False)
-
- to set a lower bound, or to set a Parameter as have a fixed value.
-
- Note that to use this approach to lift a lower or upper bound, doing::
-
- p1.set(min=0)
- .....
- # now lift the lower bound
- p1.set(min=None) # won't work! lower bound NOT changed
-
- won't work -- this will not change the current lower bound. Instead
- you'll have to use ``np.inf`` to remove a lower or upper bound::
-
- # now lift the lower bound
- p1.set(min=-np.inf) # will work!
-
- Similarly, to clear an expression of a parameter, you need to pass an
- empty string, not ``None``. You also need to give a value and
- explicitly tell it to vary::
-
- p3 = Parameter('c', expr='(a+b)/2')
- p3.set(expr=None) # won't work! expression NOT changed
-
- # remove constraint expression
- p3.set(value=1.0, vary=True, expr='') # will work! parameter now unconstrained
-
- Finally, to clear the step size, you need to pass ``0`` (`zero`) not ``None``::
-
- p4 = Parameter('d', value=5.0, brute_step=0.1))
- p4.set(brute_step=None) # won't work! step size NOT changed
-
- # remove step size
- p4.set(brute_step=0) # will work! parameter does not have a step size defined
+ .. automethod:: set
The :class:`Parameters` class
========================================
-.. class:: Parameters()
-
- Create a Parameters object. This is little more than a fancy ordered
- dictionary, with the restrictions that:
-
- 1. keys must be valid Python symbol names, so that they can be used in
- expressions of mathematical constraints. This means the names must
- match ``[a-z_][a-z0-9_]*`` and cannot be a Python reserved word.
-
- 2. values must be valid :class:`Parameter` objects.
-
- Two methods are provided for convenient initialization of a :class:`Parameters`,
- and one for extracting :class:`Parameter` values into a plain dictionary.
-
- .. method:: add(name[, value=None[, vary=True[, min=-np.inf[, max=np.inf[, expr=None[, brute_step=None]]]]]])
-
- Add a named parameter. This creates a :class:`Parameter`
- object associated with the key `name`, with optional arguments
- passed to :class:`Parameter`::
-
- p = Parameters()
- p.add('myvar', value=1, vary=True)
-
- .. method:: add_many(self, paramlist)
-
- Add a list of named parameters. Each entry must be a tuple
- with the following entries::
-
- name, value, vary, min, max, expr, brute_step
-
- This method is somewhat rigid and verbose (no default values), but can
- be useful when initially defining a parameter list so that it looks
- table-like::
-
- p = Parameters()
- # (Name, Value, Vary, Min, Max, Expr, Brute_step)
- p.add_many(('amp1', 10, True, None, None, None, None),
- ('cen1', 1.2, True, 0.5, 2.0, None, None),
- ('wid1', 0.8, True, 0.1, None, None, None),
- ('amp2', 7.5, True, None, None, None, None),
- ('cen2', 1.9, True, 1.0, 3.0, None, 0.1),
- ('wid2', None, False, None, None, '2*wid1/3', None))
-
-
- .. automethod:: Parameters.pretty_print
-
- .. method:: valuesdict()
-
- Return an ordered dictionary of name:value pairs with the
- Paramater name as the key and Parameter value as value.
-
- This is distinct from the :class:`Parameters` itself, as the dictionary
- values are not :class:`Parameter` objects, just the :attr:`value`.
- Using :meth:`valuesdict` can be a very convenient way to get updated
- values in a objective function.
-
- .. method:: dumps(**kws)
+.. autoclass:: Parameters
- Return a JSON string representation of the :class:`Parameter` object.
- This can be saved or used to re-create or re-set parameters, using the
- :meth:`loads` method.
+ .. automethod:: add
- Optional keywords are sent :py:func:`json.dumps`.
+ .. automethod:: add_many
- .. method:: dump(file, **kws)
+ .. automethod:: pretty_print
- Write a JSON representation of the :class:`Parameter` object to a file
- or file-like object in `file` -- really any object with a :meth:`write`
- method. Optional keywords are sent :py:func:`json.dumps`.
+ .. automethod:: valuesdict
- .. method:: loads(sval, **kws)
+ .. automethod:: dumps
- Use a JSON string representation of the :class:`Parameter` object in
- `sval` to set all parameter settings. Optional keywords are sent
- :py:func:`json.loads`.
+ .. automethod:: dump
- .. method:: load(file, **kws)
+ .. automethod:: loads
- Read and use a JSON string representation of the :class:`Parameter`
- object from a file or file-like object in `file` -- really any object
- with a :meth:`read` method. Optional keywords are sent
- :py:func:`json.loads`.
+ .. automethod:: load
Simple Example
diff --git a/doc/sphinx/ext_mathjax.py b/doc/sphinx/ext_mathjax.py
index 40de659bf..1bc8b9f96 100644
--- a/doc/sphinx/ext_mathjax.py
+++ b/doc/sphinx/ext_mathjax.py
@@ -1,10 +1,9 @@
# sphinx extensions for mathjax
+
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.intersphinx',
- 'numpydoc']
-mathjax = 'sphinx.ext.mathjax'
-pngmath = 'sphinx.ext.pngmath'
-
-extensions.append(mathjax)
+ 'sphinx.ext.extlinks',
+ 'sphinx.ext.napoleon',
+ 'sphinx.ext.mathjax']
diff --git a/doc/sphinx/ext_pngmath.py b/doc/sphinx/ext_pngmath.py
index cf153fe8a..8cf169a73 100644
--- a/doc/sphinx/ext_pngmath.py
+++ b/doc/sphinx/ext_pngmath.py
@@ -1,10 +1,9 @@
# sphinx extensions for pngmath
+
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.intersphinx',
- 'numpydoc']
-mathjax = 'sphinx.ext.mathjax'
-pngmath = 'sphinx.ext.pngmath'
-
-extensions.append(pngmath)
+ 'sphinx.ext.extlinks',
+ 'sphinx.ext.napoleon',
+ 'sphinx.ext.pngmath']
diff --git a/doc/whatsnew.rst b/doc/whatsnew.rst
index 063c6c9ea..5f3f27be4 100644
--- a/doc/whatsnew.rst
+++ b/doc/whatsnew.rst
@@ -11,6 +11,44 @@ changes to the use and behavior of the library. This is not meant to be a
comprehensive list of changes. For such a complete record, consult the
`lmfit github repository`_.
+.. _whatsnew_096_label:
+
+Version 0.9.6 Release Notes
+==========================================
+
+Support for scipy 0.14 has been dropped: scipy 0.15 is now required. This
+is especially important for lmfit maintenance, as it means we can now rely
+on scipy having code for differential evolution and do not need to keep a
+local copy.
+
+A brute force method was added, which can be used either with
+:meth:`Minimizer.brute` or using the `method='brute'` option to
+:meth:`Minimizer.minimize`. This method requires that finite bounds be
+placed on each variable parameter, and that the parameter has a finite
+`brute_step` attribute set to specify the step size.
+
+Custom cost functions can now be used for the scalar minimizers using the
+`reduce_fcn` option.
+
+Many improvements to documentation and docstrings in the code were made.
+As part of that effort, all API documentation in this main sphinx
+documentation now derives from the docstrings.
+
+Uncertainties in the best-fit curve for a model can now be calculated
+from the uncertainties in the model parameters.
+
+Parameters now have two new attributes: `brute_step`, to specify the step
+size to take with the `brute` method, and `user_data`, which is unused by
+lmfit itself but can hold any additional information the user desires, and
+will be preserved on copy and pickling.
+
+Several bug fixes and cleanups.
+
+Versioneer was updated to 0.18.
+
+Tests can now be run with either nose or pytest.
+
+
.. _whatsnew_095_label:
Version 0.9.5 Release Notes
diff --git a/examples/doc_basic.py b/examples/doc_basic.py
index 67c3a3b09..097d5706e 100644
--- a/examples/doc_basic.py
+++ b/examples/doc_basic.py
@@ -28,10 +28,8 @@ def fcn2min(params, x, data):
# do fit, here with leastsq model
minner = Minimizer(fcn2min, params, fcn_args=(x, data))
-kws = {'options': {'maxiter':10}}
result = minner.minimize()
-
# calculate final result
final = data + result.residual
diff --git a/examples/doc_model1.py b/examples/doc_model1.py
index 6561f1a14..6850e8c42 100644
--- a/examples/doc_model1.py
+++ b/examples/doc_model1.py
@@ -11,10 +11,10 @@
def gaussian(x, amp, cen, wid):
"1-d gaussian: gaussian(x, amp, cen, wid)"
- return (amp/(sqrt(2*pi)*wid)) * exp(-(x-cen)**2 /(2*wid**2))
+ return amp * exp(-(x-cen)**2 /wid)
-gmod = Model(gaussian)
-result = gmod.fit(y, x=x, amp=5, cen=5, wid=1)
+gmodel = Model(gaussian)
+result = gmodel.fit(y, x=x, amp=5, cen=5, wid=1)
print(result.fit_report())
diff --git a/examples/doc_nistgauss.py b/examples/doc_nistgauss.py
index 861c3cf4d..03d5d7dd7 100644
--- a/examples/doc_nistgauss.py
+++ b/examples/doc_nistgauss.py
@@ -36,8 +36,14 @@
out = mod.fit(y, pars, x=x)
+comps = out.eval_components(x=x)
+
print(out.fit_report(min_correl=0.5))
plt.plot(x, out.best_fit, 'r-')
+plt.plot(x, comps['g1_'], 'b--')
+plt.plot(x, comps['g2_'], 'b--')
+plt.plot(x, comps['exp_'], 'k--')
+
plt.show()
#
diff --git a/examples/example_diffev.py b/examples/example_diffev.py
new file mode 100644
index 000000000..6525f172d
--- /dev/null
+++ b/examples/example_diffev.py
@@ -0,0 +1,60 @@
+#!/usr/bin/env python
+"""
+Example comparing leastsq with differential_evolution
+on a fairly simple problem.
+"""
+import numpy as np
+import lmfit
+
+try:
+ import matplotlib.pyplot as plt
+ HAS_PYLAB = True
+except ImportError:
+ HAS_PYLAB = False
+
+
+np.random.seed(2)
+x = np.linspace(0, 10, 101)
+
+# Setup example
+decay = 5
+offset = 1.0
+amp = 2.0
+omega = 4.0
+
+y = offset + amp*np.sin(omega * x) * np.exp(-x / decay)
+yn = y + np.random.normal(size=len(y), scale=0.450)
+
+def resid(params, x, ydata):
+ decay = params['decay'].value
+ offset = params['offset'].value
+ omega = params['omega'].value
+ amp = params['amp'].value
+
+ y_model = offset + amp * np.sin(x*omega) * np.exp(-x/decay)
+ return y_model - ydata
+
+params = lmfit.Parameters()
+
+params.add('offset', 2.0, min=0, max=10.0)
+params.add('omega', 3.3, min=0, max=10.0)
+params.add('amp', 2.5, min=0, max=10.0)
+params.add('decay', 1.0, min=0, max=10.0)
+
+o1 = lmfit.minimize(resid, params, args=(x, yn), method='leastsq')
+print("# Fit using leastsq:")
+lmfit.report_fit(o1)
+
+o2 = lmfit.minimize(resid, params, args=(x, yn), method='differential_evolution')
+print("# Fit using differential_evolution:")
+lmfit.report_fit(o2)
+
+if HAS_PYLAB:
+ plt.plot(x, yn, 'ko', lw=2)
+ plt.plot(x, yn+o1.residual, 'r-', lw=2)
+ plt.plot(x, yn+o2.residual, 'b-', lw=2)
+ plt.legend(['data', 'leastsq', 'diffev'],
+ loc='upper left')
+ plt.show()
diff --git a/lmfit/_differentialevolution.py b/lmfit/_differentialevolution.py
deleted file mode 100644
index e7fd12c6a..000000000
--- a/lmfit/_differentialevolution.py
+++ /dev/null
@@ -1,752 +0,0 @@
-"""
-differential_evolution: The differential evolution global optimization algorithm
-Added by Andrew Nelson 2014
-"""
-from __future__ import absolute_import, division, print_function
-
-import numbers
-
-import numpy as np
-from scipy.optimize import minimize
-from scipy.optimize.optimize import _status_message
-
-__all__ = ['differential_evolution']
-
-_MACHEPS = np.finfo(np.float64).eps
-
-
-#------------------------------------------------------------------------------
-# scipy.optimize does not contain OptimizeResult until 0.14. Include here as a
-# fix for scipy < 0.14.
-
-class OptimizeResult(dict):
- """ Represents the optimization result.
- Attributes
- ----------
- x : ndarray
- The solution of the optimization.
- success : bool
- Whether or not the optimizer exited successfully.
- status : int
- Termination status of the optimizer. Its value depends on the
- underlying solver. Refer to `message` for details.
- message : str
- Description of the cause of the termination.
- fun, jac, hess, hess_inv : ndarray
- Values of objective function, Jacobian, Hessian or its inverse (if
- available). The Hessians may be approximations, see the documentation
- of the function in question.
- nfev, njev, nhev : int
- Number of evaluations of the objective functions and of its
- Jacobian and Hessian.
- nit : int
- Number of iterations performed by the optimizer.
- maxcv : float
- The maximum constraint violation.
- Notes
- -----
- There may be additional attributes not listed above depending of the
- specific solver. Since this class is essentially a subclass of dict
- with attribute accessors, one can see which attributes are available
- using the `keys()` method.
- """
- def __getattr__(self, name):
- try:
- return self[name]
- except KeyError:
- raise AttributeError(name)
-
- __setattr__ = dict.__setitem__
- __delattr__ = dict.__delitem__
-
- def __repr__(self):
- if self.keys():
- m = max(map(len, list(self.keys()))) + 1
- return '\n'.join([k.rjust(m) + ': ' + repr(v)
- for k, v in self.items()])
- else:
- return self.__class__.__name__ + "()"
-#------------------------------------------------------------------------------
-
-
-def differential_evolution(func, bounds, args=(), strategy='best1bin',
- maxiter=None, popsize=15, tol=0.01,
- mutation=(0.5, 1), recombination=0.7, seed=None,
- callback=None, disp=False, polish=True,
- init='latinhypercube'):
- """Finds the global minimum of a multivariate function.
- Differential Evolution is stochastic in nature (does not use gradient
- methods) to find the minimium, and can search large areas of candidate
- space, but often requires larger numbers of function evaluations than
- conventional gradient based techniques.
-
- The algorithm is due to Storn and Price [1]_.
-
- Parameters
- ----------
- func : callable
- The objective function to be minimized. Must be in the form
- ``f(x, *args)``, where ``x`` is the argument in the form of a 1-D array
- and ``args`` is a tuple of any additional fixed parameters needed to
- completely specify the function.
- bounds : sequence
- Bounds for variables. ``(min, max)`` pairs for each element in ``x``,
- defining the lower and upper bounds for the optimizing argument of
- `func`. It is required to have ``len(bounds) == len(x)``.
- ``len(bounds)`` is used to determine the number of parameters in ``x``.
- args : tuple, optional
- Any additional fixed parameters needed to
- completely specify the objective function.
- strategy : str, optional
- The differential evolution strategy to use. Should be one of:
-
- - 'best1bin'
- - 'best1exp'
- - 'rand1exp'
- - 'randtobest1exp'
- - 'best2exp'
- - 'rand2exp'
- - 'randtobest1bin'
- - 'best2bin'
- - 'rand2bin'
- - 'rand1bin'
-
- The default is 'best1bin'.
- maxiter : int, optional
- The maximum number of times the entire population is evolved.
- The maximum number of function evaluations is:
- ``maxiter * popsize * len(x)``
- popsize : int, optional
- A multiplier for setting the total population size. The population has
- ``popsize * len(x)`` individuals.
- tol : float, optional
- When the mean of the population energies, multiplied by tol,
- divided by the standard deviation of the population energies
- is greater than 1 the solving process terminates:
- ``convergence = mean(pop) * tol / stdev(pop) > 1``
- mutation : float or tuple(float, float), optional
- The mutation constant.
- If specified as a float it should be in the range [0, 2].
- If specified as a tuple ``(min, max)`` dithering is employed. Dithering
- randomly changes the mutation constant on a generation by generation
- basis. The mutation constant for that generation is taken from
- ``U[min, max)``. Dithering can help speed convergence significantly.
- Increasing the mutation constant increases the search radius, but will
- slow down convergence.
- recombination : float, optional
- The recombination constant, should be in the range [0, 1]. Increasing
- this value allows a larger number of mutants to progress into the next
- generation, but at the risk of population stability.
- seed : int or `np.random.RandomState`, optional
- If `seed` is not specified the `np.RandomState` singleton is used.
- If `seed` is an int, a new `np.random.RandomState` instance is used,
- seeded with seed.
- If `seed` is already a `np.random.RandomState instance`, then that
- `np.random.RandomState` instance is used.
- Specify `seed` for repeatable minimizations.
- disp : bool, optional
- Display status messages
- callback : callable, `callback(xk, convergence=val)`, optional:
- A function to follow the progress of the minimization. ``xk`` is
- the current value of ``x0``. ``val`` represents the fractional
- value of the population convergence. When ``val`` is greater than one
- the function halts. If callback returns `True`, then the minimization
- is halted (any polishing is still carried out).
- polish : bool, optional
- If True (default), then `scipy.optimize.minimize` with the `L-BFGS-B`
- method is used to polish the best population member at the end, which
- can improve the minimization slightly.
- init : string, optional
- Specify how the population initialization is performed. Should be
- one of:
-
- - 'latinhypercube'
- - 'random'
-
- The default is 'latinhypercube'. Latin Hypercube sampling tries to
- maximize coverage of the available parameter space. 'random' initializes
- the population randomly - this has the drawback that clustering can
- occur, preventing the whole of parameter space being covered.
-
- Returns
- -------
- res : OptimizeResult
- The optimization result represented as a `OptimizeResult` object.
- Important attributes are: ``x`` the solution array, ``success`` a
- Boolean flag indicating if the optimizer exited successfully and
- ``message`` which describes the cause of the termination. See
- `OptimizeResult` for a description of other attributes. If `polish`
- was employed, then OptimizeResult also contains the `jac` attribute.
-
- Notes
- -----
- Differential evolution is a stochastic population based method that is
- useful for global optimization problems. At each pass through the population
- the algorithm mutates each candidate solution by mixing with other candidate
- solutions to create a trial candidate. There are several strategies [2]_ for
- creating trial candidates, which suit some problems more than others. The
- 'best1bin' strategy is a good starting point for many systems. In this
- strategy two members of the population are randomly chosen. Their difference
- is used to mutate the best member (the `best` in `best1bin`), :math:`b_0`,
- so far:
-
- .. math::
-
- b' = b_0 + mutation * (population[rand0] - population[rand1])
-
- A trial vector is then constructed. Starting with a randomly chosen 'i'th
- parameter the trial is sequentially filled (in modulo) with parameters from
- `b'` or the original candidate. The choice of whether to use `b'` or the
- original candidate is made with a binomial distribution (the 'bin' in
- 'best1bin') - a random number in [0, 1) is generated. If this number is
- less than the `recombination` constant then the parameter is loaded from
- `b'`, otherwise it is loaded from the original candidate. The final
- parameter is always loaded from `b'`. Once the trial candidate is built
- its fitness is assessed. If the trial is better than the original candidate
- then it takes its place. If it is also better than the best overall
- candidate it also replaces that.
- To improve your chances of finding a global minimum use higher `popsize`
- values, with higher `mutation` and (dithering), but lower `recombination`
- values. This has the effect of widening the search radius, but slowing
- convergence.
-
- .. versionadded:: 0.15.0
-
- Examples
- --------
- Let us consider the problem of minimizing the Rosenbrock function. This
- function is implemented in `rosen` in `scipy.optimize`.
-
- >>> from scipy.optimize import rosen, differential_evolution
- >>> bounds = [(0,2), (0, 2), (0, 2), (0, 2), (0, 2)]
- >>> result = differential_evolution(rosen, bounds)
- >>> result.x, result.fun
- (array([1., 1., 1., 1., 1.]), 1.9216496320061384e-19)
-
- Next find the minimum of the Ackley function
- (http://en.wikipedia.org/wiki/Test_functions_for_optimization).
-
- >>> from scipy.optimize import differential_evolution
- >>> import numpy as np
- >>> def ackley(x):
- ... arg1 = -0.2 * np.sqrt(0.5 * (x[0] ** 2 + x[1] ** 2))
- ... arg2 = 0.5 * (np.cos(2. * np.pi * x[0]) + np.cos(2. * np.pi * x[1]))
- ... return -20. * np.exp(arg1) - np.exp(arg2) + 20. + np.e
- >>> bounds = [(-5, 5), (-5, 5)]
- >>> result = differential_evolution(ackley, bounds)
- >>> result.x, result.fun
- (array([ 0., 0.]), 4.4408920985006262e-16)
-
- References
- ----------
- .. [1] Storn, R and Price, K, Differential Evolution - a Simple and
- Efficient Heuristic for Global Optimization over Continuous Spaces,
- Journal of Global Optimization, 1997, 11, 341 - 359.
- .. [2] http://www1.icsi.berkeley.edu/~storn/code.html
- .. [3] http://en.wikipedia.org/wiki/Differential_evolution
- """
-
- solver = DifferentialEvolutionSolver(func, bounds, args=args,
- strategy=strategy, maxiter=maxiter,
- popsize=popsize, tol=tol,
- mutation=mutation,
- recombination=recombination,
- seed=seed, polish=polish,
- callback=callback,
- disp=disp,
- init=init)
- return solver.solve()
-
-
-class DifferentialEvolutionSolver(object):
-
- """This class implements the differential evolution solver
-
- Parameters
- ----------
- func : callable
- The objective function to be minimized. Must be in the form
- ``f(x, *args)``, where ``x`` is the argument in the form of a 1-D array
- and ``args`` is a tuple of any additional fixed parameters needed to
- completely specify the function.
- bounds : sequence
- Bounds for variables. ``(min, max)`` pairs for each element in ``x``,
- defining the lower and upper bounds for the optimizing argument of
- `func`. It is required to have ``len(bounds) == len(x)``.
- ``len(bounds)`` is used to determine the number of parameters in ``x``.
- args : tuple, optional
- Any additional fixed parameters needed to
- completely specify the objective function.
- strategy : str, optional
- The differential evolution strategy to use. Should be one of:
-
- - 'best1bin'
- - 'best1exp'
- - 'rand1exp'
- - 'randtobest1exp'
- - 'best2exp'
- - 'rand2exp'
- - 'randtobest1bin'
- - 'best2bin'
- - 'rand2bin'
- - 'rand1bin'
-
- The default is 'best1bin'
-
- maxiter : int, optional
- The maximum number of times the entire population is evolved. The
- maximum number of function evaluations is:
- ``maxiter * popsize * len(x)``
- popsize : int, optional
- A multiplier for setting the total population size. The population has
- ``popsize * len(x)`` individuals.
- tol : float, optional
- When the mean of the population energies, multiplied by tol,
- divided by the standard deviation of the population energies
- is greater than 1 the solving process terminates:
- ``convergence = mean(pop) * tol / stdev(pop) > 1``
- mutation : float or tuple(float, float), optional
- The mutation constant.
- If specified as a float it should be in the range [0, 2].
- If specified as a tuple ``(min, max)`` dithering is employed. Dithering
- randomly changes the mutation constant on a generation by generation
- basis. The mutation constant for that generation is taken from
- U[min, max). Dithering can help speed convergence significantly.
- Increasing the mutation constant increases the search radius, but will
- slow down convergence.
- recombination : float, optional
- The recombination constant, should be in the range [0, 1]. Increasing
- this value allows a larger number of mutants to progress into the next
- generation, but at the risk of population stability.
- seed : int or `np.random.RandomState`, optional
- If `seed` is not specified the `np.random.RandomState` singleton is
- used.
- If `seed` is an int, a new `np.random.RandomState` instance is used,
- seeded with `seed`.
- If `seed` is already a `np.random.RandomState` instance, then that
- `np.random.RandomState` instance is used.
- Specify `seed` for repeatable minimizations.
- disp : bool, optional
- Display status messages
- callback : callable, `callback(xk, convergence=val)`, optional
- A function to follow the progress of the minimization. ``xk`` is
- the current value of ``x0``. ``val`` represents the fractional
- value of the population convergence. When ``val`` is greater than one
- the function halts. If callback returns `True`, then the minimization
- is halted (any polishing is still carried out).
- polish : bool, optional
- If True, then `scipy.optimize.minimize` with the `L-BFGS-B` method
- is used to polish the best population member at the end. This requires
- a few more function evaluations.
- maxfun : int, optional
- Set the maximum number of function evaluations. However, it probably
- makes more sense to set `maxiter` instead.
- init : string, optional
- Specify which type of population initialization is performed. Should be
- one of:
-
- - 'latinhypercube'
- - 'random'
- """
-
- # Dispatch of mutation strategy method (binomial or exponential).
- _binomial = {'best1bin': '_best1',
- 'randtobest1bin': '_randtobest1',
- 'best2bin': '_best2',
- 'rand2bin': '_rand2',
- 'rand1bin': '_rand1'}
- _exponential = {'best1exp': '_best1',
- 'rand1exp': '_rand1',
- 'randtobest1exp': '_randtobest1',
- 'best2exp': '_best2',
- 'rand2exp': '_rand2'}
-
- def __init__(self, func, bounds, args=(),
- strategy='best1bin', maxiter=None, popsize=15,
- tol=0.01, mutation=(0.5, 1), recombination=0.7, seed=None,
- maxfun=None, callback=None, disp=False, polish=True,
- init='latinhypercube'):
-
- if strategy in self._binomial:
- self.mutation_func = getattr(self, self._binomial[strategy])
- elif strategy in self._exponential:
- self.mutation_func = getattr(self, self._exponential[strategy])
- else:
- raise ValueError("Please select a valid mutation strategy")
- self.strategy = strategy
-
- self.callback = callback
- self.polish = polish
- self.tol = tol
-
- #Mutation constant should be in [0, 2). If specified as a sequence
- #then dithering is performed.
- self.scale = mutation
- if (not np.all(np.isfinite(mutation)) or
- np.any(np.array(mutation) >= 2) or
- np.any(np.array(mutation) < 0)):
- raise ValueError('The mutation constant must be a float in '
- 'U[0, 2), or specified as a tuple(min, max)'
- ' where min < max and min, max are in U[0, 2).')
-
- self.dither = None
- if hasattr(mutation, '__iter__') and len(mutation) > 1:
- self.dither = [mutation[0], mutation[1]]
- self.dither.sort()
-
- self.cross_over_probability = recombination
-
- self.func = func
- self.args = args
-
- # convert tuple of lower and upper bounds to limits
- # [(low_0, high_0), ..., (low_n, high_n]
- # -> [[low_0, ..., low_n], [high_0, ..., high_n]]
- self.limits = np.array(bounds, dtype='float').T
- if (np.size(self.limits, 0) != 2
- or not np.all(np.isfinite(self.limits))):
- raise ValueError('bounds should be a sequence containing '
- 'real valued (min, max) pairs for each value'
- ' in x')
-
- self.maxiter = maxiter or 1000
- self.maxfun = (maxfun or ((self.maxiter + 1) * popsize *
- np.size(self.limits, 1)))
-
- # population is scaled to between [0, 1].
- # We have to scale between parameter <-> population
- # save these arguments for _scale_parameter and
- # _unscale_parameter. This is an optimization
- self.__scale_arg1 = 0.5 * (self.limits[0] + self.limits[1])
- self.__scale_arg2 = np.fabs(self.limits[0] - self.limits[1])
-
- parameter_count = np.size(self.limits, 1)
- self.random_number_generator = _make_random_gen(seed)
-
- #default initialization is a latin hypercube design, but there
- #are other population initializations possible.
- self.population = np.zeros((popsize * parameter_count,
- parameter_count))
- if init == 'latinhypercube':
- self.init_population_lhs()
- elif init == 'random':
- self.init_population_random()
- else:
- raise ValueError("The population initialization method must be one"
- "of 'latinhypercube' or 'random'")
-
- self.population_energies = np.ones(
- popsize * parameter_count) * np.inf
-
- self.disp = disp
-
- def init_population_lhs(self):
- """
- Initializes the population with Latin Hypercube Sampling
- Latin Hypercube Sampling ensures that the sampling of parameter space
- is maximised.
- """
- samples = np.size(self.population, 0)
- N = np.size(self.population, 1)
- rng = self.random_number_generator
-
- # Generate the intervals
- segsize = 1.0 / samples
-
- # Fill points uniformly in each interval
- rdrange = rng.rand(samples, N) * segsize
- rdrange += np.atleast_2d(np.arange(0., 1., segsize)).T
-
- # Make the random pairings
- self.population = np.zeros_like(rdrange)
-
- for j in range(N):
- order = rng.permutation(range(samples))
- self.population[:, j] = rdrange[order, j]
-
- def init_population_random(self):
- """
- Initialises the population at random. This type of initialization
- can possess clustering, Latin Hypercube sampling is generally better.
- """
- rng = self.random_number_generator
- self.population = rng.random_sample(self.population.shape)
-
- @property
- def x(self):
- """
- The best solution from the solver
-
- Returns
- -------
- x - ndarray
- The best solution from the solver.
- """
- return self._scale_parameters(self.population[0])
-
- def solve(self):
- """
- Runs the DifferentialEvolutionSolver.
-
- Returns
- -------
- res : OptimizeResult
- The optimization result represented as a ``OptimizeResult`` object.
- Important attributes are: ``x`` the solution array, ``success`` a
- Boolean flag indicating if the optimizer exited successfully and
- ``message`` which describes the cause of the termination. See
- `OptimizeResult` for a description of other attributes. If polish
- was employed, then OptimizeResult also contains the ``hess_inv`` and
- ``jac`` attributes.
- """
-
- nfev, nit, warning_flag = 0, 0, False
- status_message = _status_message['success']
-
- # calculate energies to start with
- for index, candidate in enumerate(self.population):
- parameters = self._scale_parameters(candidate)
- self.population_energies[index] = self.func(parameters,
- *self.args)
- nfev += 1
-
- if nfev > self.maxfun:
- warning_flag = True
- status_message = _status_message['maxfev']
- break
-
- minval = np.argmin(self.population_energies)
-
- # put the lowest energy into the best solution position.
- lowest_energy = self.population_energies[minval]
- self.population_energies[minval] = self.population_energies[0]
- self.population_energies[0] = lowest_energy
-
- self.population[[0, minval], :] = self.population[[minval, 0], :]
-
- if warning_flag:
- return OptimizeResult(
- x=self.x,
- fun=self.population_energies[0],
- nfev=nfev,
- nit=nit,
- message=status_message,
- success=(warning_flag != True))
-
- # do the optimisation.
- for nit in range(1, self.maxiter + 1):
- if self.dither is not None:
- self.scale = self.random_number_generator.rand(
- ) * (self.dither[1] - self.dither[0]) + self.dither[0]
- for candidate in range(np.size(self.population, 0)):
- if nfev > self.maxfun:
- warning_flag = True
- status_message = _status_message['maxfev']
- break
-
- trial = self._mutate(candidate)
- self._ensure_constraint(trial)
- parameters = self._scale_parameters(trial)
-
- energy = self.func(parameters, *self.args)
- nfev += 1
-
- if energy < self.population_energies[candidate]:
- self.population[candidate] = trial
- self.population_energies[candidate] = energy
-
- if energy < self.population_energies[0]:
- self.population_energies[0] = energy
- self.population[0] = trial
-
- # stop when the fractional s.d. of the population is less than tol
- # of the mean energy
- convergence = (np.std(self.population_energies) /
- np.abs(np.mean(self.population_energies) +
- _MACHEPS))
-
- if self.disp:
- print("differential_evolution step %d: f(x)= %g"
- % (nit,
- self.population_energies[0]))
-
- if (self.callback and
- self.callback(self._scale_parameters(self.population[0]),
- convergence=self.tol / convergence) is True):
-
- warning_flag = True
- status_message = ('callback function requested stop early '
- 'by returning True')
- break
-
- if convergence < self.tol or warning_flag:
- break
-
- else:
- status_message = _status_message['maxiter']
- warning_flag = True
-
- DE_result = OptimizeResult(
- x=self.x,
- fun=self.population_energies[0],
- nfev=nfev,
- nit=nit,
- message=status_message,
- success=(warning_flag != True))
-
- if self.polish:
- result = minimize(self.func,
- np.copy(DE_result.x),
- method='L-BFGS-B',
- bounds=self.limits.T,
- args=self.args)
-
- nfev += result.nfev
- DE_result.nfev = nfev
-
- if result.fun < DE_result.fun:
- DE_result.fun = result.fun
- DE_result.x = result.x
- DE_result.jac = result.jac
- # to keep internal state consistent
- self.population_energies[0] = result.fun
- self.population[0] = self._unscale_parameters(result.x)
-
- return DE_result
-
- def _scale_parameters(self, trial):
- """
- scale from a number between 0 and 1 to parameters
- """
- return self.__scale_arg1 + (trial - 0.5) * self.__scale_arg2
-
- def _unscale_parameters(self, parameters):
- """
- scale from parameters to a number between 0 and 1.
- """
- return (parameters - self.__scale_arg1) / self.__scale_arg2 + 0.5
-
- def _ensure_constraint(self, trial):
- """
- make sure the parameters lie between the limits
- """
- for index, param in enumerate(trial):
- if param > 1 or param < 0:
- trial[index] = self.random_number_generator.rand()
-
- def _mutate(self, candidate):
- """
- create a trial vector based on a mutation strategy
- """
- trial = np.copy(self.population[candidate])
- parameter_count = np.size(trial, 0)
-
- fill_point = self.random_number_generator.randint(0, parameter_count)
-
- if (self.strategy == 'randtobest1exp'
- or self.strategy == 'randtobest1bin'):
- bprime = self.mutation_func(candidate,
- self._select_samples(candidate, 5))
- else:
- bprime = self.mutation_func(self._select_samples(candidate, 5))
-
- if self.strategy in self._binomial:
- crossovers = self.random_number_generator.rand(parameter_count)
- crossovers = crossovers < self.cross_over_probability
- # the last one is always from the bprime vector for binomial
- # If you fill in modulo with a loop you have to set the last one to
- # true. If you don't use a loop then you can have any random entry
- # be True.
- crossovers[fill_point] = True
- trial = np.where(crossovers, bprime, trial)
- return trial
-
- elif self.strategy in self._exponential:
- i = 0
- while (i < parameter_count and
- self.random_number_generator.rand() <
- self.cross_over_probability):
-
- trial[fill_point] = bprime[fill_point]
- fill_point = (fill_point + 1) % parameter_count
- i += 1
-
- return trial
-
- def _best1(self, samples):
- """
- best1bin, best1exp
- """
- r0, r1 = samples[:2]
- return (self.population[0] + self.scale *
- (self.population[r0] - self.population[r1]))
-
- def _rand1(self, samples):
- """
- rand1bin, rand1exp
- """
- r0, r1, r2 = samples[:3]
- return (self.population[r0] + self.scale *
- (self.population[r1] - self.population[r2]))
-
- def _randtobest1(self, candidate, samples):
- """
- randtobest1bin, randtobest1exp
- """
- r0, r1 = samples[:2]
- bprime = np.copy(self.population[candidate])
- bprime += self.scale * (self.population[0] - bprime)
- bprime += self.scale * (self.population[r0] -
- self.population[r1])
- return bprime
-
- def _best2(self, samples):
- """
- best2bin, best2exp
- """
- r0, r1, r2, r3 = samples[:4]
- bprime = (self.population[0] + self.scale *
- (self.population[r0] + self.population[r1]
- - self.population[r2] - self.population[r3]))
-
- return bprime
-
- def _rand2(self, samples):
- """
- rand2bin, rand2exp
- """
- r0, r1, r2, r3, r4 = samples
- bprime = (self.population[r0] + self.scale *
- (self.population[r1] + self.population[r2] -
- self.population[r3] - self.population[r4]))
-
- return bprime
-
- def _select_samples(self, candidate, number_samples):
- """
- obtain random integers from range(np.size(self.population, 0)),
- without replacement. You can't have the original candidate either.
- """
- idxs = list(range(np.size(self.population, 0)))
- idxs.remove(candidate)
- self.random_number_generator.shuffle(idxs)
- idxs = idxs[:number_samples]
- return idxs
-
-
-def _make_random_gen(seed):
- """Turn seed into a np.random.RandomState instance
-
- If seed is None, return the RandomState singleton used by np.random.
- If seed is an int, return a new RandomState instance seeded with seed.
- If seed is already a RandomState instance, return it.
- Otherwise raise ValueError.
- """
- if seed is None or seed is np.random:
- return np.random.mtrand._rand
- if isinstance(seed, (numbers.Integral, np.integer)):
- return np.random.RandomState(seed)
- if isinstance(seed, np.random.RandomState):
- return seed
- raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
- ' instance' % seed)
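The seed handling in the removed `_make_random_gen` is the same normalization that scipy performs internally; a minimal standalone sketch (the function name here mirrors the deleted helper):

```python
import numbers

import numpy as np


def make_random_gen(seed):
    """Normalize `seed` into a numpy RandomState, as the removed helper did:
    None -> numpy's global RandomState, int -> a fresh seeded RandomState,
    RandomState -> returned unchanged."""
    if seed is None or seed is np.random:
        return np.random.mtrand._rand
    if isinstance(seed, (numbers.Integral, np.integer)):
        return np.random.RandomState(seed)
    if isinstance(seed, np.random.RandomState):
        return seed
    raise ValueError('%r cannot be used to seed a RandomState instance' % seed)
```

The same contract is what allows `seed` to be passed straight through to `scipy.optimize.differential_evolution` after this removal.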
diff --git a/lmfit/minimizer.py b/lmfit/minimizer.py
index c024b0150..558d9d656 100644
--- a/lmfit/minimizer.py
+++ b/lmfit/minimizer.py
@@ -6,9 +6,10 @@
The user sets up a model in terms of instance of Parameters and writes a
function-to-be-minimized (residual function) in terms of these Parameters.
+Original copyright:
Copyright (c) 2011 Matthew Newville, The University of Chicago
-
+See LICENSE for more complete authorship information and license terms.
"""
from collections import namedtuple
@@ -33,19 +34,15 @@
from .parameter import Parameter, Parameters
# scipy version notes:
-# currently scipy 0.14 is required.
+# currently scipy 0.15 is required.
# feature scipy version added
# minimize 0.11
# OptimizeResult 0.13
# diff_evolution 0.15
# least_squares 0.17
-# differential_evolution is only present in scipy >= 0.15
-try:
- from scipy.optimize import differential_evolution as scipy_diffev
-except ImportError:
- from ._differentialevolution import differential_evolution as scipy_diffev
+from scipy.optimize import differential_evolution
# check for scipy.opitimize.least_squares
HAS_LEAST_SQUARES = False
@@ -128,21 +125,6 @@ def __str__(self):
return "\n%s" % self.msg
-def _differential_evolution(func, x0, **kwds):
- """A wrapper for differential_evolution that can be used with
- scipy.minimize."""
- kwargs = dict(args=(), strategy='best1bin', maxiter=None, popsize=15,
- tol=0.01, mutation=(0.5, 1), recombination=0.7, seed=None,
- callback=None, disp=False, polish=True,
- init='latinhypercube')
-
- for k, v in kwds.items():
- if k in kwargs:
- kwargs[k] = v
-
- return scipy_diffev(func, kwds['bounds'], **kwargs)
-
-
SCALAR_METHODS = {'nelder': 'Nelder-Mead',
'powell': 'Powell',
'cg': 'CG',
@@ -189,12 +171,11 @@ def reduce_cauchylogpdf(r):
class MinimizerResult(object):
r"""
- A class that holds the results of a minimization.
+ The results of a minimization.
- This is a plain container (with no methods of its own) that
- simply holds the results of the minimization. Fit results
- include data such as status and error messages, fit statistics,
- and the updated (i.e., best-fit) parameters themselves :attr:`params`.
+ Minimization results include data such as status and error messages,
+ fit statistics, and the updated (i.e., best-fit) parameters themselves
+ :attr:`params`.
The list of (possible) `MinimizerResult` attributes follows.
@@ -207,11 +188,11 @@ class MinimizerResult(object):
underlying solver. Refer to `message` for details.
var_names : list
Ordered list of variable parameter names used in optimization, and
- useful for understanding the the values in :attr:`init_vals` and
+ useful for understanding the values in :attr:`init_vals` and
:attr:`covar`.
covar : numpy.ndarray
Covariance matrix from minimization (`leastsq` only), with
- rows/columns using :attr:`var_names`.
+ rows and columns corresponding to :attr:`var_names`.
init_vals : list
List of initial values for variable parameters using :attr:`var_names`.
init_values : dict
@@ -236,7 +217,7 @@ class MinimizerResult(object):
Degrees of freedom in fit: :math:`N - N_{\\rm varys}`.
residual : numpy.ndarray
Residual array :math:`{\\rm Resid_i}`. Return value of the objective
- function.
+ function when using the best-fit values of the parameters.
chisqr : float
Chi-square: :math:`\chi^2 = \sum_i^N [{\\rm Resid}_i]^2`.
redchi : float
@@ -244,9 +225,17 @@ class MinimizerResult(object):
:math:`\chi^2_{\\nu}= {\chi^2} / {(N - N_{\\rm varys})}`.
aic : float
Akaike Information Criterion statistic.
+ :math:`N \ln(\chi^2/N) + 2 N_{\\rm varys}`
bic : float
Bayesian Information Criterion statistic.
+ :math:`N \ln(\chi^2/N) + \ln(N) N_{\\rm varys}`
+ flatchain : pandas.DataFrame
+     A flatchain view of the sampling chain from the `emcee` method.
+
+ Methods
+ -------
+ show_candidates : pretty_print() representation of candidates from the
+     `brute` method.
"""
def __init__(self, **kws):
@@ -292,7 +281,9 @@ def show_candidates(self, candidate_nmb='all'):
class Minimizer(object):
- """A general minimizer for curve fitting and optimization."""
+ """
+ A general minimizer for curve fitting and optimization.
+ """
_err_nonparam = ("params must be a minimizer.Parameters() instance or list "
"of Parameters()")
@@ -303,17 +294,16 @@ class Minimizer(object):
def __init__(self, userfcn, params, fcn_args=None, fcn_kws=None,
iter_cb=None, scale_covar=True, nan_policy='raise',
reduce_fcn=None, **kws):
- """The Minimizer class initialization.
-
- The following parameters are accepted:
-
+ """
Parameters
----------
userfcn : callable
Objective function that returns the residual (difference between
model and data) to be minimized in a least squares sense. The
- function must have the signature:
- `userfcn(params, *fcn_args, **fcn_kws)`
+ function must have the signature::
+
+ userfcn(params, *fcn_args, **fcn_kws)
+
params : :class:`lmfit.parameter.Parameters` object.
Contains the Parameters for the model.
fcn_args : tuple, optional
@@ -322,30 +312,40 @@ def __init__(self, userfcn, params, fcn_args=None, fcn_kws=None,
Keyword arguments to pass to `userfcn`.
iter_cb : callable, optional
Function to be called at each fit iteration. This function should
- have the signature:
- `iter_cb(params, iter, resid, *fcn_args, **fcn_kws)`,
+ have the signature::
+
+ iter_cb(params, iter, resid, *fcn_args, **fcn_kws)
+
where `params` will have the current parameter values, `iter`
the iteration, `resid` the current residual array, and `*fcn_args`
and `**fcn_kws` as passed to the objective function.
scale_covar : bool, optional
- Whether to automatically scale the covariance matrix (leastsq
- only).
+ Whether to automatically scale the covariance matrix (leastsq only).
nan_policy : str, optional
Specifies action if `userfcn` (or a Jacobian) returns nan
values. One of:
- - 'raise' - a `ValueError` is raised
- - 'propagate' - the values returned from `userfcn` are un-altered
- - 'omit' - the non-finite values are filtered.
+
+ - 'raise' : a `ValueError` is raised
+ - 'propagate' : the values returned from `userfcn` are un-altered
+ - 'omit' : non-finite values are filtered.
+
reduce_fcn : str or callable, optional
Function to convert a residual array to a scalar value for the scalar
minimizers. Optional values are (where `r` is the residual array):
- - None : sum of squares of residual [default]
- (r*r).sum()
- - 'negentropy' : neg entropy, using normal distribution
- (rho*log(rho)).sum() for rho=exp(-r*r/2)/(sqrt(2*pi))
- - 'neglogcauchy' : neg log likelihood, using Cauchy distribution
- -log(1/(pi*(1+r*r))).sum()
- - callable : must take 1 argument (r) and return a float.
+
+ - None : sum of squares of residual [default]
+
+ = (r*r).sum()
+
+ - 'negentropy' : neg entropy, using normal distribution
+
+ = (rho*log(rho)).sum() for rho=exp(-r*r/2)/(sqrt(2*pi))
+
+ - 'neglogcauchy': neg log likelihood, using Cauchy distribution
+
+ = -log(1/(pi*(1+r*r))).sum()
+
+ - callable : must take 1 argument (r) and return a float.
kws : dict, optional
Options to pass to the minimizer being used.
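The `reduce_fcn` options listed above can also be supplied as a callable; for example, the 'negentropy' reduction sketched as a plain function matching the formula in the docstring:

```python
import numpy as np


def reduce_negentropy(r):
    # Negative entropy assuming normally distributed residuals:
    # (rho * log(rho)).sum() with rho = exp(-r*r/2) / sqrt(2*pi),
    # matching the 'negentropy' option described above.
    rho = np.exp(-r * r / 2.0) / np.sqrt(2 * np.pi)
    return (rho * np.log(rho)).sum()
```

A custom callable like this would be passed as `reduce_fcn=reduce_negentropy`; it must take the residual array as its single argument and return a float.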
@@ -412,17 +412,17 @@ def __residual(self, fvars, apply_bounds_transformation=True):
function to calculate the residual.
Parameters
- ----------------
+ ----------
fvars : np.ndarray
Array of new parameter values suggested by the minimizer.
- apply_bounds_transformation : bool, optional
- If true, apply lmfits parameter transformation to constrain
+ apply_bounds_transformation : bool, optional, default=`True`
+ Whether to apply lmfit's parameter transformation to constrain
parameters. This is needed for solvers without inbuilt support for
bounds.
Returns
- -----------
- residuals : np.ndarray
+ -------
+ residual : np.ndarray
The evaluated function values for given fvars.
"""
@@ -605,11 +605,9 @@ def prepare_fit(self, params=None):
return result
def unprepare_fit(self):
- """Clean the fit state.
-
- AST compilations of constraint expressions are removed, so that
- subsequent fits will need to call prepare_fit.
+ """Clean fit state, so thatt subsequent fits will need to call prepare_fit().
+ removes AST compilations of constraint expressions.
"""
pass
@@ -703,21 +701,22 @@ def scalar_minimize(self, method='Nelder-Mead', params=None, **kws):
fmin_kws.pop('jac')
if method == 'differential_evolution':
- fmin_kws['method'] = _differential_evolution
- bounds = np.asarray([(par.min, par.max)
- for par in params.values()])
- varying = np.asarray([par.vary for par in params.values()])
-
- if not np.all(np.isfinite(bounds[varying])):
- raise ValueError('With differential evolution finite bounds '
- 'are required for each varying parameter')
- bounds = [(-np.pi / 2., np.pi / 2.)] * len(vars)
- fmin_kws['bounds'] = bounds
-
- # in scipy 0.14 this can be called directly from scipy_minimize
- # When minimum scipy is 0.14 the following line and the else
- # can be removed.
- ret = _differential_evolution(self.penalty, vars, **fmin_kws)
+ for par in params.values():
+ if (par.vary and
+ not (np.isfinite(par.min) and np.isfinite(par.max))):
+ raise ValueError('differential_evolution requires finite '
+ 'bounds for all varying parameters')
+
+ _bounds = [(-np.pi / 2., np.pi / 2.)] * len(vars)
+ kwargs = dict(args=(), strategy='best1bin', maxiter=None,
+ popsize=15, tol=0.01, mutation=(0.5, 1),
+ recombination=0.7, seed=None, callback=None,
+ disp=False, polish=True, init='latinhypercube')
+
+ for k, v in fmin_kws.items():
+ if k in kwargs:
+ kwargs[k] = v
+ ret = differential_evolution(self.penalty, _bounds, **kwargs)
else:
ret = scipy_minimize(self.penalty, vars, **fmin_kws)
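The option filtering in the new dispatch can be isolated as a small helper (hypothetical name, standing in for the inline loop above) that drops keywords `scipy.optimize.differential_evolution` does not accept:

```python
def filter_diffev_kwargs(fmin_kws):
    # Defaults accepted by scipy.optimize.differential_evolution; any
    # other entries in fmin_kws (e.g. options meant for other solvers,
    # such as 'jac') are silently dropped, as in the dispatch code above.
    kwargs = dict(args=(), strategy='best1bin', maxiter=None,
                  popsize=15, tol=0.01, mutation=(0.5, 1),
                  recombination=0.7, seed=None, callback=None,
                  disp=False, polish=True, init='latinhypercube')
    for k, v in fmin_kws.items():
        if k in kwargs:
            kwargs[k] = v
    return kwargs
```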
@@ -740,7 +739,7 @@ def scalar_minimize(self, method='Nelder-Mead', params=None, **kws):
result.chisqr = (result.chisqr**2).sum()
result.ndata = len(result.residual)
result.nfree = result.ndata - result.nvarys
- result.redchi = result.chisqr / result.nfree
+ result.redchi = result.chisqr / max(1, result.nfree)
# this is -2*loglikelihood
_neg2_log_likel = result.ndata * np.log(result.chisqr / result.ndata)
result.aic = _neg2_log_likel + 2 * result.nvarys
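These statistics follow the formulas documented on `MinimizerResult`; as a standalone sketch of the same arithmetic:

```python
import numpy as np


def information_criteria(chisqr, ndata, nvarys):
    # -2*loglikelihood = N * ln(chi2 / N); then
    # AIC = -2*loglik + 2*Nvarys and BIC = -2*loglik + ln(N)*Nvarys
    neg2_log_likel = ndata * np.log(chisqr / ndata)
    aic = neg2_log_likel + 2 * nvarys
    bic = neg2_log_likel + np.log(ndata) * nvarys
    return aic, bic
```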
@@ -887,7 +886,7 @@ def emcee(self, params=None, steps=1000, nwalkers=100, burn=0, thin=1,
.. math::
- \ln p(D|F_{true}) = -\\frac{1}{2}\sum_n \left[\\frac{\left(g_n(F_{true}) - D_n \\right)^2}{s_n^2}+\ln (2\pi s_n^2)\\right]
+ \ln p(D|F_{true}) = -\frac{1}{2}\sum_n \left[\frac{(g_n(F_{true}) - D_n)^2}{s_n^2}+\ln (2\pi s_n^2)\right]
The first summand in the square brackets represents the residual for a
given datapoint (:math:`g` being the generative model, :math:`D_n` the
@@ -1410,7 +1409,7 @@ def brute(self, params=None, Ns=20, keep=50):
or all candidates when no number is specified.
- .. versionadded:: 0.96
+ .. versionadded:: 0.9.6
Notes
@@ -1519,10 +1518,10 @@ def minimize(self, method='leastsq', params=None, **kws):
Name of the fitting method to use. Valid values are:
- `'leastsq'`: Levenberg-Marquardt (default).
- Uses `scipy.optimize.leastsq`.
- - `'least_squares'`: Levenberg-Marquardt.
- Uses `scipy.optimize.least_squares`.
- - 'nelder': Nelder-Mead
+ - `'least_squares'`: Least-Squares minimization, using the Trust Region Reflective method by default.
+ - `'differential_evolution'`: differential evolution
+ - `'brute'`: brute force method.
+ - '`nelder`': Nelder-Mead
- `'lbfgsb'`: L-BFGS-B
- `'powell'`: Powell
- `'cg'`: Conjugate-Gradient
@@ -1532,9 +1531,13 @@ def minimize(self, method='leastsq', params=None, **kws):
- `'trust-ncg'`: Trust Newton-CGn
- `'dogleg'`: Dogleg
- `'slsqp'`: Sequential Linear Squares Programming
- - `'differential_evolution'`: differential evolution
- - `'brute'`: brute force method.
- Uses `scipy.optimize.brute`.
+
+ In most cases, these methods wrap and use the method of the
+ same name from `scipy.optimize`, or use
+ `scipy.optimize.minimize` with the same `method` argument.
+ Thus '`leastsq`' will use `scipy.optimize.leastsq`, while
+ '`powell`' will use `scipy.optimize.minimize(...,
+ method='powell')`.
For more details on the fitting methods please refer to the
`scipy docs `__.
@@ -1573,7 +1576,7 @@ def minimize(self, method='leastsq', params=None, **kws):
function = self.scalar_minimize
for key, val in SCALAR_METHODS.items():
if (key.lower().startswith(user_method) or
- val.lower().startswith(user_method)):
+ val.lower().startswith(user_method)):
kwargs['method'] = val
return function(**kwargs)
@@ -1779,7 +1782,7 @@ def _nan_policy(a, nan_policy='raise', handle_inf=True):
def minimize(fcn, params, method='leastsq', args=None, kws=None,
scale_covar=True, iter_cb=None, reduce_fcn=None, **fit_kws):
"""Perform a fit of a set of parameters by minimizing an objective (or
- "cost") function using one one of the several available methods.
+ cost) function using one of the several available methods.
The minimize function takes a objective function to be minimized,
a dictionary (:class:`lmfit.parameter.Parameters`) containing the model
@@ -1801,22 +1804,25 @@ def minimize(fcn, params, method='leastsq', args=None, kws=None,
Name of the fitting method to use. Valid values are:
- `'leastsq'`: Levenberg-Marquardt (default).
- Uses `scipy.optimize.leastsq`.
- - `'least_squares'`: Levenberg-Marquardt.
- Uses `scipy.optimize.least_squares`.
- - 'nelder': Nelder-Mead
+ - `'least_squares'`: Least-Squares minimization, using the Trust Region Reflective method by default.
+ - `'differential_evolution'`: differential evolution.
+ - `'brute'`: brute force method.
+ - '`nelder`': Nelder-Mead
- `'lbfgsb'`: L-BFGS-B
- `'powell'`: Powell
- `'cg'`: Conjugate-Gradient
- - `'newton'`: Newton-CG
+ - `'newton'`: Newton-Conjugate-Gradient
- `'cobyla'`: Cobyla
- `'tnc'`: Truncate Newton
- - `'trust-ncg'`: Trust Newton-CGn
+ - `'trust-ncg'`: Trust Newton-Conjugate-Gradient
- `'dogleg'`: Dogleg
- `'slsqp'`: Sequential Linear Squares Programming
- - `'differential_evolution'`: differential evolution
- - `'brute'`: brute force method.
- Uses `scipy.optimize.brute`.
+
+ In most cases, these methods wrap and use the method of the same
+ name from `scipy.optimize`, or use `scipy.optimize.minimize` with
+ the same `method` argument. Thus '`leastsq`' will use
+ `scipy.optimize.leastsq`, while '`powell`' will use
+ `scipy.optimize.minimize(..., method='powell')`.
For more details on the fitting methods please refer to the
`scipy docs `__.
diff --git a/lmfit/model.py b/lmfit/model.py
index a80328758..113ad1f4a 100644
--- a/lmfit/model.py
+++ b/lmfit/model.py
@@ -53,42 +53,54 @@ def no_op(*args, **kwargs):
return no_op
-
class Model(object):
- """Create a model from a user-defined function.
-
- Parameters
- ----------
- func: function to be wrapped
- independent_vars: list of strings or None (default)
- arguments to func that are independent variables
- param_names: list of strings or None (default)
- names of arguments to func that are to be made into parameters
- missing: None, 'none', 'drop', or 'raise'
- 'none' or None: Do not check for null or missing values (default)
- 'drop': Drop null or missing observations in data.
- if pandas is installed, pandas.isnull is used, otherwise
- numpy.isnan is used.
- 'raise': Raise a (more helpful) exception when data contains null
- or missing values.
- name: None or string
- name for the model. When `None` (default) the name is the same as
- the model function (`func`).
-
- Note
- ----
- Parameter names are inferred from the function arguments,
- and a residual function is automatically constructed.
-
- Example
- -------
- >>> def decay(t, tau, N):
- ... return N*np.exp(-t/tau)
- ...
- >>> my_model = Model(decay, independent_vars=['t'])
+ """Create a model from a user-supplied model function that returns an
+ array of data to model some data as for a curve-fitting problem.
- """
+ The model function will normally take an independent variable (generally,
+ the first argument) and a series of arguments that are meant to be parameters
+ for the model. Thus, a simple peak using a Gaussian defined as
+
+ >>> import numpy as np
+ >>> def gaussian(x, amp, cen, wid):
+ ... return amp * np.exp(-(x-cen)**2 / wid)
+
+ can be turned into a Model with
+
+ >>> gmodel = Model(gaussian)
+
+ This will automatically discover the names of the independent variables and parameters:
+
+ >>> print(gmodel.param_names, gmodel.independent_vars)
+ ['amp', 'cen', 'wid'] ['x']
+
+ The :meth:`make_params` method will create a Parameters object for this model, optionally
+ taking initial values:
+
+ >>> params = gmodel.make_params(amp=1, cen=0, wid=1)
+
+ You can also use :meth:`set_param_hint` to set default attributes for the parameters
+ created, so that doing
+
+ >>> gmodel.set_param_hint('wid', min=0.0)
+
+ would set a bound on the 'wid' Parameter created by :meth:`make_params`.
+
+ A model and corresponding Parameters can be used to evaluate the model for a given
+ input array (or single value) of independent variables with:
+
+ >>> xdata = np.linspace(-1, 1, 101)
+ >>> yinit = gmodel.eval(params, x=xdata)
+
+ and can be used to fit the Parameters to match an array of data:
+
+ >>> result = gmodel.fit(ydata, params, x=xdata)
+
+ which will return a `ModelResult` object.
+
+ Models can be combined into a `CompositeModel` by adding (or
+ multiplying, subtracting, or dividing) two or more Models.
+ """
_forbidden_args = ('data', 'weights', 'params')
_invalid_ivar = "Invalid independent variable name ('%s') for function %s"
_invalid_par = "Invalid parameter name ('%s') for function %s"
@@ -100,7 +112,46 @@ class Model(object):
def __init__(self, func, independent_vars=None, param_names=None,
missing='none', prefix='', name=None, **kws):
- """TODO: docstring in public method."""
+ """
+ Parameters
+ ----------
+ func: function to be wrapped
+ independent_vars: list of strings or ``None`` (default)
+ arguments to func that are independent variables
+ param_names: list of strings or ``None`` (default)
+ names of arguments to func that are to be made into parameters
+ missing: ``None`` or string
+ how to handle `nan` and missing values in data. One of:
+
+ - 'none' or ``None``: Do not check for null or missing values (default)
+
+ - 'drop': Drop null or missing observations in data. if pandas is
+ installed, `pandas.isnull` is used, otherwise `numpy.isnan` is used.
+
+ - 'raise': Raise a (more helpful) exception when data contains null
+ or missing values.
+
+ name: ``None`` or string
+ name for the model. When ``None`` (default) the name is the same as
+ the model function (`func`).
+ kws: optional dict
+ additional keyword arguments to pass to model function.
+
+ Notes
+ -----
+ 1. Parameter names are inferred from the function arguments,
+ and a residual function is automatically constructed.
+
+ 2. The model function must return an array that will be the same
+ size as the data being modeled.
+
+ Example
+ -------
+ >>> def decay(t, tau, N):
+ ... return N*np.exp(-t/tau)
+ ...
+ >>> my_model = Model(decay, independent_vars=['t'])
+ """
self.func = func
self._prefix = prefix
self._param_root_names = param_names # will not include prefixes
@@ -135,7 +186,7 @@ def _reprstring(self, long=False):
@property
def name(self):
- """TODO: add method docstring."""
+ """return Model name"""
return self._reprstring(long=False)
@name.setter
@@ -144,7 +195,7 @@ def name(self, value):
@property
def prefix(self):
- """TODO: add method docstring."""
+ """return Model prefix"""
return self._prefix
@property
@@ -237,15 +288,34 @@ def _parse_params(self):
self._param_names = names[:]
def set_param_hint(self, name, **kwargs):
- """Set hints for parameter.
+ """Set *hints* to use when creating parameters with `make_params()`
+ for the named parameter. This is especially convenient for setting initial
+ values. The ``name`` can include the model's ``prefix`` or not.
- Including optional bounds and constraints
- (value, vary, min, max, expr) these will be used by make_params()
+ The hint given can also include optional bounds and constraints
+ (value, vary, min, max, expr) which will be used by make_params()
when building default parameters.
- example:
- model = GaussianModel()
- model.set_param_hint('amplitude', min=-100.0, max=0.)
+ Parameters
+ ----------
+ name : string
+ Parameter name
+ value : float or ``None`` (default)
+ value for parameter.
+ min : float or ``-np.inf`` (default)
+ lower bound for parameter value.
+ max : float or ``np.inf`` (default)
+ upper bound for parameter value.
+ vary : bool (default ``True``)
+ whether to vary or fix the parameter value.
+ expr : string or ``None`` (default)
+ expression to use to constrain parameter value.
+
+ Example
+ --------
+
+ >>> model = GaussianModel()
+ >>> model.set_param_hint('sigma', min=0)
"""
npref = len(self._prefix)
@@ -281,10 +351,25 @@ def print_param_hints(self, colwidth=8):
print(line.format(name_len=name_len, n=colwidth, **pvalues))
def make_params(self, verbose=False, **kwargs):
- """Create and return a Parameters object for a Model.
+ """Create a Parameters object for a Model.
+
+ Parameters
+ ----------
+ kwargs : optional initial values
+ parameter names and initial values
- This applies any default values
+ verbose : bool (default ``False``)
+ whether to print out messages
+ Returns
+ -------
+ params : Parameters
+
+ Notes
+ -----
+ 1. The created parameters may not have sensible initial values for
+ each parameter unless they are supplied here or via parameter hints.
+ 2. This applies any default values or parameter hints that may have been set.
"""
params = Parameters()
@@ -346,15 +431,30 @@ def make_params(self, verbose=False, **kwargs):
p._delay_asteval = False
return params
- def guess(self, data=None, **kws):
- """Stub for guess starting values.
+ def guess(self, data, **kws):
+ """Guess starting values for the parameters of a model.
+ This is not implemented for all models, but is available
+ for many of the built-in models.
+
+ Parameters
+ ----------
+ data : array-like
+ array of data to use to guess parameter values.
+ kws : additional keyword arguments, passed to model function.
+
+ Returns
+ -------
+ params : Parameters
- Note
- ----
+ Notes
+ -----
Should be implemented for each model subclass to run
self.make_params(), update starting values and return a
Parameters object.
+ Raises
+ ------
+ NotImplementedError
"""
cname = self.__class__.__name__
msg = 'guess() not implemented for %s' % cname
@@ -445,7 +545,31 @@ def _make_all_args(self, params=None, **kwargs):
return args
def eval(self, params=None, **kwargs):
- """Evaluate the model with the supplied parameters."""
+ """Evaluate the model with the supplied parameters and keyword arguments
+
+ Parameters
+ ----------
+ params : Parameters or ``None``
+ parameters to use in Model
+ kwargs : optional
+ keyword arguments to pass to model function.
+
+ Returns
+ -------
+ val : ndarray
+ value of model given the parameters and other arguments.
+
+ Notes
+ -----
+ 1. If `params` is ``None``, the values for all parameters are
+ expected to be provided as keyword arguments. If `params` is
+ given, and a keyword argument for a parameter value is also given,
+ the keyword argument will be used.
+
+ 2. All non-parameter arguments for the model function, **including
+ all the independent variables** will need to be passed in using
+ keyword arguments.
+ """
return self.func(**self.make_funcargs(params, kwargs))
@property
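Note 1 above (keyword arguments overriding `params` values) can be sketched as a simple dict merge (hypothetical helper, standing in for `make_funcargs`):

```python
def merge_funcargs(param_values, kwargs):
    # Start from the parameter values, then let explicit keyword
    # arguments (including independent variables) take precedence,
    # as described in the eval() Notes above.
    out = dict(param_values)
    out.update(kwargs)
    return out
```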
@@ -456,8 +580,17 @@ def components(self):
def eval_components(self, params=None, **kwargs):
"""Evaluate the model with the supplied parameters.
- Returns a ordered dict containting name, result pairs.
+ Parameters
+ ----------
+ params : Parameters or ``None``
+ parameters to use in Model
+ kwargs : optional
+ keyword arguments to pass to model function.
+ Returns
+ -------
+ comps : OrderedDict
+ keys are prefixes for component model, values are value of each component.
"""
key = self._prefix
if len(key) < 1:
@@ -467,23 +600,29 @@ def eval_components(self, params=None, **kwargs):
def fit(self, data, params=None, weights=None, method='leastsq',
iter_cb=None, scale_covar=True, verbose=False, fit_kws=None,
**kwargs):
- """Fit the model to the data.
+ """Fit the model to the data using the supplied Parameters.
Parameters
----------
data: array-like
- params: Parameters object
- weights: array-like of same size as data
- used for weighted fit
- method: fitting method to use (default = 'leastsq')
- iter_cb: None or callable callback function to call at each iteration.
- scale_covar: bool (default True) whether to auto-scale covariance matrix
- verbose: bool (default True) print a message when a new parameter is
- added because of a hint.
- fit_kws: dict
- default fitting options, such as xtol and maxfev, for scipy optimizer
- keyword arguments: optional, named like the arguments of the
- model function, will override params. See examples below.
+ array of data to be fit.
+ params: Parameters or ``None`` (default)
+ parameters to use in fit.
+ weights: array-like of same size as data or ``None`` (default)
+ weights to use for the calculation of the fit residual.
+ method: string (default = 'leastsq')
+ name of fitting method to use
+ iter_cb: callable or ``None`` (default)
+ callback function to call at each iteration.
+ scale_covar: bool (default ``True``)
+ whether to automatically scale the covariance matrix when calculating
+ uncertainties. `leastsq` method only.
+ verbose: bool (default ``False``)
+ whether to print a message when a new parameter is added because of a hint.
+ fit_kws: dict or ``None`` (default)
+ default fitting options, such as xtol and maxfev, for scipy optimizer
+ kwargs: optional
+ arguments to pass to the model function, possibly overriding params
Returns
-------
@@ -491,23 +630,33 @@ def fit(self, data, params=None, weights=None, method='leastsq',
Examples
--------
- # Take t to be the independent variable and data to be the
- # curve we will fit.
+ Take t to be the independent variable and data to be the curve we will fit.
+ Use keyword arguments to set initial guesses:
- # Using keyword arguments to set initial guesses
>>> result = my_model.fit(data, tau=5, N=3, t=t)
- # Or, for more control, pass a Parameters object.
+ Or, for more control, pass a Parameters object.
+
>>> result = my_model.fit(data, params, t=t)
- # Keyword arguments override Parameters.
- >>> result = my_model.fit(data, params, tau=5, t=t)
+ Keyword arguments override Parameters.
- Note
- ----
- All parameters, however passed, are copied on input, so the original
- Parameter objects are unchanged.
+ >>> result = my_model.fit(data, params, tau=5, t=t)
+
+ Notes
+ -----
+ 1. If `params` is ``None``, the values for all parameters are
+ expected to be provided as keyword arguments. If `params` is
+ given, and a keyword argument for a parameter value is also given,
+ the keyword argument will be used.
+
+ 2. All non-parameter arguments for the model function, **including
+ all the independent variables** will need to be passed in using
+ keyword arguments.
+
+ 3. Parameters (however passed in), are copied on input, so the
+ original Parameter objects are unchanged, and the updated values
+ are in the returned `ModelResult`.
"""
if params is None:
params = self.make_params(verbose=verbose)
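Note 3 above (copy-on-input) is easy to demonstrate with a toy illustration. `fit_sketch` is a hypothetical stand-in, not lmfit code, and the hard-coded 4.2 merely pretends to be an optimizer update:

```python
import copy

# Toy illustration of Note 3: fit() works on a deep copy of the incoming
# parameters, so the caller's objects are never mutated and the updated
# values come back on the returned result.
def fit_sketch(params):
    work = copy.deepcopy(params)
    work['tau'] = 4.2            # pretend the optimizer refined this value
    return work

original = {'tau': 5.0, 'N': 3.0}
result = fit_sketch(original)
# original['tau'] is still 5.0; result['tau'] holds the updated value
```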
@@ -603,31 +752,16 @@ def __truediv__(self, other):
class CompositeModel(Model):
- """Create a composite model -- a binary operator of two Models.
-
- Parameters
- ----------
- left_model: left-hand side model-- must be a Model()
- right_model: right-hand side model -- must be a Model()
- oper: callable binary operator (typically, operator.add, operator.mul, etc)
-
- independent_vars: list of strings or None (default)
- arguments to func that are independent variables
- param_names: list of strings or None (default)
- names of arguments to func that are to be made into parameters
- missing: None, 'none', 'drop', or 'raise'
- 'none' or None: Do not check for null or missing values (default)
- 'drop': Drop null or missing observations in data.
- if pandas is installed, pandas.isnull is used, otherwise
- numpy.isnan is used.
- 'raise': Raise a (more helpful) exception when data contains null
- or missing values.
- name: None or string
- name for the model. When `None` (default) the name is the same as
- the model function (`func`).
+ """A composite model combines two models (`left` and `right`) with a
+ binary operator (`op`).
- """
+ Normally, one does not have to explicitly create a `CompositeModel`,
+ but can use normal Python operators `+`, `-`, `*`, and `/` to combine
+ components, as in::
+
+ >>> mod = Model(fcn1) + Model(fcn2) * Model(fcn3)
+ """
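The evaluation rule (Note 1 below: both sides see the same independent variable, and the results are combined by the operator) can be sketched with toy stand-ins. `TinyModel` and `TinyComposite` are illustrative only, not lmfit API:

```python
import operator

# Toy stand-ins showing how a composite model evaluates: each side is
# evaluated on the same independent variable, then the two results are
# combined with the binary operator.
class TinyModel:
    def __init__(self, func):
        self.func = func

    def eval(self, **kws):
        return self.func(**kws)

    def __add__(self, other):
        return TinyComposite(self, other, operator.add)

class TinyComposite(TinyModel):
    def __init__(self, left, right, op):
        self.left, self.right, self.op = left, right, op

    def eval(self, **kws):
        # same keyword arguments (independent variables) go to both sides
        return self.op(self.left.eval(**kws), self.right.eval(**kws))

mod = TinyModel(lambda x: 2 * x) + TinyModel(lambda x: x + 1)
```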
_names_collide = ("\nTwo models have parameters named '{clash}'. "
"Use distinct names.")
_bad_arg = "CompositeModel: argument {arg} is not a Model"
@@ -636,7 +770,23 @@ class CompositeModel(Model):
operator.mul: '*', operator.truediv: '/'}
def __init__(self, left, right, op, **kws):
- """TODO: docstring in public method."""
+ """
+ Parameters
+ ----------
+ left : `Model` instance
+ left-hand model
+ right : `Model` instance
+ right-hand model
+ op : callable binary operator
+ operator to combine `left` and `right` models.
+ kwargs : optional
+ additional keywords are passed to `Model` when creating this
+ new model.
+
+ Notes
+ -----
+ 1. The two models must use the same independent variable.
+ """
if not isinstance(left, Model):
raise ValueError(self._bad_arg.format(arg=left))
if not isinstance(right, Model):
@@ -712,65 +862,39 @@ def _make_all_args(self, params=None, **kwargs):
out.update(self.left._make_all_args(params=params, **kwargs))
return out
-
class ModelResult(Minimizer):
"""Result from Model fit.
-
- Attributes
- -----------
- model instance of Model -- the model function
- params instance of Parameters -- the fit parameters
- data array of data values to compare to model
- weights array of weights used in fitting
- init_params copy of params, before being updated by fit()
- init_values array of parameter values, before being updated by fit()
- init_fit model evaluated with init_params.
- best_fit model evaluated with params after being updated by fit()
-
- Methods:
- --------
- fit(data=None, params=None, weights=None, method=None, **kwargs)
- fit (or re-fit) model with params to data (with weights)
- using supplied method. The keyword arguments are sent to
- as keyword arguments to the model function.
-
- all inputs are optional, defaulting to the value used in
- the previous fit. This allows easily changing data or
- parameter settings, or both.
-
- eval(params=None, **kwargs)
- evaluate the current model, with parameters (defaults to the current
- parameter values), with values in kwargs sent to the model function.
-
- eval_components(params=Nones, **kwargs)
- evaluate the current model, with parameters (defaults to the current
- parameter values), with values in kwargs sent to the model function
- and returns an ordered dict with the model names as the key and the
- component results as the values.
-
- fit_report(modelpars=None, show_correl=True, min_correl=0.1)
- return a fit report.
-
- plot_fit(self, ax=None, datafmt='o', fitfmt='-', initfmt='--', xlabel = None, ylabel=None,
- numpoints=None, data_kws=None, fit_kws=None, init_kws=None,
- ax_kws=None)
- Plot the fit results using matplotlib.
-
- plot_residuals(self, ax=None, datafmt='o', data_kws=None, fit_kws=None,
- ax_kws=None)
- Plot the fit residuals using matplotlib.
-
- plot(self, datafmt='o', fitfmt='-', initfmt='--', xlabel=None, ylabel=None, numpoints=None,
- data_kws=None, fit_kws=None, init_kws=None, ax_res_kws=None,
- ax_fit_kws=None, fig_kws=None)
- Plot the fit results and residuals using matplotlib.
-
+ This has many attributes and methods for viewing and working with the
+ results of a fit using Model. It inherits from Minimizer, so that it
+ can be used to modify and re-run the fit for the Model.
"""
-
def __init__(self, model, params, data=None, weights=None,
method='leastsq', fcn_args=None, fcn_kws=None,
iter_cb=None, scale_covar=True, **fit_kws):
- """TODO: docstring in public method."""
+ """
+ Parameters
+ ----------
+ model : Model instance
+ model to use.
+ params : Parameters instance
+ parameters with initial values for model.
+ data : array-like or ``None``
+ data to be modeled.
+ weights : array-like or ``None``
+ weights to multiply (data-model) for fit residual.
+ method : string
+ name of minimization method to use
+ fcn_args : sequence or ``None``
+ positional arguments to send to model function
+ fcn_kws : dict or ``None``
+ keyword arguments to send to model function
+ iter_cb : callable or ``None``
+ function to call on each iteration of fit.
+ scale_covar : bool
+ whether to scale covariance matrix for uncertainty evaluation
+ fit_kws : optional
+ keyword arguments to send to minimization routine.
+ """
self.model = model
self.data = data
self.weights = weights
@@ -782,7 +906,21 @@ def __init__(self, model, params, data=None, weights=None,
scale_covar=scale_covar, **fit_kws)
def fit(self, data=None, params=None, weights=None, method=None, **kwargs):
- """Perform fit for a Model, given data and params."""
+ """Re-perform fit for a Model, given data and params.
+
+ Parameters
+ ----------
+ data : array-like or ``None``
+ data to be modeled.
+ params : Parameters instance
+ parameters with initial values for model.
+ weights : array-like or ``None``
+ weights to multiply (data-model) for fit residual.
+ method : string
+ name of minimization method to use
+ kwargs : optional
+ keyword arguments to send to minimization routine.
+ """
if data is not None:
self.data = data
if params is not None:
@@ -812,12 +950,17 @@ def fit(self, data=None, params=None, weights=None, method=None, **kwargs):
def eval(self, params=None, **kwargs):
"""Evaluate model function.
- Arguments: params (Parameters): parameters,
- defaults to ModelResult .params kwargs (variable): values of options,
- independent variables, etc.
-
- Returns: ndarray or float for evaluated model
+ Parameters
+ ----------
+ params : Parameters or ``None`` (default)
+ Parameters to use.
+ kwargs : optional
+ options to send Model.eval()
+ Returns
+ -------
+ out : ndarray
+ array for evaluated model
"""
self.userkws.update(kwargs)
if params is None:
@@ -827,12 +970,18 @@ def eval(self, params=None, **kwargs):
def eval_components(self, params=None, **kwargs):
"""Evaluate each component of a composite model function.
- Arguments:
- params (Parameters): parameters, defaults to ModelResult .params
- kwargs (variable): values of options, independent variables, etc.
+ Parameters
+ ----------
+ params : Parameters or ``None``
+ parameters, defaults to ModelResult.params
+ kwargs : optional
+ keyword arguments to pass to model function.
- Returns: ordered dictionary with keys of prefixes, and
- values of values for each component of the model.
+ Returns
+ -------
+ comps : ordered dictionary
+ keys are prefixes of component models, and values are
+ the estimated model value for each component of the model.
"""
self.userkws.update(kwargs)
@@ -841,38 +990,44 @@ def eval_components(self, params=None, **kwargs):
return self.model.eval_components(params=params, **self.userkws)
def eval_uncertainty(self, params=None, sigma=1, **kwargs):
- """Evaluate the uncertainty of the *model function*.
+ """Evaluate the uncertainty of the *model function* from the
+ uncertainties for the best-fit parameters. This can be used
+ to give confidence bands for the model.
+
+ Parameters
+ ----------
+ params : Parameters or ``None``
+ parameters, defaults to ModelResult.params
+ sigma : float
+ confidence level, i.e. how many sigma [default=1]
+ kwargs : optional
+ values of options, independent variables, etc
- The uncertainty is evaluated from the uncertainties for the
- best-fit parameters. This can be used to give confidence bands
- for the model.
+ Returns
+ -------
+ out : ndarray
+ uncertainty at each value of the model.
- Arguments:
- params (Parameters): parameters, defaults to ModelResult .params
- sigma (float): confidence level, i.e. how many sigma [default=1]
- kwargs (variable): values of options, independent variables, etc
+ Example
+ -------
- Returns:
- ndarray for the uncertainty at each value of the model.
-
- Example:
- out = model.fit(data, params, x=x)
- dely = out.eval_confidence_band(x=x)
- plt.plot(x, data)
- plt.plot(x, out.best_fit)
- plt.fill_between(x, out.best_fit-dely,
- out.best_fit+dely, color='#888888')
-
- Notes:
- 1. This is based on the excellent and clear example from
- https://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html#confidence-and-prediction-intervals
- which references the original work of
- J. Wolberg,Data Analysis Using the Method of Least Squares, 2006, Springer
- 2. the value of sigma is number of `sigma` values, and is converted to a probability.
- Values or 1, 2, or 3 give probalities of 0.6827, 0.9545, and 0.9973, respectively.
- If the sigma value is < 1, it is interpreted as the probability itself. That is,
- `sigma=1` and `sigma=0.6827` will give the same results, within precision errors.
+ >>> out = model.fit(data, params, x=x)
+ >>> dely = out.eval_uncertainty(x=x)
+ >>> plt.plot(x, data)
+ >>> plt.plot(x, out.best_fit)
+ >>> plt.fill_between(x, out.best_fit-dely,
+ ... out.best_fit+dely, color='#888888')
+
+ Notes
+ -----
+ 1. This is based on the excellent and clear example from
+ https://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html#confidence-and-prediction-intervals
+ which references the original work of J. Wolberg, Data Analysis
+ Using the Method of Least Squares, 2006, Springer.
+ 2. The value of `sigma` is the number of standard deviations, and is
+ converted to a probability. Values of 1, 2, or 3 give probabilities
+ of 0.6827, 0.9545, and 0.9973, respectively. If the sigma value is
+ < 1, it is interpreted as the probability itself. That is, `sigma=1`
+ and `sigma=0.6827` will give the same results, within precision errors.
"""
self.userkws.update(kwargs)
if params is None:
@@ -911,30 +1066,82 @@ def eval_uncertainty(self, params=None, sigma=1, **kwargs):
return np.sqrt(df2*self.redchi) * t.ppf((prob+1)/2.0, ndata-nvarys)
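The sigma-to-probability conversion described in Note 2 can be checked with a small sketch. This is not the actual implementation (which uses scipy.stats, per the surrounding code); `math.erf` gives the same conversion for a normal distribution, and `sigma_to_prob` is an illustrative name:

```python
import math

# Sketch of the documented sigma handling: a value >= 1 counts standard
# deviations and is converted to the corresponding two-sided probability;
# a value < 1 is taken as the probability itself.
def sigma_to_prob(sigma):
    if sigma < 1:
        return sigma
    return math.erf(sigma / math.sqrt(2))
```

As the Note states, `sigma_to_prob(1)` and `sigma_to_prob(0.6827)` agree to within precision errors.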
def conf_interval(self, **kwargs):
- """Return explicitly calculated confidence intervals."""
+ """Calculate the confidence intervals for the variable parameters
+ using :func:`confidence.conf_interval()`. Keyword arguments are
+ passed to that function. The result is stored in :attr:`ci_out`,
+ and so can be accessed without recalculating them.
+ """
if self.ci_out is None:
self.ci_out = conf_interval(self, self, **kwargs)
return self.ci_out
def ci_report(self, with_offset=True, ndigits=5, **kwargs):
- """Return nicely formatted report about confidence intervals."""
+ """Return a nicely formatted text report of the confidence
+ intervals, as from :func:`ci_report()`.
+
+ Parameters
+ ----------
+ with_offset : bool (default `True`)
+ Whether to subtract best value from all other values.
+ ndigits : int (default 5)
+ Number of significant digits to show.
+
+ Returns
+ -------
+ Text of formatted report on confidence intervals.
+
+ """
return ci_report(self.conf_interval(**kwargs),
with_offset=with_offset, ndigits=ndigits)
- def fit_report(self, **kwargs):
- """Return fit report."""
- return '[[Model]]\n %s\n%s\n' % (self.model._reprstring(long=True),
- fit_report(self, **kwargs))
+ def fit_report(self, modelpars=None, show_correl=True,
+ min_correl=0.1, sort_pars=False):
+ """Return a printable fit report for the fit, with fit statistics,
+ best-fit values with uncertainties, and correlations.
+
+ Parameters
+ ----------
+ modelpars : optional
+ known Model Parameters
+ show_correl : bool, default ``True``
+ whether to show list of sorted correlations
+ min_correl : float, default 0.1
+ smallest correlation absolute value to show.
+ sort_pars : bool, default ``False``, or callable
+ whether to show parameter names sorted in alphanumerical order. If
+ ``False``, then the parameters will be listed in the order they were
+ added to the Parameters dictionary. If callable, then this (one
+ argument) function is used to extract a comparison key from each
+ list element.
+
+ Returns
+ -------
+ text : string
+ multi-line text of fit report
+
+ See Also
+ --------
+ :func:`fit_report()`
+ """
+ report = fit_report(self.params, modelpars=modelpars,
+ show_correl=show_correl,
+ min_correl=min_correl, sort_pars=sort_pars)
+ modname = self.model._reprstring(long=True)
+ return '[[Model]]\n %s\n%s\n' % (modname, report)
+
@_ensureMatplotlib
def plot_fit(self, ax=None, datafmt='o', fitfmt='-', initfmt='--',
xlabel=None, ylabel=None, yerr=None, numpoints=None,
data_kws=None, fit_kws=None, init_kws=None, ax_kws=None):
- """Plot the fit results using matplotlib.
+ """Plot the fit results using matplotlib, if available.
+ The plot will include the data points, the initial fit curve, and
+ the best-fit curve. If the fit model included weights or if ``yerr``
+ is specified, errorbars will also be plotted.
- The method will plot results of the fit using matplotlib, including:
- the data points, the initial fit curve and the fitted curve. If the fit
- model included weights, errorbars will also be plotted.
Parameters
----------
@@ -972,7 +1179,7 @@ def plot_fit(self, ax=None, datafmt='o', fitfmt='-', initfmt='--',
matplotlib.axes.Axes
Notes
- ----
+ -----
For details about plot format strings and keyword arguments see
documentation of matplotlib.axes.Axes.plot.
@@ -980,7 +1187,7 @@ def plot_fit(self, ax=None, datafmt='o', fitfmt='-', initfmt='--',
matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is
not specified and the fit includes weights, yerr set to 1/self.weights
- If `ax` is None then matplotlib.pyplot.gca(**ax_kws) is called.
+ If `ax` is None then `matplotlib.pyplot.gca(**ax_kws)` is called.
See Also
--------
@@ -1048,11 +1255,9 @@ def plot_fit(self, ax=None, datafmt='o', fitfmt='-', initfmt='--',
@_ensureMatplotlib
def plot_residuals(self, ax=None, datafmt='o', yerr=None, data_kws=None,
fit_kws=None, ax_kws=None):
- """Plot the fit residuals using matplotlib.
-
- The method will plot residuals of the fit using matplotlib, including:
- the data points and the fitted curve (as horizontal line). If the fit
- model included weights, errorbars will also be plotted.
+ """Plot the fit residuals using matplotlib, if available. If ``yerr``
+ is supplied or if the model included weights, errorbars will also
+ be plotted.
Parameters
----------
@@ -1075,7 +1280,7 @@ def plot_residuals(self, ax=None, datafmt='o', yerr=None, data_kws=None,
matplotlib.axes.Axes
Notes
- ----
+ -----
For details about plot format strings and keyword arguments see
documentation of matplotlib.axes.Axes.plot.
@@ -1083,7 +1288,7 @@ def plot_residuals(self, ax=None, datafmt='o', yerr=None, data_kws=None,
matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is
not specified and the fit includes weights, yerr set to 1/self.weights
- If `ax` is None then matplotlib.pyplot.gca(**ax_kws) is called.
+ If `ax` is None then `matplotlib.pyplot.gca(**ax_kws)` is called.
See Also
--------
@@ -1131,8 +1336,7 @@ def plot(self, datafmt='o', fitfmt='-', initfmt='--', xlabel=None,
ylabel=None, yerr=None, numpoints=None, fig=None, data_kws=None,
fit_kws=None, init_kws=None, ax_res_kws=None, ax_fit_kws=None,
fig_kws=None):
- """Plot the fit results and residuals using matplotlib.
-
+ """Plot the fit results and residuals using matplotlib, if available.
The method will produce a matplotlib figure with both results of the
fit and the residuals plotted. If the fit model included weights,
errorbars will also be plotted.
@@ -1177,14 +1381,14 @@ def plot(self, datafmt='o', fitfmt='-', initfmt='--', xlabel=None,
A tuple with matplotlib's Figure and GridSpec objects.
Notes
- ----
+ -----
The method combines ModelResult.plot_fit and ModelResult.plot_residuals.
If yerr is specified or if the fit model included weights, then
matplotlib.axes.Axes.errorbar is used to plot the data. If yerr is
not specified and the fit includes weights, yerr set to 1/self.weights
- If `fig` is None then matplotlib.pyplot.figure(**fig_kws) is called,
+ If `fig` is None then `matplotlib.pyplot.figure(**fig_kws)` is called,
otherwise `fig_kws` is ignored.
See Also
diff --git a/lmfit/models.py b/lmfit/models.py
index 2d1ec9441..e6b3dc646 100644
--- a/lmfit/models.py
+++ b/lmfit/models.py
@@ -11,26 +11,21 @@
skewed_voigt, step, students_t, voigt)
from .model import Model
-
class DimensionalError(Exception):
"""TODO: class docstring."""
-
pass
-
def _validate_1d(independent_vars):
if len(independent_vars) != 1:
raise DimensionalError(
"This model requires exactly one independent variable.")
-
def index_of(arr, val):
"""Return index of array nearest to a value."""
if val < min(arr):
return 0
return np.abs(arr-val).argmin()
-
def fwhm_expr(model):
"""Return constraint expression for fwhm."""
fmt = "{factor:.7f}*{prefix:s}sigma"
@@ -79,98 +74,164 @@ def update_param_vals(pars, prefix, **kwargs):
return pars
-COMMON_DOC = """
-
-Parameters
-----------
-independent_vars: list of strings to be set as variable names
-missing: None, 'drop', or 'raise'
- None: Do not check for null or missing values.
- 'drop': Drop null or missing observations in data.
- Use pandas.isnull if pandas is available; otherwise,
- silently fall back to numpy.isnan.
- 'raise': Raise a (more helpful) exception when data contains null
- or missing values.
-prefix: string to prepend to paramter names, needed to add two Models that
- have parameter names in common. None by default.
+COMMON_INIT_DOC = """
+ Parameters
+ ----------
+ independent_vars: list of strings (default ['x'])
+ arguments to func that are independent variables
+ prefix: string or ``None``
+ string to prepend to parameter names, needed to add two Models
+ that have parameter names in common.
+ missing: string or ``None``
+ how to handle `nan` and missing values in data. One of:
+
+ - 'none' or ``None``: Do not check for null or missing values (default)
+
+ - 'drop': Drop null or missing observations in data. if pandas is
+ installed, `pandas.isnull` is used, otherwise `numpy.isnan` is used.
+
+ - 'raise': Raise a (more helpful) exception when data contains null
+ or missing values.
+ kwargs : optional
+ keyword arguments to pass to :class:`Model`.
"""
+COMMON_GUESS_DOC = """Guess starting values for the parameters of a model.
+
+ Parameters
+ ----------
+ data : array-like
+ array of data to use to guess parameter values.
+ kws : additional keyword arguments, passed to model function.
+
+ Returns
+ -------
+ params : Parameters
+"""
+
+COMMON_DOC = COMMON_INIT_DOC
class ConstantModel(Model):
- __doc__ = "x -> c" + COMMON_DOC
+ """Constant model, with a single Parameter: ``c``.
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
+ Note that this is 'constant' in the sense of having no dependence on
+ the independent variable ``x``, not in the sense of being non-varying.
+ To be clear, ``c`` will be a Parameter that will be varied in the
+ fit (by default, of course).
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
def constant(x, c):
return c
- super(ConstantModel, self).__init__(constant, *args, **kwargs)
+ super(ConstantModel, self).__init__(constant, **kwargs)
def guess(self, data, **kwargs):
- """TODO: docstring in public method."""
pars = self.make_params()
pars['%sc' % self.prefix].set(value=data.mean())
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
class ComplexConstantModel(Model):
- __doc__ = "x -> re+1j*im" + COMMON_DOC
+ """Complex constant model, with two Parameters:
+ ``re`` and ``im``.
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
+ Note that ``re`` and ``im`` are 'constant' in the sense of having no
+ dependence on the independent variable ``x``, not in the sense of
+ being non-varying. To be clear, ``re`` and ``im`` will be Parameters
+ that will be varied in the fit (by default, of course).
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
def constant(x, re, im):
return re + 1j*im
- super(ComplexConstantModel, self).__init__(constant, *args, **kwargs)
+ super(ComplexConstantModel, self).__init__(constant, **kwargs)
def guess(self, data, **kwargs):
- """TODO: docstring in public method."""
pars = self.make_params()
pars['%sre' % self.prefix].set(value=data.real.mean())
pars['%sim' % self.prefix].set(value=data.imag.mean())
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
class LinearModel(Model):
- __doc__ = linear.__doc__ + COMMON_DOC if linear.__doc__ else ""
+ """Linear model, with two Parameters
+ ``intercept`` and ``slope``, defined as
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(LinearModel, self).__init__(linear, *args, **kwargs)
+ .. math::
+
+ f(x; m, b) = m x + b
+
+ with ``slope`` for :math:`m` and ``intercept`` for :math:`b`.
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(LinearModel, self).__init__(linear, **kwargs)
def guess(self, data, x=None, **kwargs):
- """TODO: docstring in public method."""
sval, oval = 0., 0.
if x is not None:
sval, oval = np.polyfit(x, data, 1)
pars = self.make_params(intercept=oval, slope=sval)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class QuadraticModel(Model):
- __doc__ = parabolic.__doc__ + COMMON_DOC if parabolic.__doc__ else ""
+ """A quadratic model, with three Parameters
+ ``a``, ``b``, and ``c``, defined as
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(QuadraticModel, self).__init__(parabolic, *args, **kwargs)
+ .. math::
+
+ f(x; a, b, c) = a x^2 + b x + c
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(QuadraticModel, self).__init__(parabolic, **kwargs)
def guess(self, data, x=None, **kwargs):
- """TODO: docstring in public method."""
a, b, c = 0., 0., 0.
if x is not None:
a, b, c = np.polyfit(x, data, 2)
pars = self.make_params(a=a, b=b, c=c)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
ParabolicModel = QuadraticModel
class PolynomialModel(Model):
- __doc__ = "x -> c0 + c1 * x + c2 * x**2 + ... c7 * x**7" + COMMON_DOC
+ r"""A polynomial model with up to 8 Parameters, specified by ``degree``.
+
+ .. math::
+
+ f(x; c_0, c_1, \ldots, c_7) = \sum_{i=0}^{7} c_i x^i
+
+ with parameters ``c0``, ``c1``, ..., ``c7``. The supplied ``degree``
+ will specify how many of these are actual variable parameters. This
+ uses :numpydoc:`polyval` for its calculation of the polynomial.
+ """
MAX_DEGREE = 7
DEGREE_ERR = "degree must be an integer less than %d."
-
- def __init__(self, degree, *args, **kwargs):
- """TODO: docstring in public method."""
+ def __init__(self, degree, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
if not isinstance(degree, int) or degree > self.MAX_DEGREE:
raise TypeError(self.DEGREE_ERR % self.MAX_DEGREE)
@@ -181,10 +242,9 @@ def __init__(self, degree, *args, **kwargs):
def polynomial(x, c0=0, c1=0, c2=0, c3=0, c4=0, c5=0, c6=0, c7=0):
return np.polyval([c7, c6, c5, c4, c3, c2, c1, c0], x)
- super(PolynomialModel, self).__init__(polynomial, *args, **kwargs)
+ super(PolynomialModel, self).__init__(polynomial, **kwargs)
def guess(self, data, x=None, **kwargs):
- """TODO: docstring in public method."""
pars = self.make_params()
if x is not None:
out = np.polyfit(x, data, self.poly_degree)
@@ -192,232 +252,468 @@ def guess(self, data, x=None, **kwargs):
pars['%sc%i' % (self.prefix, i)].set(value=coef)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
class GaussianModel(Model):
- __doc__ = gaussian.__doc__ + COMMON_DOC if gaussian.__doc__ else ""
+ r"""A model based on a Gaussian or normal distribution lineshape
+ (see http://en.wikipedia.org/wiki/Normal_distribution), with three Parameters:
+ ``amplitude``, ``center``, and ``sigma``.
+ In addition, parameters ``fwhm`` and ``height`` are included as constraints
+ to report full width at half maximum and maximum peak height, respectively.
+
+ .. math::
+
+ f(x; A, \mu, \sigma) = \frac{A}{\sigma\sqrt{2\pi}} e^{[{-{(x-\mu)^2}/{{2\sigma}^2}}]}
+
+ where the parameter ``amplitude`` corresponds to :math:`A`, ``center`` to
+ :math:`\mu`, and ``sigma`` to :math:`\sigma`. The full width at
+ half maximum is :math:`2\sigma\sqrt{2\ln{2}}`, approximately
+ :math:`2.3548\sigma`.
+ """
fwhm_factor = 2.354820
height_factor = 1./np.sqrt(2*np.pi)
-
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(GaussianModel, self).__init__(gaussian, *args, **kwargs)
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(GaussianModel, self).__init__(gaussian, **kwargs)
self.set_param_hint('sigma', min=0)
self.set_param_hint('fwhm', expr=fwhm_expr(self))
self.set_param_hint('height', expr=height_expr(self))
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
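The `fwhm` hint set in `__init__` ties `fwhm` to `sigma` through the class's `fwhm_factor`, using the `"{factor:.7f}*{prefix:s}sigma"` template from `fwhm_expr` earlier in this diff. A hedged re-derivation of the Gaussian factor (`fwhm_expr_sketch` is an illustrative helper, not the lmfit function itself):

```python
import math

# For a Gaussian lineshape, FWHM = 2*sqrt(2*ln 2) * sigma ~ 2.3548 * sigma,
# which is where the fwhm_factor class attribute comes from.
def fwhm_expr_sketch(factor, prefix=''):
    # same template string that fwhm_expr uses for the constraint expression
    return "{factor:.7f}*{prefix:s}sigma".format(factor=factor, prefix=prefix)

gauss_factor = 2 * math.sqrt(2 * math.log(2))
expr = fwhm_expr_sketch(gauss_factor, prefix='g1_')
```

With a component prefix such as `g1_`, the constraint expression refers to that component's own `sigma` parameter.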
class LorentzianModel(Model):
- __doc__ = lorentzian.__doc__ + COMMON_DOC if lorentzian.__doc__ else ""
+ r"""A model based on a Lorentzian or Cauchy-Lorentz distribution function
+ (see http://en.wikipedia.org/wiki/Cauchy_distribution), with three Parameters:
+ ``amplitude``, ``center``, and ``sigma``.
+ In addition, parameters ``fwhm`` and ``height`` are included as constraints
+ to report full width at half maximum and maximum peak height, respectively.
+
+ .. math::
+
+ f(x; A, \mu, \sigma) = \frac{A}{\pi} \big[\frac{\sigma}{(x - \mu)^2 + \sigma^2}\big]
+
+ where the parameter ``amplitude`` corresponds to :math:`A`, ``center`` to
+ :math:`\mu`, and ``sigma`` to :math:`\sigma`. The full width at
+ half maximum is :math:`2\sigma`.
+ """
fwhm_factor = 2.0
height_factor = 1./np.pi
-
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(LorentzianModel, self).__init__(lorentzian, *args, **kwargs)
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(LorentzianModel, self).__init__(lorentzian, **kwargs)
self.set_param_hint('sigma', min=0)
self.set_param_hint('fwhm', expr=fwhm_expr(self))
self.set_param_hint('height', expr=height_expr(self))
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative, ampscale=1.25)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
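The ``fwhm_factor`` and ``height_factor`` class attributes above follow directly from the Lorentzian formula; a small illustrative check (local `lorentzian` helper, not lmfit's implementation):

```python
import numpy as np

def lorentzian(x, amplitude=1.0, center=0.0, sigma=1.0):
    # Cauchy-Lorentz form documented for LorentzianModel
    return (amplitude / np.pi) * sigma / ((x - center)**2 + sigma**2)

amp, sigma = 3.0, 0.7
x = np.linspace(-20, 20, 400001)          # 1e-4 grid spacing
y = lorentzian(x, amplitude=amp, sigma=sigma)
assert abs(y.max() - amp / (np.pi * sigma)) < 1e-8     # height_factor = 1/pi
above = x[y >= 0.5 * y.max()]
assert abs((above[-1] - above[0]) - 2 * sigma) < 1e-3  # fwhm_factor = 2.0
```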
class VoigtModel(Model):
- __doc__ = voigt.__doc__ + COMMON_DOC if voigt.__doc__ else ""
+ r"""A model based on a Voigt distribution function (see
+ http://en.wikipedia.org/wiki/Voigt_profile), with four Parameters:
+ ``amplitude``, ``center``, ``sigma``, and ``gamma``. By default,
+ ``gamma`` is constrained to have value equal to ``sigma``, though it
+ can be varied independently. In addition, parameters ``fwhm`` and
+ ``height`` are included as constraints to report full width at half
+ maximum and maximum peak height, respectively. The definition for the
+ Voigt function used here is
+
+ .. math::
+
+ f(x; A, \mu, \sigma, \gamma) = \frac{A \textrm{Re}[w(z)]}{\sigma\sqrt{2 \pi}}
+
+ where
+
+ .. math::
+ :nowrap:
+
+ \begin{eqnarray*}
+ z &=& \frac{x-\mu +i\gamma}{\sigma\sqrt{2}} \\
+ w(z) &=& e^{-z^2}{\operatorname{erfc}}(-iz)
+ \end{eqnarray*}
+
+ and :func:`erfc` is the complementary error function. As above,
+ ``amplitude`` corresponds to :math:`A`, ``center`` to
+ :math:`\mu`, and ``sigma`` to :math:`\sigma`. The parameter ``gamma``
+ corresponds to :math:`\gamma`.
+ If ``gamma`` is kept at the default value (constrained to ``sigma``),
+ the full width at half maximum is approximately :math:`3.6013\sigma`.
+
+ """
fwhm_factor = 3.60131
height_factor = 1./np.sqrt(2*np.pi)
-
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(VoigtModel, self).__init__(voigt, *args, **kwargs)
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(VoigtModel, self).__init__(voigt, **kwargs)
self.set_param_hint('sigma', min=0)
self.set_param_hint('gamma', expr='%ssigma' % self.prefix)
self.set_param_hint('fwhm', expr=fwhm_expr(self))
self.set_param_hint('height', expr=height_expr(self))
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative,
ampscale=1.5, sigscale=0.65)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
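The 3.6013σ figure quoted for the default ``gamma = sigma`` case can be cross-checked without scipy by using Whiting's published approximation for the Voigt FWHM (accurate to roughly 0.02%); the 0.5346 and 0.2166 constants below come from that approximation, not from lmfit:

```python
import numpy as np

sigma = 1.0
gamma = sigma                                    # the default gamma = sigma constraint
f_lorentz = 2.0 * gamma                          # Lorentzian component FWHM
f_gauss = 2.0 * np.sqrt(2 * np.log(2)) * sigma   # Gaussian component FWHM
# Whiting's approximation to the Voigt FWHM
f_voigt = 0.5346 * f_lorentz + np.sqrt(0.2166 * f_lorentz**2 + f_gauss**2)
assert abs(f_voigt - 3.60131 * sigma) < 0.005    # matches fwhm_factor above
```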
class PseudoVoigtModel(Model):
- __doc__ = pvoigt.__doc__ + COMMON_DOC if pvoigt.__doc__ else ""
+ r"""A model based on a pseudo-Voigt distribution function
+ (see http://en.wikipedia.org/wiki/Voigt_profile#Pseudo-Voigt_Approximation),
+ which is a weighted sum of Gaussian and Lorentzian distribution functions
+ that share values for ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`),
+ and full width at half maximum (and so have constrained values of
+ ``sigma`` (:math:`\sigma`)). A parameter ``fraction`` (:math:`\alpha`)
+ controls the relative weight of the Gaussian and Lorentzian components,
+ giving the full definition of
+
+ .. math::
+
+ f(x; A, \mu, \sigma, \alpha) = \frac{(1-\alpha)A}{\sigma_g\sqrt{2\pi}}
+ e^{[{-{(x-\mu)^2}/{{2\sigma_g}^2}}]}
+ + \frac{\alpha A}{\pi} \big[\frac{\sigma}{(x - \mu)^2 + \sigma^2}\big]
+
+ where :math:`\sigma_g = {\sigma}/{\sqrt{2\ln{2}}}` so that the full width
+ at half maximum of each component and of the sum is :math:`2\sigma`. The
+ :meth:`guess` function always sets the starting value for ``fraction`` at 0.5.
+ """
+
fwhm_factor = 2.0
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(PseudoVoigtModel, self).__init__(pvoigt, *args, **kwargs)
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(PseudoVoigtModel, self).__init__(pvoigt, **kwargs)
self.set_param_hint('sigma', min=0)
self.set_param_hint('fraction', value=0.5)
self.set_param_hint('fwhm', expr=fwhm_expr(self))
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative, ampscale=1.25)
pars['%sfraction' % self.prefix].set(value=0.5)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
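The substitution σ_g = σ/√(2 ln 2) is what makes the two components share one FWHM; a short numpy sketch illustrates this (unnormalized shapes are sufficient for a width check):

```python
import numpy as np

sigma = 0.9
sigma_g = sigma / np.sqrt(2 * np.log(2))   # substitution from the docstring

x = np.linspace(-15, 15, 300001)           # 1e-4 grid spacing
gauss = np.exp(-x**2 / (2 * sigma_g**2))   # unnormalized Gaussian, width sigma_g
lorentz = sigma**2 / (x**2 + sigma**2)     # unnormalized Lorentzian, width sigma

def fwhm(y):
    above = x[y >= 0.5 * y.max()]
    return above[-1] - above[0]

# both components, and hence the weighted sum, have FWHM = 2*sigma
assert abs(fwhm(gauss) - 2 * sigma) < 1e-3
assert abs(fwhm(lorentz) - 2 * sigma) < 1e-3
```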
class MoffatModel(Model):
- __doc__ = moffat.__doc__ + COMMON_DOC if moffat.__doc__ else ""
+ r"""A model based on the Moffat distribution function
+ (see https://en.wikipedia.org/wiki/Moffat_distribution), with four Parameters:
+ ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`), a width parameter
+ ``sigma`` (:math:`\sigma`) and an exponent ``beta`` (:math:`\beta`).
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(MoffatModel, self).__init__(moffat, *args, **kwargs)
+ .. math::
+
+ f(x; A, \mu, \sigma, \beta) = A \big[(\frac{x-\mu}{\sigma})^2+1\big]^{-\beta}
+
+ where the full width at half maximum is :math:`2\sigma\sqrt{2^{1/\beta}-1}`.
+ The :meth:`guess` function always sets the starting value for ``beta`` to 1.
+
+ Note that for :math:`\beta=1` the Moffat has a Lorentzian shape.
+ """
+
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(MoffatModel, self).__init__(moffat, **kwargs)
self.set_param_hint('sigma', min=0)
self.set_param_hint('beta')
self.set_param_hint('fwhm', expr="2*%ssigma*sqrt(2**(1.0/%sbeta)-1)" % (self.prefix, self.prefix))
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative, ampscale=0.5, sigscale=1.)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
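Both claims in the Moffat docstring (the FWHM expression used in the ``fwhm`` hint, and the Lorentzian shape at β = 1) can be checked numerically with a local re-implementation of the lineshape (a sketch, not lmfit's code):

```python
import numpy as np

def moffat(x, amplitude=1.0, center=0.0, sigma=1.0, beta=1.0):
    return amplitude * (((x - center) / sigma)**2 + 1)**(-beta)

sigma, beta = 1.5, 2.5
x = np.linspace(-30, 30, 600001)           # 1e-4 grid spacing
y = moffat(x, sigma=sigma, beta=beta)
above = x[y >= 0.5 * y.max()]
expected = 2 * sigma * np.sqrt(2**(1.0 / beta) - 1)   # the 'fwhm' hint expression
assert abs((above[-1] - above[0]) - expected) < 1e-3

# for beta = 1, the Moffat reduces to a Lorentzian shape
assert np.allclose(moffat(x, sigma=sigma, beta=1.0),
                   sigma**2 / (x**2 + sigma**2))
```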
class Pearson7Model(Model):
- __doc__ = pearson7.__doc__ + COMMON_DOC if pearson7.__doc__ else ""
+ r"""A model based on a Pearson VII distribution (see
+ http://en.wikipedia.org/wiki/Pearson_distribution#The_Pearson_type_VII_distribution),
+ with four Parameters: ``amplitude`` (:math:`A`), ``center``
+ (:math:`\mu`), ``sigma`` (:math:`\sigma`), and ``exponent`` (:math:`m`) in
+
+ .. math::
+
+ f(x; A, \mu, \sigma, m) = \frac{A}{\sigma{\beta(m-\frac{1}{2}, \frac{1}{2})}} \bigl[1 + \frac{(x-\mu)^2}{\sigma^2} \bigr]^{-m}
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(Pearson7Model, self).__init__(pearson7, *args, **kwargs)
+ where :math:`\beta` is the beta function (see :scipydoc:`special.beta` in
+ :mod:`scipy.special`). The :meth:`guess` function always
+ gives a starting value for ``exponent`` of 1.5.
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(Pearson7Model, self).__init__(pearson7, **kwargs)
self.set_param_hint('expon', value=1.5)
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative)
pars['%sexpon' % self.prefix].set(value=1.5)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class StudentsTModel(Model):
- __doc__ = students_t.__doc__ + COMMON_DOC if students_t.__doc__ else ""
+ r"""A model based on a Student's t distribution function (see
+ http://en.wikipedia.org/wiki/Student%27s_t-distribution), with three Parameters:
+ ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and ``sigma`` (:math:`\sigma`) in
+
+ .. math::
+
+ f(x; A, \mu, \sigma) = \frac{A \Gamma(\frac{\sigma+1}{2})} {\sqrt{\sigma\pi}\,\Gamma(\frac{\sigma}{2})} \Bigl[1+\frac{(x-\mu)^2}{\sigma}\Bigr]^{-\frac{\sigma+1}{2}}
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(StudentsTModel, self).__init__(students_t, *args, **kwargs)
+
+ where :math:`\Gamma(x)` is the gamma function.
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(StudentsTModel, self).__init__(students_t, **kwargs)
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class BreitWignerModel(Model):
- __doc__ = breit_wigner.__doc__ + COMMON_DOC if breit_wigner.__doc__ else ""
+ r"""A model based on a Breit-Wigner-Fano function (see
+ http://en.wikipedia.org/wiki/Fano_resonance), with four Parameters:
+ ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`),
+ ``sigma`` (:math:`\sigma`), and ``q`` (:math:`q`) in
+
+ .. math::
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(BreitWignerModel, self).__init__(breit_wigner, *args, **kwargs)
+ f(x; A, \mu, \sigma, q) = \frac{A (q\sigma/2 + x - \mu)^2}{(\sigma/2)^2 + (x - \mu)^2}
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(BreitWignerModel, self).__init__(breit_wigner, **kwargs)
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative)
pars['%sq' % self.prefix].set(value=1.0)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class LognormalModel(Model):
- __doc__ = lognormal.__doc__ + COMMON_DOC if lognormal.__doc__ else ""
+ r"""A model based on the Log-normal distribution function
+ (see http://en.wikipedia.org/wiki/Lognormal), with three Parameters
+ ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and ``sigma``
+ (:math:`\sigma`) in
+
+ .. math::
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(LognormalModel, self).__init__(lognormal, *args, **kwargs)
+ f(x; A, \mu, \sigma) = \frac{A e^{-(\ln(x) - \mu)^2/ 2\sigma^2}}{x}
+
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(LognormalModel, self).__init__(lognormal, **kwargs)
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = self.make_params(amplitude=1.0, center=0.0, sigma=0.25)
pars['%ssigma' % self.prefix].set(min=0.0)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class DampedOscillatorModel(Model):
- __doc__ = damped_oscillator.__doc__ + COMMON_DOC if damped_oscillator.__doc__ else ""
+ r"""A model based on the Damped Harmonic Oscillator Amplitude
+ (see http://en.wikipedia.org/wiki/Harmonic_oscillator#Amplitude_part), with
+ three Parameters: ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and
+ ``sigma`` (:math:`\sigma`) in
+
+ .. math::
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(DampedOscillatorModel, self).__init__(damped_oscillator, *args, **kwargs)
+ f(x; A, \mu, \sigma) = \frac{A}{\sqrt{ [1 - (x/\mu)^2]^2 + (2\sigma x/\mu)^2}}
+
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(DampedOscillatorModel, self).__init__(damped_oscillator, **kwargs)
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative,
ampscale=0.1, sigscale=0.1)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class DampedHarmonicOscillatorModel(Model):
- __doc__ = dho.__doc__ + COMMON_DOC if dho.__doc__ else ""
+ r"""A model based on a variation of the Damped Harmonic Oscillator (see
+ http://en.wikipedia.org/wiki/Harmonic_oscillator), following the
+ definition given in DAVE/PAN (see https://www.ncnr.nist.gov/dave/) with
+ four Parameters: ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`),
+ ``sigma`` (:math:`\sigma`), and ``gamma`` (:math:`\gamma`) in
+
+ .. math::
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(DampedOscillatorModel, self).__init__(dho, *args, **kwargs)
+ f(x; A, \mu, \sigma, \gamma) = \frac{A\sigma}{\pi [1 - \exp(-x/\gamma)]}
+ \Big[ \frac{1}{(x-\mu)^2 + \sigma^2} - \frac{1}{(x+\mu)^2 + \sigma^2} \Big]
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(DampedHarmonicOscillatorModel, self).__init__(dho, **kwargs)
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative,
ampscale=0.1, sigscale=0.1)
pars['%sgamma' % self.prefix].set(value=1.0, min=0.0)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class ExponentialGaussianModel(Model):
- __doc__ = expgaussian.__doc__ + COMMON_DOC if expgaussian.__doc__ else ""
+ r"""A model of an Exponentially modified Gaussian distribution
+ (see http://en.wikipedia.org/wiki/Exponentially_modified_Gaussian_distribution) with
+ four Parameters ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`),
+ ``sigma`` (:math:`\sigma`), and ``gamma`` (:math:`\gamma`) in
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(ExponentialGaussianModel, self).__init__(expgaussian, *args, **kwargs)
+ .. math::
+
+ f(x; A, \mu, \sigma, \gamma) = \frac{A\gamma}{2}
+ \exp\bigl[\gamma({\mu - x + \gamma\sigma^2/2})\bigr]
+ {\operatorname{erfc}}\Bigl(\frac{\mu + \gamma\sigma^2 - x}{\sqrt{2}\sigma}\Bigr)
+
+
+ where :func:`erfc` is the complementary error function.
+
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(ExponentialGaussianModel, self).__init__(expgaussian, **kwargs)
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class SkewedGaussianModel(Model):
- __doc__ = skewed_gaussian.__doc__ + COMMON_DOC if skewed_gaussian.__doc__ else ""
- fwhm_factor = 2.354820
+ r"""A variation of the Exponential Gaussian, using a skewed normal distribution
+ (see http://en.wikipedia.org/wiki/Skew_normal_distribution), with Parameters
+ ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`), ``sigma`` (:math:`\sigma`),
+ and ``gamma`` (:math:`\gamma`) in
+
+ .. math::
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(SkewedGaussianModel, self).__init__(skewed_gaussian, *args, **kwargs)
+ f(x; A, \mu, \sigma, \gamma) = \frac{A}{\sigma\sqrt{2\pi}}
+ e^{[{-{(x-\mu)^2}/{{2\sigma}^2}}]} \Bigl\{ 1 +
+ {\operatorname{erf}}\bigl[
+ \frac{\gamma(x-\mu)}{\sigma\sqrt{2}}
+ \bigr] \Bigr\}
+
+
+ where :func:`erf` is the error function.
+ """
+ fwhm_factor = 2.354820
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(SkewedGaussianModel, self).__init__(skewed_gaussian, **kwargs)
self.set_param_hint('sigma', min=0)
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class DonaichModel(Model):
- __doc__ = donaich.__doc__ + COMMON_DOC if donaich.__doc__ else ""
+ r"""A model of a Doniach-Sunjic asymmetric lineshape
+ (see http://www.casaxps.com/help_manual/line_shapes.htm), used in
+ photo-emission, with four Parameters ``amplitude`` (:math:`A`),
+ ``center`` (:math:`\mu`), ``sigma`` (:math:`\sigma`), and ``gamma``
+ (:math:`\gamma`) in
+
+ .. math::
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(DonaichModel, self).__init__(donaich, *args, **kwargs)
+ f(x; A, \mu, \sigma, \gamma) = A\frac{\cos\bigl[\pi\gamma/2 + (1-\gamma)
+ \arctan{[(x - \mu)/\sigma]}\bigr]} {\bigl[1 + (x-\mu)^2/\sigma^2\bigr]^{(1-\gamma)/2}}
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(DonaichModel, self).__init__(donaich, **kwargs)
def guess(self, data, x=None, negative=False, **kwargs):
- """TODO: docstring in public method."""
pars = guess_from_peak(self, data, x, negative, ampscale=0.5)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
class PowerLawModel(Model):
- __doc__ = powerlaw.__doc__ + COMMON_DOC if powerlaw.__doc__ else ""
+ r"""A model based on a Power Law (see http://en.wikipedia.org/wiki/Power_law),
+ with two Parameters: ``amplitude`` (:math:`A`) and ``exponent`` (:math:`k`), in:
+
+ .. math::
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(PowerLawModel, self).__init__(powerlaw, *args, **kwargs)
+ f(x; A, k) = A x^k
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(PowerLawModel, self).__init__(powerlaw, **kwargs)
def guess(self, data, x=None, **kwargs):
- """TODO: docstring in public method."""
try:
expon, amp = np.polyfit(np.log(x+1.e-14), np.log(data+1.e-14), 1)
except:
@@ -426,16 +722,27 @@ def guess(self, data, x=None, **kwargs):
pars = self.make_params(amplitude=np.exp(amp), exponent=expon)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class ExponentialModel(Model):
- __doc__ = exponential.__doc__ + COMMON_DOC if exponential.__doc__ else ""
+ r"""A model based on an exponential decay function
+ (see http://en.wikipedia.org/wiki/Exponential_decay) with two Parameters:
+ ``amplitude`` (:math:`A`), and ``decay`` (:math:`\tau`), in:
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(ExponentialModel, self).__init__(exponential, *args, **kwargs)
+ .. math::
+
+ f(x; A, \tau) = A e^{-x/\tau}
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(ExponentialModel, self).__init__(exponential, **kwargs)
def guess(self, data, x=None, **kwargs):
- """TODO: docstring in public method."""
+
try:
sval, oval = np.polyfit(x, np.log(abs(data)+1.e-15), 1)
except:
@@ -443,16 +750,46 @@ def guess(self, data, x=None, **kwargs):
pars = self.make_params(amplitude=np.exp(oval), decay=-1.0/sval)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
+
class StepModel(Model):
- __doc__ = step.__doc__ + COMMON_DOC if step.__doc__ else ""
+ r"""A model based on a Step function, with three Parameters:
+ ``amplitude`` (:math:`A`), ``center`` (:math:`\mu`) and ``sigma`` (:math:`\sigma`)
+ and four choices for functional form:
+
+ - ``linear`` (the default)
+
+ - ``atan`` or ``arctan`` for an arc-tangent function
+
+ - ``erf`` for an error function
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(StepModel, self).__init__(step, *args, **kwargs)
+ - ``logistic`` for a logistic function (see http://en.wikipedia.org/wiki/Logistic_function).
+
+ The step function starts with a value of 0 and ends with a value of
+ :math:`A`, rising through :math:`A/2` at :math:`\mu`, with :math:`\sigma`
+ setting the characteristic width. The forms are
+
+ .. math::
+ :nowrap:
+
+ \begin{eqnarray*}
+ & f(x; A, \mu, \sigma, {\mathrm{form={}'linear{}'}}) & = A \min{[1, \max{(0, \alpha)}]} \\
+ & f(x; A, \mu, \sigma, {\mathrm{form={}'arctan{}'}}) & = A [1/2 + \arctan{(\alpha)}/{\pi}] \\
+ & f(x; A, \mu, \sigma, {\mathrm{form={}'erf{}'}}) & = A [1 + {\operatorname{erf}}(\alpha)]/2 \\
+ & f(x; A, \mu, \sigma, {\mathrm{form={}'logistic{}'}})& = A [1 - \frac{1}{1 + e^{\alpha}} ]
+ \end{eqnarray*}
+
+ where :math:`\alpha = (x - \mu)/{\sigma}`.
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(StepModel, self).__init__(step, **kwargs)
def guess(self, data, x=None, **kwargs):
- """TODO: docstring in public method."""
if x is None:
return
ymin, ymax = min(data), max(data)
@@ -462,13 +799,49 @@ def guess(self, data, x=None, **kwargs):
pars['%ssigma' % self.prefix].set(value=(xmax-xmin)/7.0, min=0.0)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
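The three smooth step forms all cross A/2 exactly at x = μ, while the linear form rises from 0 starting at μ. A standalone sketch of the documented forms (the `step_value` helper is local and scalar-valued, not lmfit's vectorized implementation):

```python
from math import erf, pi, atan, exp

def step_value(x, amplitude=1.0, center=0.0, sigma=1.0, form='linear'):
    # scalar version of the documented step forms; alpha = (x - mu)/sigma
    alpha = (x - center) / sigma
    if form == 'erf':
        return amplitude * (1 + erf(alpha)) / 2.0
    if form == 'logistic':
        return amplitude * (1.0 - 1.0 / (1.0 + exp(alpha)))
    if form in ('atan', 'arctan'):
        return amplitude * (0.5 + atan(alpha) / pi)
    return amplitude * min(1.0, max(0.0, alpha))    # 'linear'

amp, mu = 4.0, 1.0
for form in ('erf', 'logistic', 'arctan'):
    assert abs(step_value(mu, amplitude=amp, center=mu, form=form) - amp / 2) < 1e-12
assert step_value(mu, amplitude=amp, center=mu, form='linear') == 0.0
```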
class RectangleModel(Model):
- __doc__ = rectangle.__doc__ + COMMON_DOC if rectangle.__doc__ else ""
+ r"""A model based on a Step-up and Step-down function, with five
+ Parameters: ``amplitude`` (:math:`A`), ``center1`` (:math:`\mu_1`),
+ ``center2`` (:math:`\mu_2`), ``sigma1`` (:math:`\sigma_1`) and
+ ``sigma2`` (:math:`\sigma_2`) and four choices for functional form
+ (which is used for both the Step up and the Step down):
+
+ - ``linear`` (the default)
+
+ - ``atan`` or ``arctan`` for an arc-tangent function
+
+ - ``erf`` for an error function
+
+ - ``logistic`` for a logistic function (see http://en.wikipedia.org/wiki/Logistic_function).
+
+ The function starts with a value 0, transitions to a value of
+ :math:`A`, taking the value :math:`A/2` at :math:`\mu_1`, with :math:`\sigma_1`
+ setting the characteristic width. The function then transitions back to 0,
+ taking the value :math:`A/2` at :math:`\mu_2`, with :math:`\sigma_2` setting the
+ characteristic width. The forms are
+
+ .. math::
+ :nowrap:
+
+ \begin{eqnarray*}
+ &f(x; A, \mu, \sigma, {\mathrm{form={}'linear{}'}}) &= A \{ \min{[1, \max{(0, \alpha_1)}]} + \min{[-1, \max{(0, \alpha_2)}]} \} \\
+ &f(x; A, \mu, \sigma, {\mathrm{form={}'arctan{}'}}) &= A [\arctan{(\alpha_1)} + \arctan{(\alpha_2)}]/{\pi} \\
+ &f(x; A, \mu, \sigma, {\mathrm{form={}'erf{}'}}) &= A [{\operatorname{erf}}(\alpha_1) + {\operatorname{erf}}(\alpha_2)]/2 \\
+ &f(x; A, \mu, \sigma, {\mathrm{form={}'logistic{}'}}) &= A [1 - \frac{1}{1 + e^{\alpha_1}} - \frac{1}{1 + e^{\alpha_2}} ]
+ \end{eqnarray*}
- def __init__(self, *args, **kwargs):
- """TODO: docstring in public method."""
- super(RectangleModel, self).__init__(rectangle, *args, **kwargs)
+
+ where :math:`\alpha_1 = (x - \mu_1)/{\sigma_1}` and
+ :math:`\alpha_2 = -(x - \mu_2)/{\sigma_2}`.
+ """
+ def __init__(self, independent_vars=['x'], prefix='', missing=None,
+ name=None, **kwargs):
+ kwargs.update({'prefix': prefix, 'missing': missing,
+ 'independent_vars': independent_vars})
+ super(RectangleModel, self).__init__(rectangle, **kwargs)
self.set_param_hint('center1')
self.set_param_hint('center2')
@@ -477,7 +850,6 @@ def __init__(self, *args, **kwargs):
self.prefix))
def guess(self, data, x=None, **kwargs):
- """TODO: docstring in public method."""
if x is None:
return
ymin, ymax = min(data), max(data)
@@ -489,32 +861,49 @@ def guess(self, data, x=None, **kwargs):
pars['%ssigma2' % self.prefix].set(value=(xmax-xmin)/7.0, min=0.0)
return update_param_vals(pars, self.prefix, **kwargs)
+ __init__.__doc__ = COMMON_INIT_DOC
+ guess.__doc__ = COMMON_GUESS_DOC
-class ExpressionModel(Model):
- """Model from User-supplied expression.
-
- Parameters
- ----------
- expr: string of mathematical expression for model.
- independent_vars: list of strings to be set as variable names
- missing: None, 'drop', or 'raise'
- None: Do not check for null or missing values.
- 'drop': Drop null or missing observations in data.
- Use pandas.isnull if pandas is available; otherwise,
- silently fall back to numpy.isnan.
- 'raise': Raise a (more helpful) exception when data contains null
- or missing values.
- prefix: NOT supported for ExpressionModel
- """
+class ExpressionModel(Model):
idvar_missing = "No independent variable found in\n %s"
idvar_notfound = "Cannot find independent variables '%s' in\n %s"
no_prefix = "ExpressionModel does not support `prefix` argument"
def __init__(self, expr, independent_vars=None, init_script=None,
- *args, **kwargs):
- """TODO: docstring in public method."""
+ missing=None, **kws):
+ """Model from User-supplied expression.
+
+ Parameters
+ ----------
+ expr: string
+ mathematical expression for model.
+ independent_vars: list of strings or ``None``
+ variable names to use as independent variables
+ init_script: string or ``None``
+ initial script to run in asteval interpreter
+ missing: string or ``None``
+ how to handle `nan` and missing values in data. One of:
+
+ - 'none' or ``None``: Do not check for null or missing values (default)
+
+ - 'drop': Drop null or missing observations in data. If pandas is
+ installed, `pandas.isnull` is used, otherwise `numpy.isnan` is used.
+
+ - 'raise': Raise a (more helpful) exception when data contains null
+ or missing values.
+
+ kwargs : optional
+ keyword arguments to pass to :class:`Model`.
+
+ Notes
+ -----
+ 1. Each instance of ExpressionModel will create and use its own
+ version of an asteval interpreter.
+ 2. ``prefix`` is **not supported** for ExpressionModel.
+
+ """
# create ast evaluator, load custom functions
self.asteval = Interpreter()
for name in lineshapes.functions:
@@ -553,8 +942,8 @@ def __init__(self, expr, independent_vars=None, init_script=None,
lost = ', '.join(lost)
raise ValueError(self.idvar_notfound % (lost, self.expr))
- kwargs['independent_vars'] = independent_vars
- if 'prefix' in kwargs:
+ kws['independent_vars'] = independent_vars
+ if 'prefix' in kws:
raise Warning(self.no_prefix)
def _eval(**kwargs):
@@ -562,7 +951,7 @@ def _eval(**kwargs):
self.asteval.symtable[name] = val
return self.asteval.run(self.astcode)
- super(ExpressionModel, self).__init__(_eval, *args, **kwargs)
+ super(ExpressionModel, self).__init__(_eval, **kws)
# set param names here, and other things normally
# set in _parse_params(), which will be short-circuited.
diff --git a/lmfit/parameter.py b/lmfit/parameter.py
index 69b2af3c4..588557c97 100644
--- a/lmfit/parameter.py
+++ b/lmfit/parameter.py
@@ -53,25 +53,33 @@ def within_tol(x, y, atol, rtol):
else:
return False
-
class Parameters(OrderedDict):
- """A dictionary of all the Parameters required to specify a fit model.
+ """An ordered dictionary of all the Parameter objects required to
+ specify a fit model. All minimization and Model fitting routines in
+ lmfit will use exactly one Parameters object, typically given as the
+ first argument to the objective function.
- All keys must be strings, and valid Python symbol names, and all values
- must be Parameters.
+ All keys of a Parameters() instance must be strings, and valid Python
+ symbol names, so that the name must match ``[a-z_][a-z0-9_]*`` and
+ cannot be a Python reserved word.
- Custom methods:
- ---------------
+ All values of a Parameters() instance must be Parameter objects.
- add()
- add_many()
- dumps() / dump()
- loads() / load()
+ A Parameters() instance includes an asteval interpreter used for
+ evaluation of constrained Parameters.
+ A Parameters() instance supports copying and pickling, and has methods
+ to convert to and from serializations using json strings.
"""
def __init__(self, asteval=None, *args, **kwds):
- """TODO: add public method docstring."""
+ """
+ Arguments
+ ---------
+ asteval : ``None`` or instance of asteval.Interpreter
+ instance of Interpreter to use for constraint expressions.
+ If ``None``, a new interpreter will be created.
+ """
super(Parameters, self).__init__(self)
self._asteval = asteval
@@ -276,15 +284,37 @@ def pretty_print(self, oneline=False, colwidth=8, precision=4, fmt='g',
def add(self, name, value=None, vary=True, min=-inf, max=inf, expr=None,
brute_step=None):
- """Convenience function for adding a Parameter.
+ """Add a Parameter.
- Example
- -------
- p = Parameters()
- p.add(name, value=XX, ...)
+ Arguments
+ ---------
+ name : string
+ name of parameter. Must match ``[a-z_][a-z0-9_]*`` and
+ cannot be a Python reserved word.
+ value : ``None`` or float
+ floating point value for parameter, typically the *initial value*.
+ vary : bool (default ``True``)
+ whether the parameter should be varied in the fit.
+ min : float (default ``-np.inf``)
+ lower bound for parameter value.
+ max : float (default ``np.inf``)
+ upper bound for parameter value.
+ expr : ``None`` or string
+ expression in terms of other parameter names to constrain value.
+ brute_step : ``None`` or float
+ size of step to take when using the `brute()` method.
+
+ Examples
+ --------
+ >>> params = Parameters()
+ >>> params.add('xvar', value=0.50, min=0, max=1)
+ >>> params.add('yvar', expr='1.0 - xvar')
+
+ which is equivalent to:
- is equivalent to:
- p[name] = Parameter(name=name, value=XX, ....
+ >>> params = Parameters()
+ >>> params['xvar'] = Parameter(name='xvar', value=0.50, min=0, max=1)
+ >>> params['yvar'] = Parameter(name='yvar', expr='1.0 - xvar')
"""
if isinstance(name, Parameter):
@@ -295,31 +325,27 @@ def add(self, name, value=None, vary=True, min=-inf, max=inf, expr=None,
brute_step=brute_step))
def add_many(self, *parlist):
- """Convenience function for adding a list of Parameters.
+ """Add many parameters, using a sequence of tuples.
- Parameters
+ Arguments
----------
- parlist : sequence
+ parlist : sequence of tuples
A sequence of tuples, or a sequence of `Parameter` instances. If it
is a sequence of tuples, then each tuple must contain at least the
- name. The order in each tuple is the following:
-
- name, value, vary, min, max, expr, brute_step
-
- Example
- -------
- p = Parameters()
- # add a sequence of tuples
- p.add_many( (name1, val1, True, None, None, None, None),
- (name2, val2, True, 0.0, None, None, None),
- (name3, val3, False, None, None, None, None),
- (name4, val4))
-
- # add a sequence of Parameter
- f = Parameter('name5', val5)
- g = Parameter('name6', val6)
- p.add_many(f, g)
+ name. The order in each tuple must be
+ `(name, value, vary, min, max, expr, brute_step)`.
+
+ Examples
+ --------
+ >>> params = Parameters()
+ # add with tuples: (NAME VALUE VARY MIN MAX EXPR BRUTE_STEP)
+ >>> params.add_many(('amp', 10, True, None, None, None, None),
+ ... ('cen', 4, True, 0.0, None, None, None),
+ ... ('wid', 1, False, None, None, None, None),
+ ... ('frac', 0.5))
+ # add a sequence of Parameters
+ >>> f = Parameter('par_f', 100)
+ >>> g = Parameter('par_g', 2.)
+ >>> params.add_many(f, g)
"""
for para in parlist:
if isinstance(para, Parameter):
@@ -333,12 +359,11 @@ def valuesdict(self):
Returns
-------
- An ordered dictionary of name:value pairs for each Parameter.
- This is distinct from the Parameters itself, as it has values of
- the Parameter values, not the full Parameter object.
-
+ vals : ordered dict
+ An ordered dictionary of name:value pairs for each Parameter.
+ This is distinct from the Parameters itself, as it has values of
+ the Parameter *values*, not the full Parameter object.
"""
-
return OrderedDict(((p.name, p.value) for p in self.values()))
def dumps(self, **kws):
@@ -348,7 +373,8 @@ def dumps(self, **kws):
Returns
-------
- json string representation of Parameters
+ s : string
+ JSON string representation of Parameters.
See Also
--------
@@ -373,7 +399,8 @@ def loads(self, s, **kws):
Returns
-------
- None. Parameters are updated as a side-effect
+ ``None``
+ Parameters are updated as a side-effect
See Also
--------
@@ -425,7 +452,8 @@ def load(self, fp, **kws):
Returns
-------
- None. Parameters are updated as a side-effect
+ ``None``
+ Parameters are updated as a side-effect
See Also
--------
@@ -436,32 +464,29 @@ def load(self, fp, **kws):
class Parameter(object):
- """A Parameter is an object used to define a Fit Model.
- Attributes
- ----------
- name : str
- Parameter name.
- value : float
- The numerical value of the Parameter.
- vary : bool
- Whether the Parameter is fixed during a fit.
- min : float
- Lower bound for value (np.-inf means no lower bound).
- max : float
- Upper bound for value (np.inf means no upper bound).
- expr : str
- An expression specifying constraints for the parameter.
- stderr : float
- The estimated standard error for the best-fit value.
- correl : dict
- Specifies correlation with the other fitted Parameter after a fit.
- Of the form `{'decay': 0.404, 'phase': -0.020, 'frequency': 0.102}`
+ """A Parameter is an object that can be varied in a fit, or one of the
+ controlling variables in a model. It is a central component of lmfit,
+ and all minimization and modelling methods use Parameter objects.
+
+ A Parameter has a `name` attribute, and a scalar floating point
+ `value`. It also has a `vary` attribute that describes whether the
+ value should be varied during the minimization. Finite bounds can be
+ placed on the Parameter's value by setting its `min` and/or `max`
+ attributes. A Parameter can also have its value determined by a
+ mathematical expression of other Parameter values held in the `expr`
+ attribute. Additional attributes include `brute_step`, used as the
+ step size in a brute-force minimization, and `user_data`, reserved
+ exclusively for the user's needs.
+
+ After a minimization, a Parameter may also gain other attributes,
+ including `stderr` holding the estimated standard error in the
+ Parameter's value, and `correl`, a dictionary of correlation values
+ with other Parameters used in the minimization.
"""
-
- def __init__(self, name=None, value=None, vary=True,
- min=-inf, max=inf, expr=None, brute_step=None):
+ def __init__(self, name=None, value=None, vary=True, min=-inf, max=inf,
+ expr=None, brute_step=None, user_data=None):
"""
Parameters
----------
@@ -472,13 +497,25 @@ def __init__(self, name=None, value=None, vary=True,
vary : bool, optional
Whether the Parameter is fixed during a fit.
min : float, optional
- Lower bound for value (np.-inf means no lower bound).
+ Lower bound for value (-inf means no lower bound).
max : float, optional
- Upper bound for value (np.inf means no upper bound).
+ Upper bound for value (inf means no upper bound).
expr : str, optional
Mathematical expression used to constrain the value during the fit.
brute_step : float, optional
- Step size for grid points in brute force method.
+ Step size for grid points in brute force method (use `0` for no step size).
+ user_data : optional
+ User-definable extra attribute used for a Parameter.
+
+ Attributes
+ ----------
+ stderr : float
+ The estimated standard error for the best-fit value.
+ correl : dict
+ A dictionary of the correlation with the other fitted Parameter
+ after a fit, of the form::
+
+ {'decay': 0.404, 'phase': -0.020, 'frequency': 0.102}
"""
self.name = name
self._val = value
@@ -519,6 +556,30 @@ def set(self, value=None, vary=None, min=None, max=None, expr=None,
Step size for grid points in brute force method. To remove the step
size you must supply 0 ("zero").
+ Notes
+ -----
+
+ Each argument to `set()` has a default value of ``None``, which
+ leaves the current value of the attribute unchanged. Thus, to lift
+ a lower or upper bound, passing in ``None`` will not work. Instead,
+ you must set these to ``-np.inf`` or ``np.inf``, as with::
+
+ par.set(min=None) # leaves lower bound unchanged
+ par.set(min=-np.inf) # removes lower bound
+
+ Similarly, to clear an expression, pass an empty string (not
+ ``None``), as with::
+
+ par.set(expr=None) # leaves expression unchanged.
+ par.set(expr='') # removes expression
+
+ Explicitly setting a value or setting `vary=True` will also
+ clear the expression.
+
+ Finally, to clear the brute_step size, pass ``0``, not ``None``::
+
+ par.set(brute_step=None) # leaves brute_step unchanged
+ par.set(brute_step=0) # removes brute_step
"""
if value is not None:
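The sentinel conventions described in the Notes above (``None`` leaves an attribute unchanged, ``''`` clears an expression, ``0`` clears a brute_step, and infinite bounds lift a limit) can be sketched as a standalone update rule. This is a hypothetical illustration of the semantics, not the real lmfit code:

```python
from math import inf

# Standalone sketch of the set() sentinel rules described above:
# None -> leave unchanged; '' -> clear expr; 0 -> clear brute_step;
# -inf/inf -> lift a lower/upper bound. Not the lmfit implementation;
# the parameter is modeled as a plain dict for illustration.
def update_param(par, value=None, vary=None, min=None, max=None,
                 expr=None, brute_step=None):
    if value is not None:
        par['value'] = value
        par['expr'] = None          # explicit value clears any constraint
    if vary is not None:
        par['vary'] = vary
        if vary:
            par['expr'] = None      # vary=True also clears the constraint
    if min is not None:
        par['min'] = min            # pass -inf to remove a lower bound
    if max is not None:
        par['max'] = max            # pass inf to remove an upper bound
    if expr is not None:
        par['expr'] = expr or None  # '' clears the expression
    if brute_step is not None:
        par['brute_step'] = brute_step or None  # 0 clears the step size
    return par

p = {'value': 1.0, 'vary': True, 'min': 0.0, 'max': 10.0,
     'expr': 'other/2', 'brute_step': 0.1}
update_param(p, min=-inf)       # lifts the lower bound
update_param(p, expr='')        # removes the expression
update_param(p, brute_step=0)   # removes the brute_step
```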
diff --git a/lmfit/printfuncs.py b/lmfit/printfuncs.py
index 58e763c4e..d883cbf33 100644
--- a/lmfit/printfuncs.py
+++ b/lmfit/printfuncs.py
@@ -65,19 +65,27 @@ def fit_report(inpars, modelpars=None, show_correl=True, min_correl=0.1,
The report contains the best-fit values for the parameters and their
uncertainties and correlations.
- arguments
+ Parameters
----------
- inpars Parameters from fit or Minizer object returned from
- a fit.
- modelpars Optional Known Model Parameters [None]
- show_correl whether to show list of sorted correlations [True]
- min_correl smallest correlation absolute value to show [0.1]
- sort_pars If True, then fit_report will show parameter names
- sorted in alphanumerical order. If False, then the
- parameters will be listed in the order they were
- added to the Parameters dictionary. If sort_pars is
- callable, then this (one argument) function is used
- to extract a comparison key from each list element.
+ inpars : Parameters
+ Input Parameters from a fit, or a MinimizerResult returned from a fit.
+ modelpars : Parameters, optional
+ Known Model Parameters.
+ show_correl : bool, default ``True``
+ whether to show list of sorted correlations
+ min_correl : float, default 0.1
+ smallest correlation absolute value to show.
+ sort_pars : bool or callable, default ``False``
+ whether to show parameter names sorted in alphanumerical order. If
+ ``False``, then the parameters will be listed in the order they were
+ added to the Parameters dictionary. If callable, then this (one
+ argument) function is used to extract a comparison key from each
+ list element.
+
+ Returns
+ -------
+ text : string
+ Multi-line text of fit report.
"""
if isinstance(inpars, Parameters):
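The three `sort_pars` modes documented above (``False``, ``True``, or a callable key) map naturally onto Python's `sorted()`. A hypothetical sketch of the selection logic, not the actual `fit_report` code:

```python
# Sketch of how the sort_pars argument described above could select a
# parameter ordering; hypothetical, not the actual fit_report code.
def ordered_names(names, sort_pars=False):
    if callable(sort_pars):
        return sorted(names, key=sort_pars)   # user-supplied comparison key
    if sort_pars:
        return sorted(names)                  # alphanumerical order
    return list(names)                        # insertion order

names = ['wid', 'amp', 'cen']
print(ordered_names(names))                        # insertion order
print(ordered_names(names, sort_pars=True))        # alphanumerical
print(ordered_names(names, sort_pars=str.lower))   # case-insensitive key
```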
diff --git a/tests/test_NIST_Strd.py b/tests/test_NIST_Strd.py
index aec9a8f5b..fd053c1c7 100644
--- a/tests/test_NIST_Strd.py
+++ b/tests/test_NIST_Strd.py
@@ -9,7 +9,7 @@
HASPYLAB = False
for arg in sys.argv:
- if 'nose' in arg:
+ if 'nose' in arg or 'pytest' in arg:
HASPYLAB = False
if HASPYLAB:
diff --git a/tests/test_algebraic_constraint2.py b/tests/test_algebraic_constraint2.py
index ab64cef01..3557eae52 100644
--- a/tests/test_algebraic_constraint2.py
+++ b/tests/test_algebraic_constraint2.py
@@ -8,7 +8,7 @@
# Turn off plotting if run by nosetests.
WITHPLOT = True
for arg in sys.argv:
- if 'nose' in arg:
+ if 'nose' in arg or 'pytest' in arg:
WITHPLOT = False
if WITHPLOT: