Releases: tBuLi/symfit
symfit 0.4.3
Introduces the first global minimizer by wrapping scipy's differential evolution algorithm, and adds the possibility of chaining several minimizers.
Additionally, this fixes an important bug due to an incompatibility with sympy==1.2 by demanding sympy==1.1.1.
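As an illustration, here is a minimal sketch of chaining a global and a local minimizer. It assumes the minimizer keyword of Fit accepts a sequence of minimizer classes and that DifferentialEvolution and BFGS live in symfit.core.minimizers; the model and data are made up.
import numpy as np
from symfit import parameters, variables, Fit
from symfit.core.minimizers import DifferentialEvolution, BFGS

x, y = variables('x, y')
a, b = parameters('a, b')
# Differential evolution samples within bounds, so give the parameters finite ranges
a.min, a.max = 0, 10
b.min, b.max = 0, 10
model = {y: a * x**2 + b}

xdata = np.linspace(-1, 1, 50)
ydata = 3 * xdata**2 + 1 + np.random.normal(scale=0.05, size=xdata.shape)

# Chain a global search with a local refinement (assumed 0.4.3+ behaviour)
fit = Fit(model, x=xdata, y=ydata, minimizer=[DifferentialEvolution, BFGS])
fit_result = fit.execute()
print(fit_result)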
symfit 0.4.2
Bugfix release. The most important fixes include:
- Arguments no longer use inspect to find their name; it is recommended to provide names explicitly (see the sketch after this list). The old syntax is still supported, although it will raise a DeprecationWarning.
- numpy >= 1.12 is now demanded.
- symfit Argument objects are now fully picklable.
- ODEModels can now be integrated back in time too.
- ODEModels now have a __str__, and declaring a __str__ has been made mandatory by adding it as an abstractmethod on BaseModel.
- More minor bug fixes.
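A minimal sketch of the recommended explicit naming next to the old inspect-based style (illustrative; exact signatures assumed):
from symfit import Variable, Parameter

# Recommended: pass the name explicitly
x = Variable('x')
a = Parameter('a', value=1.0)

# Old style: the name is found via inspect; still supported but raises a DeprecationWarning
b = Parameter(value=1.0)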
symfit 0.4.0
Major overhaul of the internal API, making future development of symfit easier.
Additionally, covariance matrices are now calculated for all current fitting types, so uncertainties are now provided for all parameters. Gradients can now also be calculated for ODEModels, thanks to the addition of finite differences as a default means of calculating gradients.
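For example, a small sketch of reading off the parameter uncertainties and covariance matrix from a fit result (accessor names assumed from the current FitResults interface; data made up):
import numpy as np
from symfit import parameters, variables, Fit

x, y = variables('x, y')
a, b = parameters('a, b')
model = {y: a * x + b}

xdata = np.linspace(0, 10, 25)
ydata = 2.0 * xdata + 1.0 + np.random.normal(scale=0.1, size=xdata.shape)

fit = Fit(model, x=xdata, y=ydata)
fit_result = fit.execute()
# Best-fit value, its standard deviation, and the full covariance matrix
print(fit_result.value(a), fit_result.stdev(a))
print(fit_result.covariance_matrix)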
symfit 0.3.7
Bugfix release.
symfit 0.3.6
Apart from bug fixes, the most important change in this version is the addition of the contrib module. This will hold useful side-projects which depend on the symfit core but are not a part of it.
Currently, this contains a visual guess tool, which aims to make providing good guesses for your model a lot easier. Give it a shot!
symfit 0.3.5
This version of symfit introduces a lot of improvements to the Fit object. Global fitting now works better, and the Fit object takes constraints.
Apart from this, it features many minor improvements and bug fixes.
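A minimal sketch of passing constraints to Fit, assuming constraints are plain sympy relations handed to a constraints keyword (model and data made up):
import numpy as np
from sympy import Eq
from symfit import parameters, variables, Fit

x, y = variables('x, y')
a, b = parameters('a, b')
model = {y: a * x + b}

xdata = np.linspace(0, 10, 25)
ydata = 2.0 * xdata + 1.0

# Fit subject to the constraint a + b == 3
fit = Fit(model, x=xdata, y=ydata, constraints=[Eq(a + b, 3)])
fit_result = fit.execute()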
symfit 0.3.2!
What better way to start the new year than with a new version of symfit?
This version introduces two great new fitting types: LinearLeastSquares and NonLinearLeastSquares.
Up until now, all fitting in symfit was done numerically and iteratively. However, LinearLeastSquares is an implementation of the analytical solution to the least squares problem. Therefore, no more iterations: it's one step and you'll have your answer!
However, this only works for models that are linear in the parameters. For nonlinear models there is NonLinearLeastSquares, which works by approximating your model by its first-order Taylor expansion and then iteratively improving the fit using LinearLeastSquares.
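A minimal sketch of the analytical fit, assuming LinearLeastSquares is importable from the top-level symfit namespace and shares the Fit interface (data made up):
import numpy as np
from symfit import parameters, variables, LinearLeastSquares

x, y = variables('x, y')
a, b = parameters('a, b')
model = {y: a * x + b}  # linear in the parameters a and b

xdata = np.linspace(0, 10, 25)
ydata = 2.0 * xdata + 1.0 + np.random.normal(scale=0.1, size=xdata.shape)

# One linear-algebra step instead of an iterative search
fit = LinearLeastSquares(model, x=xdata, y=ydata)
fit_result = fit.execute()
print(fit_result)
For a model that is nonlinear in its parameters, the intended usage is to swap LinearLeastSquares for NonLinearLeastSquares in the sketch above.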
These objects are an exciting step towards my goal of implementing constrained fitting in a sexy and generic manner throughout symfit.
p.s. It also features some minor bug fixes.
symfit 0.3.0
After a long time, a brand new version. To throw around some prime marketing: "Nothing has changed - except everything".
The way models are handled has been completely redefined, allowing for great syntactic sugar such as:
- named dependent variables:
x, y = variables('x, y')
a = Parameter()  # in 0.3.0 the parameter's name is still found via inspect
model = {y: a * x**2}
- And assigning data by name:
fit = Fit(model, x=xdata, y=ydata, sigma_y=sigma)
Furthermore, this version finally comes with documentation of all the objects in the API, so symfit will finally be more usable and hopefully easier to contribute to.
One downside: Python 2 support has been dropped in this version. I don't have the time to support both versions of Python, and frankly Python 3 just offers some language features I don't want to do without anymore.
Bug fix release [!]
The previous version introduced fitting with weights. Although the correct values were found for the parameters, it turns out that the errors in the parameters were grossly overestimated. This was caused by a disconnect between the residuals returned by scipy.optimize.leastsq and what I assumed those residuals to be. This version fixes this problem, and also includes some other non-fundamental fixes.
Important: the errors in the parameters are now the same as those given by scipy.optimize.curve_fit with absolute_sigma=True, because this is always larger than with absolute_sigma=False and it's better to overestimate errors when in doubt. I would love some feedback on which of the two is correct when dealing with real data with measurement errors, so if you know anything on this topic, please join the discussion here: #22
I do not want to simply add a keyword equivalent to absolute_sigma, because I consider it to be unpythonic and very unclear as to what it does.
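For reference, here is how the two conventions compare directly in scipy (illustrative only, using a made-up straight-line dataset):
import numpy as np
from scipy.optimize import curve_fit

def line(x, a, b):
    return a * x + b

xdata = np.linspace(0, 10, 20)
ydata = 2 * xdata + 1 + np.random.normal(scale=0.5, size=xdata.shape)
sigma = np.full_like(ydata, 0.5)

# absolute_sigma=True treats sigma as absolute measurement errors;
# absolute_sigma=False rescales the covariance by the reduced chi-squared.
_, cov_abs = curve_fit(line, xdata, ydata, sigma=sigma, absolute_sigma=True)
_, cov_rel = curve_fit(line, xdata, ydata, sigma=sigma, absolute_sigma=False)
print(np.sqrt(np.diag(cov_abs)))  # parameter errors, absolute convention
print(np.sqrt(np.diag(cov_rel)))  # parameter errors, relative convention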
Error support
It is now possible to do least squares fitting with weights. Two keywords have been included to make this possible: weights and sigma. Only one of the two can be provided, and they must have the same shape as ydata.
Example:
import numpy as np
from symfit.api import Variable, Parameter, Fit

# Measured fall times and drop heights
t_data = np.array([1.4, 2.1, 2.6, 3.0, 3.3])
y_data = np.array([10, 20, 30, 40, 50])

# Standard deviation of each measured time, reduced by the number of measurements
sigma = 0.2
n = np.array([5, 3, 8, 15, 30])
sigma_t = sigma / np.sqrt(n)

# We now define our model: free fall, t = sqrt(2 * y / g)
y = Variable()
g = Parameter()
t_model = (2 * y / g)**0.5

# sigma must have the same shape as the dependent data (t_data here)
fit = Fit(t_model, y_data, t_data, sigma=sigma_t)
fit_result = fit.execute()