Releases: JuliaAI/MLJ.jl
v0.16.6
v0.16.5
MLJ v0.16.5
Closed issues:
- Multiple Motivations for using same Mathematical Composite Functional notation f(g(h(x))) for language syntax to encode Custom objective/loss/cost functions (XGBoost), as to encode Distributed Parallel Workflow Pipeline sequence. (#488)
- Add link to TreeParzen from "Tuning Models" section of the manual, and the doc string for LatinHypercube (#690)
- Coercing exotic table types (#774)
- Remove cap on StatsBase in docs/Project.toml (#785)
- Improve docs around weight specification (#787)
- Bug: `evaluate!` crashes when called several times in a row when acceleration is used (#788)
- Make it possible to use deterministic metrics for models providing probabilistic prediction types (#789)
- Add deterministic metric to Getting Started evaluate! example (#790)
- Pipeline with XGBoost doesn't seem to serialize properly (#794)
- MLJ universe graphic in README.md transparency issue (#796)
- Re-export logpdf from Distributions (#797)
- Measures for Multi-Target models (#800)
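As a rough illustration of what #789 and #790 are about, the sketch below mixes a probabilistic measure with a deterministic one when evaluating a probabilistic classifier; the probabilistic predictions are reduced (e.g. to their mode) before the deterministic measure is applied. It assumes the DecisionTree.jl interface package is installed.

```julia
using MLJ

X, y = @load_iris    # small built-in classification dataset

# A probabilistic classifier (assumes the DecisionTree.jl interface package
# is available in the environment):
Tree = @load DecisionTreeClassifier pkg=DecisionTree verbosity=0
mach = machine(Tree(), X, y)

# Mix a probabilistic measure (log_loss) with a deterministic one (accuracy):
evaluate!(mach, resampling=CV(nfolds=5), measures=[log_loss, accuracy])
```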
Merged pull requests:
v0.16.4
MLJ v0.16.4
- Re-export `BinaryThresholdClassifier` from MLJModels, for wrapping binary probabilistic classifiers as deterministic classifiers using a user-specified threshold (sketched below)
- Extend Distributions compatibility to version 0.25
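A minimal sketch of using the threshold wrapper mentioned above. The constructor call and `threshold` keyword are assumptions based on the description, not confirmed API, and MLJLinearModels is assumed to be installed.

```julia
using MLJ

X, y = @load_crabs    # built-in binary classification dataset

# A probabilistic binary classifier to wrap (assumes MLJLinearModels is installed):
Logistic = @load LogisticClassifier pkg=MLJLinearModels verbosity=0

# Wrap it so it predicts labels directly, declaring the positive class only when
# its predicted probability exceeds 0.7 (keyword name is an assumption):
thresholded = BinaryThresholdClassifier(Logistic(), threshold=0.7)

mach = machine(thresholded, X, y)
fit!(mach)
yhat = predict(mach, X)    # class labels, not probability distributions
```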
Merged pull requests:
v0.16.3
v0.16.2
v0.16.1
MLJ v0.16.1
- (enhancement) Explicitly include MLJIteration, re-exporting its methods and types, so that `using MLJIteration` is no longer required to make use of the newly released package (#139); see the sketch below. For documentation, see here.
- Update MLJBase and import the new packages MLJOpenML and MLJSerialization, which provide functionality previously contained in older MLJBase versions. This should have no effect on the MLJ user (JuliaAI/MLJBase.jl#416)
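Since the MLJIteration methods and types are now re-exported, something like the following works with `using MLJ` alone. This is only a sketch: it assumes EvoTrees.jl is installed, and the control settings are illustrative.

```julia
using MLJ    # IteratedModel, Step, Patience, NumberLimit now re-exported by MLJ

X, y = @load_iris

# An iterative model to control (assumes EvoTrees.jl is installed):
EvoTree = @load EvoTreeClassifier pkg=EvoTrees verbosity=0

iterated = IteratedModel(model=EvoTree(),
                         resampling=Holdout(fraction_train=0.8),
                         measure=log_loss,
                         controls=[Step(2),            # add 2 iterations per control cycle
                                   Patience(3),        # stop after 3 non-improving cycles
                                   NumberLimit(100)])  # hard cap on control cycles

mach = machine(iterated, X, y)
fit!(mach)    # trains the wrapped model under the stopping criteria above
```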
Closed issues:
- Model wrapper for controlling iterative models. (#139)
- Restore broken ensemble testing (#683)
- No more symbols in CategoricalArrays (#691)
- Can't load KMeans from ParallelKMeans (#740)
- DecisionTreeClassifier does not appear to be a Supervised model. (#741)
- Unable to retrieve saved machines (#743)
- Need help in creating a MLJModelInterface.Model interface of a complex model (#744)
- Meaning of the various methods for unsupervised models? (#748)
- Load issue (#752)
- MultinomialNBClassifier not available. (#753)
- Evaluate with acceleration is only working on a single worker (#754)
- Add to docs for new implementations: `fit` should not mutate model hyper-parameters (#755)
Merged pull requests:
- Typo (#750) (@PallHaraldsson)
- Doc updates. No new release (#751) (@ablaom)
- Add iteration docs (#759) (@ablaom)
- Add MLJIteration documentation to the MLJ manual. No new release (#760) (@ablaom)
- Improvements to landing page of manual (#761) (@ablaom)
- Documentation updates. No new release (#762) (@ablaom)
- Add MLJIteration and re-export its constructors/types (#764) (@ablaom)
- Adaptations to further disintegration of MLJBase (#766) (@ablaom)
- For a 0.16.1 release (#767) (@ablaom)
v0.16.0
MLJ v0.16.0
Release notes:
Update MLJModels and MLJBase compatibility requirements. Includes some breaking changes. Most significantly, note that `@load` now returns a model type instead of an instance (see https://github.com/alan-turing-institute/MLJ.jl/blob/dev/docs/src/loading_model_code.md). For the full list of changes, see:
MLJBase 0.17.0 release notes
MLJModels 0.14.0 release notes
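To illustrate the breaking change (assuming the DecisionTree.jl interface package is in the environment):

```julia
using MLJ

# From MLJ 0.16, `@load` returns the model *type* rather than an instance:
Tree = @load DecisionTreeClassifier pkg=DecisionTree verbosity=0

# Instantiate it yourself, optionally overriding hyper-parameters:
tree = Tree(max_depth=3)
```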
Closed issues:
- Can't use @load within a module (#321)
- Add option to cache data at nodes of learning networks to avoid repeating operations (transform, predict, etc) (#702)
Merged pull requests:
v0.15.2
v0.15.1
MLJ v0.15.1
Closed issues:
- Unsupported const declaration (#715)
- TunedModel is not fitted with `measure=misclassification_rate` (#725)
Merged pull requests:
- Move from Travis CI to GitHub Actions CI (#717) (@DilumAluthge)
- Delete the `docs/src/_old` folder (#718) (@DilumAluthge)
- switch ci to github actions (#719) (@ablaom)
- Update citations to JOSS paper (#720) (@ablaom)
- removed type piracy of show for MersenneTwister (#722) (@ExpandingMan)
- No new release. Update docs about pkg needed for DecisionTreeClassifier (#723) (@ablaom)
- Add entry to tuning section of manual for Latin hyper cube (#724) (@ablaom)
- Add examples to tuning (#728) (@ablaom)
- For a 0.15.1 release (bump [compat] for MLJTuning) (#729) (@ablaom)
v0.15.0
MLJ v0.15.0
- Extend compat for MLJBase, MLJModels, MLJScientificTypes, CategoricalArrays. Includes a minor breaking change in behaviour of the `coerce` method; see https://github.com/alan-turing-institute/MLJBase.jl/releases/tag/v0.16.0.
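For context, typical `coerce` usage looks like the sketch below; the specific behaviour change is described in the linked MLJBase release notes, not here.

```julia
using MLJ

# A column table whose raw element types don't reflect the intended
# scientific interpretation:
X = (age    = [23, 45, 31],
     gender = ["m", "f", "m"])

# Declare the intended scientific types column by column:
Xc = coerce(X, :age => Continuous, :gender => Multiclass)

schema(Xc)    # inspect the resulting scientific types
```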
Closed issues:
- root: nothing does not appear to be a Supervised model. (#686)
- Review TagBot configurations for all MLJ repos (#692)
- less fidgety alternative to `@load` (#693)
- ERROR: MethodError: no method matching PCA() (#699)
- MLJDecisionTreeInterface.jl (#700)
- Just a doc typo (#704)
- Saving snapshots of a TunedModel as it trains (#708)
- All CV scores in a TunedModel (#709)
- UndefVarError when tuning a model (#711)
Merged pull requests:
- Shorten MLJ design paper following JOSS review (#676) (@ablaom)
- Design paper update. No release (#677) (@ablaom)
- Update Slack URL (#679) (@logankilpatrick)
- Doc update. No new release. (#680) (@ablaom)
- Paper references (#681) (@darenasc)
- Fix bibliography in paper. No new release (#682) (@ablaom)
- Suspend EnsembleModel testing (#684) (@ablaom)
- Update affiliations in paper. No new release (#688) (@ablaom)
- Add verbose affiliations to paper.md. No release (#694) (@ablaom)
- fife -> St Andrews. No release (#695) (@ablaom)
- Added latin_hypercube docs (#697) (@ludoro)
- Update model_search.md (#705) (@dsweber2)
- For 0.15 Release (#710) (@ablaom)
- Revert "Added latin_hypercube docs" (#712) (@ablaom)
- Extend compat for MLJBase, MLJModels, MLJScientificTypes, CategoricalArrays (#713) (@ablaom)