v0.4.0
- (Enhancement) Update to MLJBase 0.5.0 and MLJModels 0.4.0. The
  following new scikit-learn models are thereby made available:
  - ScikitLearn.jl
    - SVM: `SVMClassifier`, `SVMRegressor`, `SVMNuClassifier`,
      `SVMNuRegressor`, `SVMLClassifier`, `SVMLRegressor`
    - Linear Models (regressors): `ARDRegressor`,
      `BayesianRidgeRegressor`, `ElasticNetRegressor`,
      `ElasticNetCVRegressor`, `HuberRegressor`, `LarsRegressor`,
      `LarsCVRegressor`, `LassoRegressor`, `LassoCVRegressor`,
      `LassoLarsRegressor`, `LassoLarsCVRegressor`,
      `LassoLarsICRegressor`, `LinearRegressor`,
      `OrthogonalMatchingPursuitRegressor`,
      `OrthogonalMatchingPursuitCVRegressor`,
      `PassiveAggressiveRegressor`, `RidgeRegressor`,
      `RidgeCVRegressor`, `SGDRegressor`, `TheilSenRegressor`
- (New feature) The macro `@pipeline` allows one to construct linear
  (non-branching) pipeline composite models with one line of code. One
  may include static transformations (ordinary functions) in the
  pipeline, as well as target transformations for the supervised case
  (when one component model is supervised).
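  As an illustrative sketch (the model names and keyword syntax here
  are hypothetical; consult `?@pipeline` for the exact form), such a
  pipeline might be declared as:

  ```julia
  using MLJ

  # Hypothetical sketch: a linear pipeline combining a static
  # transformation (an ordinary function), an unsupervised component,
  # a supervised component, and a target transformation.
  pipe = @pipeline MyPipe(X -> coerce(X, :age => Continuous),
                          hot = OneHotEncoder(),
                          knn = KNNRegressor(K=3),
                          target = UnivariateStandardizer())
  ```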
- (Breaking) Source nodes (type `Source`) now have a `kind` field,
  which is either `:input`, `:target` or `:other`, with `:input` the
  default value in the `source` constructor. If building a learning
  network, and the network is to be exported as a standalone model,
  then it is now necessary to tag the source nodes accordingly, as in
  `Xs = source(X)` and `ys = source(y, kind=:target)`.
- (Breaking) By virtue of the preceding change, the syntax for
  exporting a learning network is simplified. Do `?@from_network` for
  details. Also, one now uses `fitresults(N)` instead of
  `fitresults(N, X, y)` and `fitresults(N, X)` when exporting a
  learning network `N` "by hand"; see the updated manual for details.
- (Breaking) One must explicitly state if a supervised learning
  network being exported with `@from_network` is probabilistic by
  adding `is_probabilistic=true` to the macro expression. Before, this
  information was unreliably inferred from the network.
- (Enhancement) Add macro-free method for loading model code into an
  arbitrary module. Do `?load` for details.
- (Enhancement) `@load` now returns a model instance with default
  hyperparameters (instead of nothing), as in
  `tree_model = @load DecisionTreeRegressor`.
- (Breaking) `info("PCA")` now returns a named tuple, instead of a
  dictionary, of the properties of the model named "PCA".
- (Breaking) The list returned by `models(conditional)` is now a list
  of complete metadata entries (named tuples, as returned by `info`).
  An entry `proxy` appears in the list exactly when
  `conditional(proxy) == true`. Model query is simplified; for
  example, `models() do model model.is_supervised && model.is_pure_julia end`
  finds all pure-Julia supervised models.
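  Spelled out over several lines, that do-block query reads:

  ```julia
  using MLJ

  # Metadata entries for all pure-Julia supervised models:
  ms = models() do model
      model.is_supervised && model.is_pure_julia
  end
  ```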
- (Bug fix) Introduce new private methods to avoid relying on MLJBase
  type piracy (MLJBase#30).
- (Enhancement) If `composite` is a learning network exported as a
  model, and `m = machine(composite, args...)`, then `report(m)`
  returns the reports for each machine in the learning network, and
  similarly for `fitted_params(m)`.
- (Enhancement) `MLJ.table`, `vcat` and `hcat` are now overloaded for
  `AbstractNode`, so that they can immediately be used in defining
  learning networks. For example, if `X = source(rand(20, 3))` and
  `y = source(rand(20))`, then `MLJ.table(X)` and `vcat(y, y)` both
  make sense and define new nodes.
- (Enhancement) `pretty(X)` prints a pretty version of any table `X`,
  complete with types and scitype annotations. Do `?pretty` for
  options. A wrap of `pretty_table` from `PrettyTables.jl`.
- (Enhancement) `std` is re-exported from `Statistics`.
- (Enhancement) The manual and MLJ cheatsheet have been updated.
- Performance measures have been migrated to MLJBase, while the model
  registry and model load/search facilities have migrated to
  MLJModels. As the relevant methods are re-exported to MLJ, this is
  unlikely to affect many users.