Releases · JuliaAI/MLJBase.jl
v0.20.12
MLJBase v0.20.12
- (enhancement) Add support for model operations (such as `transform`) that return extra information for merging into machine reports (#806)
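A minimal sketch of the implementation side, assuming the mechanism behind #806 is the `MLJModelInterface.reporting_operations` trait together with operations returning `(output, report)` pairs; the `Thresholder` model below is invented for illustration:

```julia
import MLJModelInterface as MMI

# Hypothetical transformer whose `transform` also returns information
# destined for the machine's report (trait name and return convention
# are assumptions about the #806 mechanism):
mutable struct Thresholder <: MMI.Unsupervised
    threshold::Float64
end

MMI.fit(::Thresholder, verbosity, X) = (nothing, nothing, NamedTuple())

# Return an (output, report) pair; the report part is merged into the
# machine's report whenever the operation is called:
function MMI.transform(model::Thresholder, fitresult, X)
    Xout = [x > model.threshold for x in X]
    return Xout, (n_above_threshold = count(Xout),)
end

# Declare which operations return such pairs:
MMI.reporting_operations(::Type{<:Thresholder}) = (:transform,)
```

Bound to data in a machine, `transform(mach, Xnew)` would then return just the transformed output, with the extra named tuple surfacing in `report(mach)`, assuming that is the merging behaviour the note describes.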
v0.20.11
v0.20.10
MLJBase v0.20.10
Closed issues:
- Incorrect display of one-dimensional range in case of scaling function (#797)
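For context, a hedged sketch of the kind of object concerned, a one-dimensional numeric range with a scale attached; the parameter name `:lambda` is arbitrary, and acceptance of a function-valued `scale` is inferred from the issue title rather than verified against this release:

```julia
using MLJBase

# One-dimensional numeric ranges; the second carries a scaling function,
# whose display is what issue #797 reported as incorrect:
r1 = range(Float64, :lambda, lower=1e-3, upper=1e3, scale=:log10)
r2 = range(Float64, :lambda, lower=0.0, upper=3.0, scale=x -> 10^x)
```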
v0.20.9
v0.20.8
MLJBase v0.20.8
- (enhancement) Add support for feature importances (#798) @OkonSamuel
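A hedged sketch of how a model might hook into this, assuming the relevant interface points are a `reports_feature_importances` trait and a `feature_importances` method; the `MeanRegressor` model is a toy invented so the snippet is self-contained:

```julia
using MLJBase            # supplies the full data interface used by the toy model
import MLJModelInterface as MMI
import Tables

# Toy regressor predicting the training-target mean; it ignores the features,
# so its "importances" are all zero:
mutable struct MeanRegressor <: MMI.Deterministic end

function MMI.fit(::MeanRegressor, verbosity, X, y)
    fitresult = sum(y) / length(y)
    report = (features = collect(Tables.columnnames(Tables.columns(X))),)
    return fitresult, nothing, report
end

MMI.predict(::MeanRegressor, fitresult, Xnew) = fill(fitresult, MMI.nrows(Xnew))

# Opt in and return `feature => importance` pairs (names assumed, per #798):
MMI.reports_feature_importances(::Type{<:MeanRegressor}) = true
MMI.feature_importances(::MeanRegressor, fitresult, report) =
    [ftr => 0.0 for ftr in report.features]
```

On the user side, importances for a fitted machine would then presumably be retrieved with `feature_importances(mach)`.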
v0.20.7
v0.20.6
MLJBase v0.20.6
- (enhancement) Support multithreaded training of learning networks. Call as in `fit!(node, acceleration=CPUThreads())` (see the sketch after this list) (#785) @olivierlabayle
- (enhancement) Create an interface point for specifying the `acceleration` mode when "exporting" a learning network as a new model type, by supporting `acceleration` as a keyword argument of the `return!` method (#785) @olivierlabayle
- (enhancement) Add `acceleration` and `cache` fields to the `Stack` type (#785) @olivierlabayle
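A self-contained sketch of the multithreaded call; the `Shift` transformer exists only so the network has machines to train, and the availability of `CPUThreads` (from ComputationalResources.jl) via MLJBase is assumed:

```julia
using MLJBase
import MLJModelInterface as MMI

# Stand-in transformer, defined only to make the network below trainable:
mutable struct Shift <: MMI.Unsupervised
    offset::Float64
end
MMI.input_scitype(::Type{<:Shift}) = AbstractVector{MMI.Continuous}
MMI.fit(::Shift, verbosity, X) = (nothing, nothing, NamedTuple())
MMI.transform(model::Shift, fitresult, X) = X .+ model.offset

X = rand(100)

# A two-machine learning network:
Xs = source(X)
mach1 = machine(Shift(1.0), Xs)
W1 = transform(mach1, Xs)
mach2 = machine(Shift(-1.0), W1)
W2 = transform(mach2, W1)

# Train every machine in the network, distributing the work across threads:
fit!(W2, acceleration=CPUThreads())
```

Per the notes above, the same `acceleration` option can be passed to `return!` when exporting such a network as a standalone model type, and appears as a field of `Stack`.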
v0.20.5
v0.20.4
MLJBase v0.20.4
- Bump compat for the LossFunctions.jl dependency.
Merged pull requests:
- Fix typo (#771) (@KronosTheLate)
- Bump LossFunctions.jl (#773) (@juliohm)
- bump 0.20.4 (#774) (@OkonSamuel)
- For 0.20.4 release (#776) (@OkonSamuel)
v0.20.3
MLJBase v0.20.3
- Add a standard error column to the display of `PerformanceEvaluation` objects (as returned by `evaluate!`/`evaluate`) (#766) @rikhuijzer
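A hedged sketch that produces such a `PerformanceEvaluation`; the `MeanPredictor` model is a toy defined only to keep the snippet self-contained, and `evaluate`, `CV` and `rms` are taken to be exported by MLJBase at this version:

```julia
using MLJBase
import MLJModelInterface as MMI

# Toy deterministic regressor predicting the training-target mean:
mutable struct MeanPredictor <: MMI.Deterministic end
MMI.input_scitype(::Type{<:MeanPredictor}) = MMI.Table(MMI.Continuous)
MMI.target_scitype(::Type{<:MeanPredictor}) = AbstractVector{MMI.Continuous}
MMI.fit(::MeanPredictor, verbosity, X, y) =
    (sum(y) / length(y), nothing, NamedTuple())
MMI.predict(::MeanPredictor, fitresult, Xnew) = fill(fitresult, MMI.nrows(Xnew))

X = (x = rand(100),)
y = 2 .* X.x .+ 0.1 .* randn(100)

# The displayed PerformanceEvaluation now includes a standard-error column
# alongside the per-fold measurements:
evaluate(MeanPredictor(), X, y, resampling=CV(nfolds=6), measure=rms)
```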
Merged pull requests:
- Add std to show for `PerformanceEvaluation` (#766) (@rikhuijzer)
- Make running tests via `TestEnv` easier (#769) (@rikhuijzer)
- For a 0.20.3 release (#770) (@ablaom)