Version 0.9.0
This release of skorch contains a few minor improvements and some nice additions. As always, we fixed a few bugs and improved the documentation. Our learning rate scheduler now optionally logs learning rate changes to the history; moreover, it now allows the user to choose whether an update step should be made after each batch or each epoch.
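Here is a minimal sketch showing both scheduler options together; the toy data and module below are made up purely for illustration, and `event_lr` is simply the history key we chose to pass:

```python
import numpy as np
from torch import nn
from skorch import NeuralNetClassifier
from skorch.callbacks import LRScheduler

# toy data: 100 samples, 20 features, binary target
X = np.random.randn(100, 20).astype(np.float32)
y = np.random.randint(0, 2, 100).astype(np.int64)

net = NeuralNetClassifier(
    nn.Sequential(nn.Linear(20, 2), nn.LogSoftmax(dim=-1)),
    max_epochs=5,
    callbacks=[
        LRScheduler(
            policy='StepLR',        # halve the learning rate every 2 epochs
            step_size=2,
            gamma=0.5,
            step_every='epoch',     # or 'batch' to step after every batch
            event_name='event_lr',  # log the LR to net.history (PyTorch >= 1.4)
        ),
    ],
)
net.fit(X, y)
print(net.history[:, 'event_lr'])   # one recorded learning rate per epoch
```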
If you always longed for a metric that would just use whatever is defined by your criterion, look no further than `loss_scoring`. Also, skorch now allows you to easily change the kind of nonlinearity to apply to the module's output when `predict` and `predict_proba` are called, by passing the `predict_nonlinearity` argument.
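The following sketch shows both features on a toy classifier; apart from `loss_scoring` and `predict_nonlinearity`, everything here is illustrative filler:

```python
import numpy as np
from torch import nn
from skorch import NeuralNetClassifier
from skorch.scoring import loss_scoring

X = np.random.randn(100, 20).astype(np.float32)
y = np.random.randint(0, 2, 100).astype(np.int64)

net = NeuralNetClassifier(
    nn.Sequential(nn.Linear(20, 2)),   # raw logits, no final nonlinearity
    criterion=nn.CrossEntropyLoss,
    # 'auto' (the default) infers the nonlinearity from the criterion;
    # a callable or None can be passed instead to override or disable it
    predict_nonlinearity='auto',
    max_epochs=3,
)
net.fit(X, y)

proba = net.predict_proba(X)
print(proba.sum(axis=1))     # softmax was applied, so rows sum to 1

# loss_scoring computes whatever loss the criterion defines, via get_loss
print(loss_scoring(net, X, y))
```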
Besides these changes, we improved the customization potential of skorch. First of all, the criterion is now set to train or valid, depending on the phase -- this is useful if the criterion should act differently during training and validation. Next, we made it easier to add custom modules, optimizers, and criteria to your neural net; this should facilitate implementing architectures like GANs. Consult the docs for more on this. Conveniently, `net.save_params` can now persist arbitrary attributes, including those custom modules.
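Below is a rough sketch of registering a second module and persisting it. It is not a working GAN; the `MyDiscriminator` module, the underscore-suffixed attribute pattern, and the `f_discriminator` key reflect our reading of the docs rather than a canonical recipe, so do consult the documentation for the authoritative pattern:

```python
from torch import nn
from skorch import NeuralNet

class MyDiscriminator(nn.Module):
    """Hypothetical second module, standing in for e.g. a GAN discriminator."""
    def __init__(self, num_units=10):
        super().__init__()
        self.lin = nn.Linear(num_units, 1)

    def forward(self, x):
        return self.lin(x)

class MyNet(NeuralNet):
    def initialize_module(self):
        super().initialize_module()
        # modules set as underscore-suffixed attributes during
        # initialization are registered with the net and become
        # reachable through set_params
        self.discriminator_ = MyDiscriminator()
        return self

net = MyNet(
    module=nn.Sequential(nn.Linear(20, 10)),
    criterion=nn.MSELoss,
)
net.initialize()

# save_params can now persist the criterion and custom modules; we
# assume the f_<name> keys mirror the underscore-suffixed attributes
net.save_params(
    f_params='model.pt',
    f_criterion='criterion.pt',
    f_discriminator='discriminator.pt',
)
```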
As always, these improvements wouldn't have been possible without the community. Please keep asking questions, raising issues, and proposing new features. We are especially grateful to those community members, old and new, who contributed via PRs:
Aaron Berk
guybuk
kqf
Michał Słapek
Scott Sievert
Yann Dubois
Zhao Meng
Here is the full list of all changes:
Added
- Added the `event_name` argument for `LRScheduler` for optional recording of LR changes inside `net.history`. NOTE: supported only in PyTorch >= 1.4
- Made it easier to add custom modules or optimizers to a neural net class by automatically registering them where necessary and by making them available to `set_params`
- Added the `step_every` argument for `LRScheduler` to set whether the scheduler step should be taken on every epoch or on every batch
- Added the `scoring` module with the `loss_scoring` function, which computes the net's loss (using `get_loss`) on provided input data
- Added a parameter `predict_nonlinearity` to `NeuralNet` which allows users to control the nonlinearity to be applied to the module output when calling `predict` and `predict_proba` (#637, #661)
- Added the possibility to save the criterion with `save_params` and with checkpoint callbacks (see the sketch after this list)
- Added the possibility to save custom modules with `save_params` and with checkpoint callbacks
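As a quick illustration of the new criterion saving, here is a hedged sketch using the `Checkpoint` callback; the file names are arbitrary, the toy setup is for illustration only, and we assume `load_params` mirrors the save keys:

```python
import numpy as np
from torch import nn
from skorch import NeuralNetClassifier
from skorch.callbacks import Checkpoint

X = np.random.randn(100, 20).astype(np.float32)
y = np.random.randint(0, 2, 100).astype(np.int64)

cp = Checkpoint(
    f_params='best_params.pt',
    f_criterion='best_criterion.pt',   # newly supported in this release
    f_history='history.json',
)
net = NeuralNetClassifier(
    nn.Sequential(nn.Linear(20, 2), nn.LogSoftmax(dim=-1)),
    max_epochs=5,
    callbacks=[cp],
)
net.fit(X, y)

# restore later with the matching keys
net.load_params(f_params='best_params.pt', f_criterion='best_criterion.pt')
```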
Changed
- Removed support for schedulers with a `batch_step()` method in `LRScheduler`
- Raise `FutureWarning` in `CVSplit` when `random_state` is not used. Will raise an exception in a future release (#620)
- The behavior of method `net.get_params` changed to make it more consistent with sklearn: it will no longer return "learned" attributes like `module_`; therefore, functions like `sklearn.base.clone`, when called with a fitted net, will no longer return a fitted net but instead an uninitialized net; if you want a copy of a fitted net, use `copy.deepcopy` instead. `net.get_params` is used under the hood by many sklearn functions and classes, such as `GridSearchCV`, whose behavior may thus be affected by the change (#521, #527) (see the sketch after this list)
- Raise `FutureWarning` when using the `CyclicLR` scheduler, because the default behavior has changed from taking a step every batch to taking a step every epoch (#626)
- Set train/validation on the criterion if it's a PyTorch module (#621)
- Don't pass `y=None` to `NeuralNet.train_split` to enable the direct use of split functions without a positional `y` in their signatures. This is useful when working with unsupervised data (#605)
- `to_numpy` is now able to unpack dicts and lists/tuples (#657, #658)
- When using `CrossEntropyLoss`, softmax is now automatically applied to the output when calling `predict` or `predict_proba`
Fixed
- Fixed a bug where the `CyclicLR` scheduler would update during both training and validation rather than just during training
- Fixed a bug introduced by moving the `optimizer.zero_grad()` call outside of the train step function, making it incompatible with LBFGS and other optimizers that call the train step several times per batch (#636)
- Fixed pickling of the `ProgressBar` callback (#656)