Release 0.13.0 #961

Merged 5 commits on May 17, 2023
13 changes: 11 additions & 2 deletions CHANGES.md
@@ -7,11 +7,19 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html)

## [Unreleased]

### Added

### Changed

### Fixed

## [0.13.0] - 2023-05-17

### Added
- Add support for compiled PyTorch modules using the `torch.compile` function, introduced in the [PyTorch 2.0 release](https://pytorch.org/get-started/pytorch-2.0/), which can greatly improve performance on new GPU architectures; to use it, initialize your net with the `compile=True` argument, and specify further compilation arguments using dunder notation, e.g. `compile__dynamic=True` (see the sketch after this list)
- Add a class, [`DistributedHistory`](https://skorch.readthedocs.io/en/latest/history.html#skorch.history.DistributedHistory), which should be used when training in a multi-GPU setting (#955)
- `SkorchDoctor`: a helper class that assists in understanding and debugging neural net training; see [this notebook](https://nbviewer.org/github/skorch-dev/skorch/blob/master/notebooks/Skorch_Doctor.ipynb) (#912)
- When using `AccelerateMixin`, it is now possible to prevent unwrapping of the modules by setting `unwrap_after_train=False` (#963)
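
To illustrate the new `compile` option: the snippet below is a minimal sketch, not code from this PR, assuming PyTorch >= 2.0 and skorch 0.13.0; `MyModule` and the data shapes are illustrative stand-ins. The dunder argument `compile__dynamic=True` is forwarded as `dynamic=True` to `torch.compile`.

```python
# Minimal sketch (not from the PR): initialize a net whose module is
# passed through torch.compile. Assumes PyTorch >= 2.0 is installed.
import torch
from torch import nn
from skorch import NeuralNetClassifier

class MyModule(nn.Module):
    def __init__(self, n_features=20, n_classes=2):
        super().__init__()
        self.dense = nn.Linear(n_features, n_classes)

    def forward(self, X):
        # return probabilities, as in the usual skorch classifier examples
        return torch.softmax(self.dense(X), dim=-1)

net = NeuralNetClassifier(
    MyModule,
    compile=True,           # wrap the module(s) with torch.compile
    compile__dynamic=True,  # dunder notation: forwarded as dynamic=True
    max_epochs=5,
)
```

Fitting then proceeds as usual, e.g. `net.fit(X, y)`, with the compiled module used transparently by the net.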

### Changed

@@ -22,7 +30,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html)
- `_get_param_names` returns a list instead of a generator so that subsequent
error messages return useful information instead of a generator `repr`
string (#925)
- Fixed a bug that caused modules not to be fully unwrapped at the end of training when using `AccelerateMixin`, which could prevent them from being pickled (#963); see the sketch after this list
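
To make the `AccelerateMixin` fix concrete: a hedged sketch, assuming the `accelerate` package is installed and reusing the `MyModule` definition from the sketch above; `AcceleratedNet` is an illustrative name, and the mixin pattern follows the skorch documentation.

```python
# Sketch of pickling a net after training with AccelerateMixin.
# Assumes `accelerate` is installed; reuses MyModule from the sketch above.
import pickle

import numpy as np
from accelerate import Accelerator
from skorch import NeuralNetClassifier
from skorch.hf import AccelerateMixin

class AcceleratedNet(AccelerateMixin, NeuralNetClassifier):
    """NeuralNetClassifier that trains through an accelerate Accelerator."""

X = np.random.randn(100, 20).astype(np.float32)
y = np.random.randint(0, 2, size=100)

net = AcceleratedNet(MyModule, accelerator=Accelerator(), max_epochs=2)
net.fit(X, y)

# With the fix, the modules are fully unwrapped once training finishes,
# so pickling works again:
pickle.dumps(net)
```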

## [0.12.1] - 2022-11-18

@@ -306,3 +314,4 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html)
[0.11.0]: https://github.com/skorch-dev/skorch/compare/v0.10.0...v0.11.0
[0.12.0]: https://github.com/skorch-dev/skorch/compare/v0.11.0...v0.12.0
[0.12.1]: https://github.com/skorch-dev/skorch/compare/v0.12.0...v0.12.1
[0.13.0]: https://github.com/skorch-dev/skorch/compare/v0.12.1...v0.13.0
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
0.12.1dev
0.13.0
10 changes: 0 additions & 10 deletions skorch/dataset.py
@@ -333,13 +333,3 @@ def __call__(self, dataset, y=None, groups=None):
def __repr__(self):
# pylint: disable=useless-super-delegation
return super(ValidSplit, self).__repr__()

# TODO remove in skorch 0.13
class CVSplit(ValidSplit):
def __init__(self, *args, **kwargs):
warnings.warn(
f"{self.__class__.__name__} is deprecated, use the new name ValidSplit instead",
DeprecationWarning,
stacklevel=2,
)
super().__init__(*args, **kwargs)
13 changes: 0 additions & 13 deletions skorch/net.py
@@ -1777,19 +1777,6 @@ def get_iterator(self, dataset, training=False):
mini-batches.

"""
# TODO: remove in skorch v0.13, see #835
if isinstance(dataset, DataLoader):
msg = (
"get_iterator was called with a DataLoader instance but it should be "
"called with a Dataset instead. Probably, you implemented a custom "
"run_single_epoch method. Its first argument is now a DataLoader, "
"not a Dataset. For more information, look here: "
"https://skorch.readthedocs.io/en/latest/user/FAQ.html"
"#migration-from-0-11-to-0-12. This will raise an error in skorch v0.13"
)
warnings.warn(msg, DeprecationWarning)
return dataset

if training:
kwargs = self.get_params_for('iterator_train')
iterator = self.iterator_train
8 changes: 0 additions & 8 deletions skorch/tests/test_dataset.py
@@ -918,11 +918,3 @@ def test_random_state_not_used_raises(self, valid_split_cls):

def test_random_state_and_float_does_not_raise(self, valid_split_cls):
valid_split_cls(0.5, random_state=0) # does not raise

def test_cvsplit_deprecation(self):
from skorch.dataset import CVSplit
with pytest.warns(
DeprecationWarning,
match="is deprecated, use the new name ValidSplit instead",
):
CVSplit()
38 changes: 0 additions & 38 deletions skorch/tests/test_net.py
@@ -3778,44 +3778,6 @@ def evaluation_step(self, batch, training=False):
y_pred = net.predict(X)
assert y_pred.shape == (100, 2)

# TODO: remove in skorch v0.13
def test_net_with_custom_run_single_epoch(self, net_cls, module_cls, data):
# See #835. We changed the API to initialize the DataLoader only once
# per epoch. This test is to make sure that code that overrides
# run_single_epoch still works for the time being.
from skorch.dataset import get_len

class MyNet(net_cls):
def run_single_epoch(self, dataset, training, prefix, step_fn, **fit_params):
# code as in skorch<=0.11
# first argument should now be an iterator, not a dataset
if dataset is None:
return

# make sure that the "dataset" (really the DataLoader) can still
# access the Dataset if needed
assert hasattr(dataset, 'dataset')

batch_count = 0
for batch in self.get_iterator(dataset, training=training):
self.notify("on_batch_begin", batch=batch, training=training)
step = step_fn(batch, **fit_params)
self.history.record_batch(prefix + "_loss", step["loss"].item())
batch_size = (get_len(batch[0]) if isinstance(batch, (tuple, list))
else get_len(batch))
self.history.record_batch(prefix + "_batch_size", batch_size)
self.notify("on_batch_end", batch=batch, training=training, **step)
batch_count += 1

self.history.record(prefix + "_batch_count", batch_count)

net = MyNet(module_cls, max_epochs=2)
X, y = data
with pytest.deprecated_call():
net.fit(X, y)
# does not raise
net.predict(X)


class TestNetSparseInput:
@pytest.fixture(scope='module')