prune changelog (Lightning-AI#1123)
Borda authored Mar 12, 2020
1 parent 5e013f6 commit 9255e54
Showing 1 changed file with 1 addition and 199 deletions: CHANGELOG.md
@@ -28,22 +28,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

## [0.7.1] - 2020-03-07

### Added

- _None_

### Changed

- _None_

### Deprecated

- _None_

### Removed

- _None_

### Fixed

- Fixed `print` issues and `data_loader` ([#1080](https://github.com/PyTorchLightning/pytorch-lightning/pull/1080))
@@ -209,10 +193,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Deprecated `tng_dataloader`

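A minimal sketch of the migration, assuming `train_dataloader` as the replacement name (as later releases settled on):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    # `tng_dataloader` is deprecated; define `train_dataloader` instead
    # (replacement name assumed from later releases).
    def train_dataloader(self):
        dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
        return DataLoader(dataset, batch_size=8)
```
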
### Removed

- _None_

### Fixed

- Fixed an issue where the number of batches was off by one during training
@@ -235,10 +215,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Changed default for `amp_level` to `O1`

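For illustration, the new default spelled out explicitly (the `use_amp` flag is an assumption about this era's `Trainer` API):

```python
from pytorch_lightning import Trainer

# `O1` (mixed precision) is now the default; shown explicitly here.
# `use_amp` is assumed from this era's API.
trainer = Trainer(use_amp=True, amp_level='O1')
```
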
### Deprecated

- _None_

### Removed

- Removed the `print_weights_summary` argument from `Trainer`
@@ -270,14 +246,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Disabled auto GPU loading when restoring weights to prevent out of memory errors
- Changed logging, early stopping and checkpointing to occur by default

### Deprecated

- _None_

### Removed

- _None_

### Fixed

- Fixed a bug with samplers that do not specify `set_epoch`
@@ -287,10 +255,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

## [0.5.0] - 2019-09-26

### Added

- _None_

### Changed

- Changed `data_batch` argument to `batch` throughout
@@ -300,14 +264,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Changed `gradient_clip` argument to `gradient_clip_val`
- Changed `add_log_row_interval` to `row_log_interval`

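A sketch of the renamed arguments (values are illustrative; the batch-index argument name is assumed from this era):

```python
from pytorch_lightning import Trainer

# Renamed hook argument: `data_batch` is now `batch`.
def training_step(self, batch, batch_nb):
    ...

# Renamed Trainer arguments (illustrative values).
trainer = Trainer(
    gradient_clip_val=0.5,  # was `gradient_clip`
    row_log_interval=10,    # was `add_log_row_interval`
)
```
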
### Deprecated

- _None_

### Removed

- _None_

### Fixed

- Fixed a bug with tensorboard logging in multi-gpu setup
@@ -329,14 +285,6 @@ memory utilization
- Changed GPU API to take integers as well (e.g. `gpus=2` instead of `gpus=[0, 1]`)
- All models are now loaded onto the CPU to avoid device mismatch and out-of-memory issues in PyTorch

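Both spellings select the same devices; the integer form is the new shorthand:

```python
from pytorch_lightning import Trainer

trainer = Trainer(gpus=2)       # new: use the first two GPUs
trainer = Trainer(gpus=[0, 1])  # still supported: explicit device indices
```
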
### Deprecated

- _None_

### Removed

- _None_

### Fixed

- Fixed a bug where data types that implement `.to` but not `.cuda` would not be properly moved onto the GPU
@@ -350,41 +298,17 @@ memory utilization
- Added `GradientAccumulationScheduler` callback which can be used to schedule changes to the number of accumulation batches
- Added option to skip the validation sanity check by setting `nb_sanity_val_steps = 0`

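A sketch combining both additions; the `{epoch: factor}` dict is assumed to drive the new callback via `accumulate_grad_batches`:

```python
from pytorch_lightning import Trainer

# From epoch 5 on, accumulate 4 batches per optimizer step; the dict is
# assumed to be handled by GradientAccumulationScheduler internally.
trainer = Trainer(
    accumulate_grad_batches={5: 4},
    nb_sanity_val_steps=0,  # skip the validation sanity check entirely
)
```
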
### Changed

- _None_

### Deprecated

- _None_

### Removed

- _None_

### Fixed

- Fixed a bug when setting `nb_sanity_val_steps = 0`

## [0.4.7] - 2019-08-24

### Added

- _None_

### Changed

- Changed the default `val_check_interval` to `1.0`
- Changed defaults for `nb_val_batches`, `nb_tng_batches` and `nb_test_batches` to 0

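For illustration, a fractional interval still works if the old cadence is wanted:

```python
from pytorch_lightning import Trainer

# Default is now 1.0 (validate once per epoch); fractions still work,
# e.g. check validation twice per epoch:
trainer = Trainer(val_check_interval=0.5)
```
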
### Deprecated

- _None_

### Removed

- _None_

### Fixed

- Fixed a bug where the full validation set was used despite setting `val_percent_check`
@@ -402,18 +326,6 @@ memory utilization
- Added support for data to be given as a `dict` or `list` with a single GPU
- Added support for `configure_optimizers` to return a single optimizer, two lists (optimizers and schedulers), or a single list

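A sketch of the three accepted return forms:

```python
import torch

# The three return forms accepted by `configure_optimizers` (sketch).
def configure_optimizers(self):
    opt = torch.optim.Adam(self.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10)

    return [opt], [sched]  # two lists: optimizers and schedulers
    # return opt           # or: a single optimizer
    # return [opt]         # or: a single list of optimizers
```
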
### Changed

- _None_

### Deprecated

- _None_

### Removed

- _None_

### Fixed

- Fixed a bug where returning just an optimizer list (i.e. without schedulers) from `configure_optimizers` would throw an `Exception`
@@ -424,22 +336,6 @@ memory utilization

- Added `optimizer_step` method that can be overridden to change the standard optimizer behaviour

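A sketch of an override, e.g. for learning-rate warm-up (argument names follow the 0.4.x-era convention and changed in later releases; `self.trainer.global_step` is assumed available):

```python
# Warm up the learning rate over the first 500 steps, then step normally.
def optimizer_step(self, epoch_nb, batch_nb, optimizer, optimizer_i):
    if self.trainer.global_step < 500:
        scale = (self.trainer.global_step + 1) / 500.0
        for pg in optimizer.param_groups:
            pg["lr"] = scale * 1e-3  # illustrative base learning rate

    optimizer.step()
    optimizer.zero_grad()
```
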
### Changed

- _None_

### Deprecated

- _None_

### Removed

- _None_

### Fixed

- _None_

## [0.4.4] - 2019-08-12

### Added
@@ -452,85 +348,29 @@ memory utilization
- `validation_step` and `val_dataloader` are now optional
- `lr_scheduler` is now activated after epoch

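A minimal train-only module (sketch; hook and loader names follow later releases):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

# No `validation_step`/`val_dataloader` needed any more.
class TrainOnlyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(10, 1)

    def training_step(self, batch, batch_nb):
        x, y = batch
        return {"loss": torch.nn.functional.mse_loss(self.layer(x), y)}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())

    def train_dataloader(self):
        dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
        return DataLoader(dataset, batch_size=8)
```
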
### Deprecated

- _None_

### Removed

- _None_

### Fixed

- Fixed a bug where a warning would be shown when using `lr_scheduler` in `torch>1.1.0`
- Fixed a bug where an `Exception` would be thrown when using `torch.DistributedDataParallel` without a `DistributedSampler`; this now throws a `Warning` instead

## [0.4.3] - 2019-08-10

### Added

- _None_

### Changed

- _None_

### Deprecated

- _None_

### Removed

- _None_

### Fixed

- Fixed a bug where accumulating gradients would scale the loss incorrectly

## [0.4.2] - 2019-08-08

### Added

- _None_

### Changed

- Changed install requirement to `torch==1.2.0`

### Deprecated

- _None_

### Removed

- _None_

### Fixed

- _None_

## [0.4.1] - 2019-08-08

### Added

- _None_

### Changed

- Changed install requirement to `torch==1.1.0`

### Deprecated

- _None_

### Removed

- _None_

### Fixed

- _None_

## [0.4.0] - 2019-08-08

### Added
@@ -542,10 +382,6 @@ memory utilization

- Changed `training_step` and `validation_step`; outputs will no longer be automatically reduced

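Since outputs are no longer reduced automatically, aggregation becomes the model's job; a sketch using `validation_end`, the aggregation hook of this era:

```python
import torch

# Aggregate per-batch outputs manually (keys are illustrative).
def validation_end(self, outputs):
    avg_loss = torch.stack([o["val_loss"] for o in outputs]).mean()
    return {"avg_val_loss": avg_loss}
```
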
### Deprecated

- _None_

### Removed

- Removed need for `Experiment` object in `Trainer`
@@ -554,49 +390,15 @@ memory utilization

- Fixed issues with reducing outputs from generative models (such as images and text)

## [0.3.6.1] - 2019-07-27

### Added

- _None_

### Changed

- _None_

### Deprecated

- _None_

### Removed

- _None_

### Fixed

- Fixed a bug where the `Experiment` object was not process-safe, potentially causing logs to be overwritten

## [0.3.6] - 2019-07-25

### Added

- Added a decorator to do lazy data loading internally

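A sketch of the decorator, assuming it is exposed as `pl.data_loader` as in examples from this period:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LazyModel(pl.LightningModule):
    # Builds the loader on first access, then caches it
    # (`pl.data_loader` name assumed from this era).
    @pl.data_loader
    def tng_dataloader(self):
        dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
        return DataLoader(dataset, batch_size=8)
```
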
### Changed

- _None_

### Deprecated

- _None_

### Removed

- _None_

### Fixed

- _None_
- Fixed a bug where the `Experiment` object was not process-safe, potentially causing logs to be overwritten

## [0.3.5] - 2019-MM-DD

