
Commit 9bc51f2: Merge branch 'master' into patch-2

carmocca authored Feb 1, 2021
2 parents 0ba6c9c + 6668ed7
Showing 10 changed files with 144 additions and 98 deletions.
129 changes: 79 additions & 50 deletions .github/CONTRIBUTING.md
@@ -223,60 +223,44 @@ We welcome any useful contribution! For your convenience here's a recommended wo

### Question & Answer

1. **How can I help/contribute?**
#### How can I help/contribute?

All help is extremely welcome - reporting bugs, fixing documentation, adding test cases, solving issues and preparing bug fixes. To solve some issues you can start with label [good first issue](https://github.com/PyTorchLightning/pytorch-lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) or chose something close to your domain with label [help wanted](https://github.com/PyTorchLightning/pytorch-lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22). Before you start to implement anything check that the issue description that it is clear and self-assign the task to you (if it is not possible, just comment that you take it and we assign it to you...).
All types of contributions are welcome - reporting bugs, fixing documentation, adding test cases, solving issues, and preparing bug fixes.
To get started with code contributions, look for issues marked with the label [good first issue](https://github.com/PyTorchLightning/pytorch-lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) or choose something close to your domain with the label [help wanted](https://github.com/PyTorchLightning/pytorch-lightning/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22). Before coding, make sure the issue description is clear and comment on the issue so that we can assign it to you (or simply self-assign if you can).

2. **Is there a recommendation for branch names?**
#### Is there a recommendation for branch names?

We do not rely on the name convention so far you are working with your own fork. Anyway it would be nice to follow this convention `<type>/<issue-id>_<short-name>` where the types are: `bugfix`, `feature`, `docs`, `tests`, ...
We recommend you follow the convention `<type>/<issue-id>_<short-name>`, where the types are `bugfix`, `feature`, `docs`, or `tests` (if you are working in your own fork, this is optional).
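
For illustration only (the issue number and short name below are made up), creating such a branch might look like:

```bash
# hypothetical example: a bugfix branch for issue #1234
git checkout -b bugfix/1234_fix-logger-close
```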

3. **How to rebase my PR?**
#### How to rebase my PR?

We recommend creating a PR in separate branch other than `master`, especially if you plan submitting several changes and do not want to wait until the first one is resolved (we can work on them in parallel).
We recommend creating your PR from a branch other than `master`, especially if you plan to submit several changes and do not want to wait until the first one is resolved (we can work on them in parallel).

First, make sure you have set [upstream](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/configuring-a-remote-for-a-fork) by running:

```bash
git remote add upstream https://github.com/PyTorchLightning/pytorch-lightning.git
```

You'll know it's set up right if you run `git remote -v` and see something similar to this:

```bash
origin https://github.com/{YOUR_USERNAME}/pytorch-lightning.git (fetch)
origin https://github.com/{YOUR_USERNAME}/pytorch-lightning.git (push)
upstream https://github.com/PyTorchLightning/pytorch-lightning.git (fetch)
upstream https://github.com/PyTorchLightning/pytorch-lightning.git (push)
```

Now you can update your master with upstream's master by running:

```bash
git fetch --all --prune
git checkout master
git merge upstream/master
```

Finally, check out your feature branch and rebase it onto `master` before pushing it up:

```bash
git checkout my-PR-branch
git rebase master
# follow git instructions to resolve conflicts
git push -f
```

Alternatively, you can rebase your feature branch directly onto `upstream/master` and push it:

```bash
git fetch --all --prune
git rebase upstream/master
# follow git instructions to resolve conflicts
git push -f
```
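
As an optional sanity check (not part of the original instructions), you can list the commits that are unique to your branch after rebasing:

```bash
# should show only your own commits, now sitting on top of upstream/master
git log --oneline upstream/master..my-PR-branch
```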

4. **How to add new tests**
#### How to add new tests?

We use [pytest](https://docs.pytest.org/en/stable/) in PyTorch Lightning.

@@ -293,19 +277,16 @@ Here is the process to create a new test

```python
# TEST SHOULD BE IN YOUR FILE: tests/..../...py
# TEST CODE TEMPLATE

# RUN OUR TEST WITH: pytest tests/..../...py::test_explain_what_is_being_tested --verbose --capture=no


# [OPTIONAL] pytest decorator
# @pytest.mark.skipif(not torch.cuda.is_available(), reason="test requires GPU machine")
def test_explain_what_is_being_tested(tmpdir):
"""
Test description about text reason to be
"""

# os.environ["PL_DEV_DEBUG"] = '1' optional. When activated, you can use internal trainer.dev_debugger
# os.environ["PL_DEV_DEBUG"] = '1' # [OPTIONAL] When activated, you can use internal trainer.dev_debugger

class ExtendedModel(BoringModel):
...
@@ -320,12 +301,60 @@ def test_explain_what_is_being_tested(tmpdir):
...
)
trainer.fit(model)
trainer.test()  # [OPTIONAL]

# assert the behaviour is correct.
assert ...
```
Run our/your test with:
```bash
python -m pytest tests/..../...py::test_explain_what_is_being_tested --verbose --capture=no
```
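
If you want to use the internal `trainer.dev_debugger` mentioned in the template, you can also set the variable from the command line (a minimal sketch, keeping the placeholder test path from above):

```bash
# equivalent to setting os.environ["PL_DEV_DEBUG"] = '1' inside the test
PL_DEV_DEBUG=1 python -m pytest tests/..../...py::test_explain_what_is_being_tested --verbose --capture=no
```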

#### How to contribute bugfixes/features?

Currently, we maintain separate streams/branches for bugfixes and features, with releases cut from the default branch (`master`).
Bugfixes should land in the `master` branch and features should land in `release/X.y-dev`.
This means that when you start a contribution and create a branch (named as described above), you should branch off `master` for a bugfix or off the future release's dev branch for a feature (see the sketch below).
When creating the PR, also make sure the target branch is set correctly; usually the starting (base) branch and the target branch are the same.
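
A minimal sketch (the issue numbers and short names are placeholders; the dev branch name follows the `release/1.2-dev` example used below):

```bash
# bugfix: branch off the default branch
git checkout master
git checkout -b bugfix/1234_short-name

# feature: branch off the future release's dev branch
git checkout release/1.2-dev
git checkout -b feature/5678_short-name
```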

_Note that this flow may change after the 1.2 release, as we will adjust our release strategy._

#### How to fix a PR with mixed base and target branches?

Sometimes you start your PR as a bug-fix but it turns out to be more of a feature (or the other way around).
Do not panic; the solution is straightforward.
All you need to do is the following two steps, in either order:
- Ask someone from Core to change the base/target branch to the correct one
- Rebase or cherry-pick your commits onto the correct base branch...

Let's walk through the git commands.
The sample case is moving a PR from `master` to `release/1.2-dev`, assuming your branch name is `my-branch`,
the last true `master` commit is `ccc111`, and your first commit is `mmm222`.
* **Cherry-picking** way
```bash
git checkout my-branch
# create a local backup of your branch
git checkout -b my-branch-backup
# reset your branch to the correct base
git reset release/1.2-dev --hard
# ACTION: this step is much easier to do with an IDE
# so open one and cherry-pick your last commits from `my-branch-backup`
# resolve any conflicts, as the new base may contain different code
# when all done, push back to the open PR
git push -f
```
* **Rebasing** way; see more about [`rebase --onto` usage](https://womanonrails.com/git-rebase-onto)
```bash
git checkout my-branch
# rebase your commits on the correct branch
git rebase --onto release/1.2-dev ccc111
# if there are no conflicts, this will simply succeed
# otherwise, resolve the conflicts by following the instructions in the terminal
# when all done, push back to the open PR
git push -f
```
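
Either way, you can verify locally that the PR now only contains your changes relative to the new base (an optional check, not from the original guide):

```bash
# inspect the diff the PR will show against the new base
git diff release/1.2-dev...my-branch --stat
```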


### Bonus Workflow Tip

6 changes: 2 additions & 4 deletions .github/workflows/docs-checks.yml
@@ -40,11 +40,9 @@ jobs:
${{ runner.os }}-pip-
- name: Install dependencies
env:
HOROVOD_BUILD_ARCH_FLAGS: "-mfma"
HOROVOD_WITHOUT_MXNET: 1
HOROVOD_WITHOUT_TENSORFLOW: 1
run: |
# remove Horovod from requirements
python -c "fname = 'requirements/extra.txt' ; lines = [line for line in open(fname).readlines() if not line.startswith('horovod')] ; open(fname, 'w').writelines(lines)"
# python -m pip install --upgrade --user pip
pip install --requirement requirements.txt --upgrade-strategy only-if-needed --find-links https://download.pytorch.org/whl/cpu/torch_stable.html --quiet
pip install --requirement requirements/extra.txt
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -21,6 +21,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Fixed

- Fixed `TensorBoardLogger` not closing `SummaryWriter` on `finalize` ([#5696](https://github.com/PyTorchLightning/pytorch-lightning/pull/5696))


- Fixed filtering of pytorch "unsqueeze" warning when using DP ([#5622](https://github.com/PyTorchLightning/pytorch-lightning/pull/5622))


@@ -29,6 +32,8 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

- Fixed a race condition in `ModelCheckpoint` when checking if a checkpoint file exists ([#5144](https://github.com/PyTorchLightning/pytorch-lightning/pull/5144))

- Removed unnecessary intermediate layers in Dockerfiles ([#5697](https://github.com/PyTorchLightning/pytorch-lightning/pull/5697))


## [1.1.6] - 2021-01-26

3 changes: 1 addition & 2 deletions README.md
@@ -31,7 +31,6 @@ Scale your models, not the boilerplate.**
[![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://join.slack.com/t/pytorch-lightning/shared_invite/zt-f6bl2l0l-JYMK3tbAgAmGRrlNr00f1A)
[![Discourse status](https://img.shields.io/discourse/status?server=https%3A%2F%2Fforums.pytorchlightning.ai)](https://forums.pytorchlightning.ai/)
[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/PytorchLightning/pytorch-lightning/blob/master/LICENSE)
[![Next Release](https://img.shields.io/badge/Next%20Release-Nov%2021-<COLOR>.svg)](https://shields.io/)

<!--
[![CodeFactor](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning/badge)](https://www.codefactor.io/repository/github/pytorchlightning/pytorch-lightning)
@@ -224,7 +223,7 @@ with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:

```python
class LitAutoEncoder(pl.LightningModule):
def training_step(self, batch, batch_idx, opt_idx):
def training_step(self, batch, batch_idx, optimizer_idx):
# access your optimizers with use_pl_optimizer=False. Default is True
(opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)

39 changes: 20 additions & 19 deletions dockers/base-conda/Dockerfile
@@ -59,21 +59,21 @@ RUN apt-get update -qq && \
rm -rf /root/.cache && \
rm -rf /var/lib/apt/lists/*

ENV PATH="/root/miniconda3/bin:$PATH"
ENV LD_LIBRARY_PATH="/root/miniconda3/lib:$LD_LIBRARY_PATH"
ENV CUDA_TOOLKIT_ROOT_DIR="/usr/local/cuda"

ENV HOROVOD_GPU_OPERATIONS=NCCL
ENV HOROVOD_WITH_PYTORCH=1
ENV HOROVOD_WITHOUT_TENSORFLOW=1
ENV HOROVOD_WITHOUT_MXNET=1
ENV HOROVOD_WITH_GLOO=1
ENV HOROVOD_WITHOUT_MPI=1
#ENV MAKEFLAGS="-j$(nproc)"
ENV MAKEFLAGS="-j1"
ENV TORCH_CUDA_ARCH_LIST="3.7;5.0;6.0;7.0;7.5"

ENV CONDA_ENV=lightning
ENV \
PATH="/root/miniconda3/bin:$PATH" \
LD_LIBRARY_PATH="/root/miniconda3/lib:$LD_LIBRARY_PATH" \
CUDA_TOOLKIT_ROOT_DIR="/usr/local/cuda" \
HOROVOD_GPU_OPERATIONS=NCCL \
HOROVOD_WITH_PYTORCH=1 \
HOROVOD_WITHOUT_TENSORFLOW=1 \
HOROVOD_WITHOUT_MXNET=1 \
HOROVOD_WITH_GLOO=1 \
HOROVOD_WITHOUT_MPI=1 \
# MAKEFLAGS="-j$(nproc)" \
MAKEFLAGS="-j1" \
TORCH_CUDA_ARCH_LIST="3.7;5.0;6.0;7.0;7.5" \
CONDA_ENV=lightning

COPY environment.yml environment.yml

# conda init
@@ -90,10 +90,11 @@ RUN conda create -y --name $CONDA_ENV python=${PYTHON_VERSION} pytorch=${PYTORCH
conda clean -ya && \
rm environment.yml

ENV PATH /root/miniconda3/envs/${CONDA_ENV}/bin:$PATH
ENV LD_LIBRARY_PATH="/root/miniconda3/envs/${CONDA_ENV}/lib:$LD_LIBRARY_PATH"
# if you want this environment to be the default one, uncomment the following line:
ENV CONDA_DEFAULT_ENV=${CONDA_ENV}
ENV \
PATH=/root/miniconda3/envs/${CONDA_ENV}/bin:$PATH \
LD_LIBRARY_PATH="/root/miniconda3/envs/${CONDA_ENV}/lib:$LD_LIBRARY_PATH" \
# if you want this environment to be the default one, uncomment the following line:
CONDA_DEFAULT_ENV=${CONDA_ENV}

COPY ./requirements/extra.txt requirements-extra.txt
COPY ./requirements/test.txt requirements-test.txt
29 changes: 15 additions & 14 deletions dockers/base-cuda/Dockerfile
@@ -32,11 +32,11 @@ ARG CMAKE_VERSION=3.18.4

SHELL ["/bin/bash", "-c"]
# https://techoverflow.net/2019/05/18/how-to-fix-configuring-tzdata-interactive-input-when-building-docker-images/
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/Prague

ENV PATH="$PATH:/root/.local/bin"
ENV CUDA_TOOLKIT_ROOT_DIR="/usr/local/cuda"
ENV \
DEBIAN_FRONTEND=noninteractive \
TZ=Europe/Prague \
PATH="$PATH:/root/.local/bin" \
CUDA_TOOLKIT_ROOT_DIR="/usr/local/cuda"

RUN apt-get update -qq && \
apt-get install -y --no-install-recommends \
@@ -68,15 +68,16 @@ RUN apt-get update -qq && \
rm -rf /root/.cache && \
rm -rf /var/lib/apt/lists/*

ENV HOROVOD_GPU_OPERATIONS=NCCL
ENV HOROVOD_WITH_PYTORCH=1
ENV HOROVOD_WITHOUT_TENSORFLOW=1
ENV HOROVOD_WITHOUT_MXNET=1
ENV HOROVOD_WITH_GLOO=1
ENV HOROVOD_WITHOUT_MPI=1
#ENV MAKEFLAGS="-j$(nproc)"
ENV MAKEFLAGS="-j1"
ENV TORCH_CUDA_ARCH_LIST="3.7;5.0;6.0;7.0;7.5"
ENV \
HOROVOD_GPU_OPERATIONS=NCCL \
HOROVOD_WITH_PYTORCH=1 \
HOROVOD_WITHOUT_TENSORFLOW=1 \
HOROVOD_WITHOUT_MXNET=1 \
HOROVOD_WITH_GLOO=1 \
HOROVOD_WITHOUT_MPI=1 \
# MAKEFLAGS="-j$(nproc)" \
MAKEFLAGS="-j1" \
TORCH_CUDA_ARCH_LIST="3.7;5.0;6.0;7.0;7.5"

COPY ./requirements.txt requirements.txt
COPY ./requirements/ ./requirements/
19 changes: 11 additions & 8 deletions dockers/base-xla/Dockerfile
@@ -25,8 +25,9 @@ SHELL ["/bin/bash", "-c"]

ARG CONDA_VERSION=4.7.12
# for skipping configurations
ENV DEBIAN_FRONTEND=noninteractive
ENV CONDA_ENV=lightning
ENV \
DEBIAN_FRONTEND=noninteractive \
CONDA_ENV=lightning

# show system information
RUN lsb_release -a && cat /etc/*-release
@@ -53,8 +54,9 @@ RUN apt-get update -qq && \
rm -rf /root/.cache && \
rm -rf /var/lib/apt/lists/*

ENV PATH="/root/miniconda3/bin:$PATH"
ENV LD_LIBRARY_PATH="/root/miniconda3/lib:$LD_LIBRARY_PATH"
ENV \
PATH="/root/miniconda3/bin:$PATH" \
LD_LIBRARY_PATH="/root/miniconda3/lib:$LD_LIBRARY_PATH"
COPY environment.yml environment.yml

RUN conda create -y --name $CONDA_ENV && \
@@ -67,10 +69,11 @@ RUN conda create -y --name $CONDA_ENV && \
conda clean -ya && \
rm environment.yml

ENV PATH /root/miniconda3/envs/${CONDA_ENV}/bin:$PATH
ENV LD_LIBRARY_PATH="/root/miniconda3/envs/${CONDA_ENV}/lib:$LD_LIBRARY_PATH"
# if you want this environment to be the default one, uncomment the following line:
ENV CONDA_DEFAULT_ENV=${CONDA_ENV}
ENV \
PATH=/root/miniconda3/envs/${CONDA_ENV}/bin:$PATH \
LD_LIBRARY_PATH="/root/miniconda3/envs/${CONDA_ENV}/lib:$LD_LIBRARY_PATH" \
# if you want this environment to be the default one, uncomment the following line:
CONDA_DEFAULT_ENV=${CONDA_ENV}

# Disable cache
RUN pip --version && \
2 changes: 1 addition & 1 deletion docs/source/new-project.rst
@@ -267,7 +267,7 @@ Now you own the train loop!

.. code-block:: python
def training_step(self, batch, batch_idx, opt_idx):
def training_step(self, batch, batch_idx, optimizer_idx):
# access your optimizers with use_pl_optimizer=False. Default is True
(opt_a, opt_b, opt_c) = self.optimizers(use_pl_optimizer=True)
1 change: 1 addition & 0 deletions pytorch_lightning/loggers/tensorboard.py
@@ -238,6 +238,7 @@ def save(self) -> None:
@rank_zero_only
def finalize(self, status: str) -> None:
self.experiment.flush()
self.experiment.close()
self.save()

@property
9 changes: 9 additions & 0 deletions tests/loggers/test_tensorboard.py
@@ -265,3 +265,12 @@ def configure_optimizers(self):

mock_count_steps = [m[2]["step"] for m in mock_log_metrics.mock_calls if "count_step" in m[2]["metrics"]]
assert model._indexes == mock_count_steps


@mock.patch('pytorch_lightning.loggers.tensorboard.SummaryWriter')
def test_tensorboard_finalize(summary_writer, tmpdir):
""" Test that the SummaryWriter closes in finalize. """
logger = TensorBoardLogger(save_dir=tmpdir)
logger.finalize("any")
summary_writer().flush.assert_called()
summary_writer().close.assert_called()
