
Commit

update doc stable references for 0.1.4 release
speediedan committed May 24, 2022
1 parent b06f090 commit 04fad4a
Showing 9 changed files with 43 additions and 43 deletions.
12 changes: 6 additions & 6 deletions CHANGELOG.md
@@ -8,18 +8,18 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

### Added

- Added LR scheduler reinitialization functionality ([#2](https://github.com/speediedan/finetuning-scheduler/pull/2))
- Added advanced usage documentation
- Added advanced scheduling examples
- added notebook-based tutorial link
- LR scheduler reinitialization functionality ([#2](https://github.com/speediedan/finetuning-scheduler/pull/2))
- advanced usage documentation
- advanced scheduling examples
- notebook-based tutorial link
- enhanced cli-based example hparam logging among other code clarifications

### Changed

### Fixed

- addressed URI length limit for custom badge
- allow new deberta fast tokenizer conversion warning for transformers >= 4.19
### Changed

### Deprecated

## [0.1.3] - 2022-05-04
2 changes: 1 addition & 1 deletion CITATION.cff
@@ -6,7 +6,7 @@ date-released: 2022-02-04
authors:
- family-names: "Dale"
given-names: "Dan"
version: 0.1.3
version: 0.1.4
identifiers:
- description: "Finetuning Scheduler (all versions)"
type: doi
12 changes: 6 additions & 6 deletions README.md
@@ -7,7 +7,7 @@
______________________________________________________________________

<p align="center">
<a href="https://finetuning-scheduler.readthedocs.io/en/latest/">Docs</a> •
<a href="https://finetuning-scheduler.readthedocs.io/en/stable/">Docs</a> •
<a href="#Setup">Setup</a> •
<a href="#examples">Examples</a> •
<a href="#community">Community</a>
@@ -17,7 +17,7 @@ ______________________________________________________________________
[![PyPI Status](https://badge.fury.io/py/finetuning-scheduler.svg)](https://badge.fury.io/py/finetuning-scheduler)
![Conda (channel only)](https://img.shields.io/conda/vn/conda-forge/finetuning-scheduler?color=%23000080)\
[![codecov](https://codecov.io/gh/speediedan/finetuning-scheduler/branch/main/graph/badge.svg)](https://codecov.io/gh/speediedan/finetuning-scheduler)
[![ReadTheDocs](https://readthedocs.org/projects/finetuning-scheduler/badge/?version=latest)](https://finetuning-scheduler.readthedocs.io/en/latest/)
[![ReadTheDocs](https://readthedocs.org/projects/finetuning-scheduler/badge/?version=latest)](https://finetuning-scheduler.readthedocs.io/en/stable/)
[![DOI](https://zenodo.org/badge/455666112.svg)](https://zenodo.org/badge/latestdoi/455666112)
[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/speediedan/finetuning-scheduler/blob/master/LICENSE)

@@ -27,7 +27,7 @@ ______________________________________________________________________

<img width="300px" src="docs/source/_static/images/fts/fts_explicit_loss_anim.gif" alt="FinetuningScheduler explicit loss animation" align="right"/>

[FinetuningScheduler](https://finetuning-scheduler.readthedocs.io/en/latest/api/finetuning_scheduler.fts.html#finetuning_scheduler.fts.FinetuningScheduler) is simple to use yet powerful, offering a number of features that facilitate model research and exploration:
[FinetuningScheduler](https://finetuning-scheduler.readthedocs.io/en/stable/api/finetuning_scheduler.fts.html#finetuning_scheduler.fts.FinetuningScheduler) is simple to use yet powerful, offering a number of features that facilitate model research and exploration:

- easy specification of flexible finetuning schedules with explicit or regex-based parameter selection
- implicit schedules for initial/naive model exploration
@@ -103,7 +103,7 @@ from finetuning_scheduler import FinetuningScheduler
trainer = Trainer(callbacks=[FinetuningScheduler()])
```
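
As a minimal sketch (the schedule path below is illustrative), an explicit finetuning schedule can be supplied via the `ft_schedule` argument:

```python
from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

# sketch only: "my_model_ft_schedule.yaml" is a placeholder for a user-defined schedule file
trainer = Trainer(callbacks=[FinetuningScheduler(ft_schedule="my_model_ft_schedule.yaml")])
```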

Get started by following [the Finetuning Scheduler introduction](https://finetuning-scheduler.readthedocs.io/en/latest/index.html) which includes a [CLI-based example](https://finetuning-scheduler.readthedocs.io/en/latest/index.html#scheduled-finetuning-superglue) or by following the [notebook-based](https://pytorch-lightning.readthedocs.io/en/latest/notebooks/lightning_examples/finetuning-scheduler.html) Finetuning Scheduler tutorial.
Get started by following [the Finetuning Scheduler introduction](https://finetuning-scheduler.readthedocs.io/en/stable/index.html) which includes a [CLI-based example](https://finetuning-scheduler.readthedocs.io/en/stable/index.html#scheduled-finetuning-superglue) or by following the [notebook-based](https://pytorch-lightning.readthedocs.io/en/latest/notebooks/lightning_examples/finetuning-scheduler.html) Finetuning Scheduler tutorial.

______________________________________________________________________

@@ -112,7 +112,7 @@ ______________________________________________________________________
### Scheduled Finetuning For SuperGLUE

- [Notebook-based Tutorial](https://pytorch-lightning.readthedocs.io/en/latest/notebooks/lightning_examples/finetuning-scheduler.html)
- [CLI-based Tutorial](https://finetuning-scheduler.readthedocs.io/en/latest/#scheduled-finetuning-superglue)
- [CLI-based Tutorial](https://finetuning-scheduler.readthedocs.io/en/stable/#scheduled-finetuning-superglue)

______________________________________________________________________

@@ -140,7 +140,7 @@ To ensure maximum stability, the latest PyTorch Lightning patch release fully te

Finetuning Scheduler is developed and maintained by the community in close communication with the [PyTorch Lightning team](https://pytorch-lightning.readthedocs.io/en/latest/governance.html#leads). Thanks to everyone in the community for their tireless effort building and improving the immensely useful core PyTorch Lightning project.

PR's welcome! Please see the [contributing guidelines](https://finetuning-scheduler.readthedocs.io/en/latest/generated/CONTRIBUTING.html) (which are essentially the same as PyTorch Lightning's).
PR's welcome! Please see the [contributing guidelines](https://finetuning-scheduler.readthedocs.io/en/stable/generated/CONTRIBUTING.html) (which are essentially the same as PyTorch Lightning's).

______________________________________________________________________

6 changes: 3 additions & 3 deletions docs/source/advanced/lr_scheduler_reinitialization.rst
@@ -113,7 +113,7 @@ sanity-checked prior to training initiation.

Note that specifying LR scheduler reinitialization configurations is only supported for phases >= ``1``. This is because
for finetuning phase ``0``, the LR scheduler configuration will be the scheduler that you initiate your training session
with, usually via the ``configure_optimizers`` method of :external+pl:class:`~pytorch_lightning.core.module.LightningModule`.
with, usually via the ``configure_optimizers`` method of :external+pl:class:`~pytorch_lightning.core.lightning.LightningModule`.
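
As a minimal sketch (module name and hyperparameter values illustrative), that initial phase ``0`` configuration is typically supplied via the standard ``configure_optimizers`` hook, e.g. with a :external+torch:class:`~torch.optim.lr_scheduler.LinearLR` scheduler:

.. code-block:: python

    import torch
    import pytorch_lightning as pl


    class MyLightningModule(pl.LightningModule):  # illustrative subclass, training logic omitted
        def configure_optimizers(self):
            # phase 0 optimizer/LR scheduler configuration, used until any scheduled reinitialization
            optimizer = torch.optim.AdamW(self.parameters(), lr=1.0e-05)
            scheduler = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.1, total_iters=4)
            return {
                "optimizer": optimizer,
                "lr_scheduler": {"scheduler": scheduler, "name": "Initial_LR_Scheduler"},
            }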

.. tip::

@@ -233,7 +233,7 @@ could use:
name: Implicit_Reinit_LR_Scheduler
Note that an initial lr scheduler configuration should also still be provided per usual (again, typically via the
``configure_optimizers`` method of :external+pl:class:`~pytorch_lightning.core.module.LightningModule`) and the initial
``configure_optimizers`` method of :external+pl:class:`~pytorch_lightning.core.lightning.LightningModule`) and the initial
lr scheduler configuration can differ in lr scheduler type and configuration from the configuration specified in
:paramref:`~finetuning_scheduler.fts.FinetuningScheduler.reinit_lr_cfg` applied at each phase transition. Because the
same schedule is applied at each phase transition, the ``init_pg_lrs`` list is not supported in an implicit finetuning
@@ -277,7 +277,7 @@ training phases:
= ``1.0e-05``)

Phase ``0`` in :yellow-highlight:`yellow` (passed to our
:external+pl:class:`~pytorch_lightning.core.module.LightningModule` via the ``model``
:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` via the ``model``
definition in our :external+pl:class:`~pytorch_lightning.utilities.cli.LightningCLI` configuration) uses a
:external+torch:class:`~torch.optim.lr_scheduler.LinearLR` scheduler (defined in
``./config/advanced/fts_explicit_reinit_lr.yaml``) with the initial lr defined via the shared initial optimizer
8 changes: 4 additions & 4 deletions docs/source/index.rst
@@ -20,7 +20,7 @@ foundational model experimentation with flexible finetuning schedules. Training
If you're exploring using the :class:`~finetuning_scheduler.fts.FinetuningScheduler`, this is a great place
to start!
You may also find the `notebook-based tutorial <https://pytorchlightning.github.io/lightning-tutorials/notebooks/lightning_examples/finetuning-scheduler.html>`_
useful and for those using the :doc:`LightningCLI<cli/lightning_cli>`, there is a
useful and for those using the :doc:`LightningCLI<common/lightning_cli>`, there is a
:ref:`CLI-based<scheduled-finetuning-superglue>` example at the bottom of this introduction.

Setup
@@ -92,7 +92,7 @@ thawed/unfrozen parameter groups associated with each finetuning phase as desire
and executed in ascending order.

1. First, generate the default schedule to ``Trainer.log_dir``. It will be named after your
:external+pl:class:`~pytorch_lightning.core.module.LightningModule` subclass with the suffix
:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` subclass with the suffix
``_ft_schedule.yaml``.

.. code-block:: python
@@ -289,7 +289,7 @@ A demonstration of the scheduled finetuning callback
:class:`~finetuning_scheduler.fts.FinetuningScheduler` using the
`RTE <https://huggingface.co/datasets/viewer/?dataset=super_glue&config=rte>`_ and
`BoolQ <https://github.com/google-research-datasets/boolean-questions>`_ tasks of the
`SuperGLUE <https://super.gluebenchmark.com/>`_ benchmark and the :doc:`LightningCLI<cli/lightning_cli>`
`SuperGLUE <https://super.gluebenchmark.com/>`_ benchmark and the :doc:`LightningCLI<common/lightning_cli>`
is available under ``./fts_examples/``.

Since this CLI-based example requires a few additional packages (e.g. ``transformers``, ``sentencepiece``), you
@@ -446,7 +446,7 @@ Footnotes
:caption: Examples

Notebook-based Finetuning Scheduler tutorial <https://pytorchlightning.github.io/lightning-tutorials/notebooks/lightning_examples/finetuning-scheduler.html>
CLI-based Finetuning Scheduler tutorial <https://finetuning-scheduler.readthedocs.io/en/latest/#example-scheduled-finetuning-for-superglue>
CLI-based Finetuning Scheduler tutorial <https://finetuning-scheduler.readthedocs.io/en/stable/#example-scheduled-finetuning-for-superglue>

.. toctree::
:maxdepth: 1
4 changes: 2 additions & 2 deletions finetuning_scheduler/__about__.py
@@ -1,7 +1,7 @@
import time

_this_year = time.strftime("%Y")
__version__ = "0.1.3"
__version__ = "0.1.4"
__author__ = "Dan Dale"
__author_email__ = "danny.dale@gmail.com"
__license__ = "Apache-2.0"
@@ -31,7 +31,7 @@
Documentation
-------------
- https://finetuning-scheduler.readthedocs.io/en/stable/
- https://finetuning-scheduler.readthedocs.io/en/0.1.3/
- https://finetuning-scheduler.readthedocs.io/en/0.1.4/
"""

__all__ = ["__author__", "__author_email__", "__copyright__", "__docs__", "__homepage__", "__license__", "__version__"]
28 changes: 14 additions & 14 deletions finetuning_scheduler/fts.py
@@ -92,7 +92,7 @@ def __init__(
:ref:`LR Scheduler Reinitialization<explicit-lr-reinitialization-schedule>` for more complex
schedule configurations (including per-phase LR scheduler reinitialization). If a schedule is not
provided, will generate and execute a default finetuning schedule using the provided
:external+pl:class:`~pytorch_lightning.core.module.LightningModule`. See
:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule`. See
:ref:`the default schedule<index:The Default Finetuning Schedule>`. Defaults to ``None``.
max_depth: Maximum schedule depth to which the defined finetuning schedule should be executed. Specifying -1
or an integer > (number of defined schedule layers) will result in the entire finetuning schedule being
@@ -105,7 +105,7 @@
:class:`~finetuning_scheduler.fts_supporters.FTSCheckpoint`) checkpoint
before finetuning depth transitions. Defaults to ``True``.
gen_ft_sched_only: If ``True``, generate the default finetuning schedule to ``Trainer.log_dir`` (it will be
named after your :external+pl:class:`~pytorch_lightning.core.module.LightningModule` subclass with
named after your :external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` subclass with
the suffix ``_ft_schedule.yaml``) and exit without training. Typically used to generate a default
schedule that will be adjusted by the user before training. Defaults to ``False``.
epoch_transitions_only: If ``True``, use epoch-driven stopping criteria exclusively (rather than composing
@@ -186,8 +186,8 @@ def freeze_before_training(self, pl_module: "pl.LightningModule") -> None:
finetuning schedule.
Args:
pl_module (:external+pl:class:`~pytorch_lightning.core.module.LightningModule`): The target
:external+pl:class:`~pytorch_lightning.core.module.LightningModule` to freeze parameters of
pl_module (:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule`): The target
:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` to freeze parameters of
"""
self.freeze(modules=pl_module)

@@ -308,15 +308,15 @@ def setup(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule", stage: O
1. configure the :class:`~finetuning_scheduler.fts_supporters.FTSEarlyStopping`
callback (if relevant)
2. initialize the :attr:`~finetuning_scheduler.fts.FinetuningScheduler._fts_state`
3. freeze the target :external+pl:class:`~pytorch_lightning.core.module.LightningModule` parameters
3. freeze the target :external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` parameters
Finally, initialize the :class:`~finetuning_scheduler.fts.FinetuningScheduler`
training session in the training environment.
Args:
trainer (:external+pl:class:`~pytorch_lightning.trainer.trainer.Trainer`): The
:external+pl:class:`~pytorch_lightning.trainer.trainer.Trainer` object
pl_module (:external+pl:class:`~pytorch_lightning.core.module.LightningModule`): The
:external+pl:class:`~pytorch_lightning.core.module.LightningModule` object
pl_module (:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule`): The
:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` object
stage: The ``RunningStage.{SANITY_CHECKING,TRAINING,VALIDATING}``. Defaults to None.
Raises:
@@ -366,8 +366,8 @@ def on_fit_start(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule") -
Args:
trainer (:external+pl:class:`~pytorch_lightning.trainer.trainer.Trainer`): The
:external+pl:class:`~pytorch_lightning.trainer.trainer.Trainer` object
pl_module (:external+pl:class:`~pytorch_lightning.core.module.LightningModule`): The
:external+pl:class:`~pytorch_lightning.core.module.LightningModule` object
pl_module (:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule`): The
:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` object
Raises:
MisconfigurationException: If more than 1 optimizers are configured indicates a configuration error
@@ -461,8 +461,8 @@ def on_train_epoch_start(self, trainer: "pl.Trainer", pl_module: "pl.LightningMo
Args:
trainer (:external+pl:class:`~pytorch_lightning.trainer.trainer.Trainer`): The
:external+pl:class:`~pytorch_lightning.trainer.trainer.Trainer` object
pl_module (:external+pl:class:`~pytorch_lightning.core.module.LightningModule`): The
:external+pl:class:`~pytorch_lightning.core.module.LightningModule` object
pl_module (:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule`): The
:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` object
"""
# if resuming from a ckpt, we need to sync fts_state
if self._fts_state._resume_fit_from_ckpt:
@@ -503,8 +503,8 @@ def on_before_zero_grad(self, trainer: "pl.Trainer", pl_module: "pl.LightningMod
Args:
trainer (:external+pl:class:`~pytorch_lightning.trainer.trainer.Trainer`): The
:external+pl:class:`~pytorch_lightning.trainer.trainer.Trainer` object
pl_module (:external+pl:class:`~pytorch_lightning.core.module.LightningModule`): The
:external+pl:class:`~pytorch_lightning.core.module.LightningModule` object
pl_module (:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule`): The
:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` object
optimizer (:class:`~torch.optim.Optimizer`): The :class:`~torch.optim.Optimizer` to which parameter groups
will be configured and added.
"""
@@ -516,7 +516,7 @@ def on_train_end(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule") -
Args:
trainer (:external+pl:class:`~pytorch_lightning.trainer.trainer.Trainer`): _description_
pl_module (:external+pl:class:`~pytorch_lightning.core.module.LightningModule`): _description_
pl_module (:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule`): _description_
"""
assert self._fts_state._ft_sync_objects is not None
self.sync(self._fts_state._ft_sync_objects, self._fts_state._ft_sync_props)
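
Taken together, a minimal sketch (schedule path and depth values illustrative) of combining the constructor options documented above:

```python
from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

# sketch only: combines the documented ft_schedule, max_depth and restore_best options;
# the schedule path is a placeholder for a user-defined schedule
fts = FinetuningScheduler(
    ft_schedule="my_model_ft_schedule.yaml",
    max_depth=2,  # execute the schedule only up to depth 2
    restore_best=True,  # restore the best checkpoint before each depth transition
)
# alternatively, FinetuningScheduler(gen_ft_sched_only=True) writes the default schedule
# to Trainer.log_dir and exits without training
trainer = Trainer(callbacks=[fts])
```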
4 changes: 2 additions & 2 deletions finetuning_scheduler/fts_supporters.py
@@ -944,7 +944,7 @@ def save_schedule(schedule_name: str, layer_config: Dict, dump_loc: Union[str, o
Returns:
os.PathLike: The path to the generated schedule, by default ``Trainer.log_dir`` and named after the
:external+pl:class:`~pytorch_lightning.core.module.LightningModule` subclass in use with the suffix
:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` subclass in use with the suffix
``_ft_schedule.yaml``)
"""
dump_path = pathlib.Path(dump_loc)
@@ -967,7 +967,7 @@ def gen_ft_schedule(module: Module, dump_loc: Union[str, os.PathLike]) -> os.Pat
dump_loc: The directory to which the generated schedule (.yaml) should be written
Returns:
os.PathLike: The path to the generated schedule, by default ``Trainer.log_dir`` and named after the
:external+pl:class:`~pytorch_lightning.core.module.LightningModule` subclass in use with the suffix
:external+pl:class:`~pytorch_lightning.core.lightning.LightningModule` subclass in use with the suffix
``_ft_schedule.yaml``)
"""
# Note: This initial default finetuning schedule generation approach is intentionally simple/naive but is

0 comments on commit 04fad4a
