PyTorch implementation of Neural Context Flows for Meta-Learning of Dynamical Systems, accepted at ICLR 2025.
Warning
This implementation is not optimal! We observed significant slowdowns when performing contextual self-modulation, mostly related to the following:
(1) the forward-mode Jacobian-Vector Product (JVP) primitive used to facilitate the Taylor expansions;
(2) function transformations like `torch.vmap` and `torch.compile` proving too restrictive.
We are grateful for any pull request addressing these issues. In the meantime, we recommend the optimized JAX implementation available at this link.
Neural Context Flow (NCF) is a framework for learning dynamical systems that can adapt to different environments/contexts, making it particularly valuable for scientific machine learning applications where the underlying system dynamics may vary across physical parameter values.
NCF is powered by a regularization mechanism called contextual self-modulation. This inductive bias performs a Taylor expansion of the vector field about the context vectors, yielding several candidate trajectories. While the resulting training loss may be higher than that of a naive Neural ODE, we observe lower losses at adaptation time.
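For intuition, here is a minimal first-order sketch of that mechanism, built on the `torch.func.jvp` and `torch.vmap` primitives mentioned in the warning above. The network, dimensions, and function names are illustrative assumptions, not the package's actual API:

```python
import torch
import torch.nn as nn
from torch.func import jvp, vmap

# Illustrative context-conditioned vector field f(x, ctx) -> dx/dt
# (a toy network; the real model architecture lives in the package).
net = nn.Sequential(nn.Linear(2 + 4, 64), nn.Tanh(), nn.Linear(64, 2))

def vector_field(x, ctx):
    return net(torch.cat([x, ctx], dim=-1))

def taylor_candidate(x, ctx_e, ctx_j):
    """First-order Taylor expansion of f(x, .) about a neighbouring context
    ctx_j, evaluated at environment e's context ctx_e. Each ctx_j yields one
    candidate vector field, hence one candidate trajectory."""
    f_j, df = jvp(lambda c: vector_field(x, c), (ctx_j,), (ctx_e - ctx_j,))
    return f_j + df

x = torch.randn(2)           # current state
ctx_e = torch.randn(4)       # context of the environment being fitted
all_ctx = torch.randn(8, 4)  # context pool (here, 8 training environments)

# Batch the expansion over the context pool -- the jvp/vmap composition
# whose overhead the warning above refers to.
candidates = vmap(lambda cj: taylor_candidate(x, ctx_e, cj))(all_ctx)
print(candidates.shape)      # torch.Size([8, 2])
```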
The NCF package is built around four extensible modules:
- a `DataLoader`: to load the dynamics datasets
- a `Learner`: a model, a context, and the loss function
- a `Trainer`: the training and adaptation algorithms
- a `VisualTester`: to test and visualize the results
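For orientation, a hypothetical sketch of how these modules might be wired together; the names and signatures below are illustrative assumptions, not the package's documented API (see `examples/lotka/main.py` for actual usage):

```python
# Illustrative wiring of the four modules -- hypothetical names/signatures,
# not the package's actual API; consult examples/lotka/main.py instead.
dataloader = DataLoader("data/train.npz")    # dynamics trajectories grouped by environment
learner = Learner(model, contexts, loss_fn)  # the model, its contexts, and the loss function
trainer = Trainer(learner, dataloader)       # runs the training/adaptation algorithms
trainer.train()                              # meta-train across the training environments
trainer.adapt(adapt_dataloader)              # adapt the contexts to unseen environments
VisualTester(trainer).visualize()            # test and visualize the results
```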
To run an experiment, follow the steps below:
- Install the package: `pip install -e .`
- Navigate to the problem of interest in the `examples` folder.
- Download the data from Gen-Dynamics and place it in the `data` folder.
- Set its hyperparameters, then run the `main.py` script to both train and adapt the model to various environments. It can be run in either notebook or script mode. We recommend using `nohup` to log the results: `nohup python main.py > nohup.log &`
- Once trained, move to the corresponding run folder saved in `runs`. Toggle the `train` flag in `main.py` to `False`, then rerun `main.py` to perform additional experiments such as uncertainty estimation and interpretability.
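The toggle in the last step is typically just a boolean near the top of the script; a hypothetical snippet (the actual variable layout in `main.py` may differ):

```python
# Hypothetical flag layout in main.py; the real script may organize this differently.
train = True     # meta-train and adapt, saving artifacts to a new folder under runs/
# train = False  # after training: reload the saved run for uncertainty estimation,
#                # interpretability, and other post-hoc experiments
```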
The same `main.py` in `examples/lotka` can be used (relatively) unchanged for other problems.
TODO:
- Fix the contextual self-modulation slowdown issues
- Test the installation in fresh conda environments
If you use this work, please cite the corresponding paper:
```bibtex
@inproceedings{nzoyem2025neural,
  title={Neural Context Flows for Meta-Learning of Dynamical Systems},
  author={Roussel Desmond Nzoyem and David A.W. Barton and Tom Deakin},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=8vzMLo8LDN}
}
```