Version 1.0.16 Release
Released by YasunariZHashimoto on 19 Apr, 06:59 · 944 commits to master since this release
- Add numeric include
- Fix numpy requirements
- Add AdaBound & AMSBound
- Add tile function (cf. numpy.tile or torch.repeat)
- Add CUDA implementation for the random_choice function.
- Add TopKDataCuda and TopKGradCuda function implementations.
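The new tile function is described as analogous to numpy.tile. For readers unfamiliar with that semantics, here is a quick numpy illustration of how tiling repeats an array along one or more axes (nnabla's own call signature may differ; this only shows the behavior the changelog refers to):

```python
import numpy as np

x = np.array([1, 2, 3])

# Repeat the whole array twice along its only axis.
print(np.tile(x, 2))
# [1 2 3 1 2 3]

# Tile into a 2-D layout: 2 rows, each containing 2 copies of x.
print(np.tile(x, (2, 2)))
# [[1 2 3 1 2 3]
#  [1 2 3 1 2 3]]
```

This is the same repeat-whole-array behavior as torch.repeat, as opposed to element-wise repetition.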
Install the latest nnabla by:
pip install nnabla
pip install nnabla_ext_cuda # For CUDA users
Users with Python <= 3.4 may experience errors with pip install nnabla and pip install nnabla-ext-cuda.
■ Workaround
Please install matplotlib==2.2.3 first, then re-install nnabla and nnabla_ext_cuda:
pip install matplotlib==2.2.3
pip install nnabla
pip install nnabla_ext_cuda
Note that CUDA 9.2 and cuDNN 7.4 are used as defaults if no versions are specified. You can also install the CUDA extension with specific versions from one of the following packages. See also the FAQ.
- nnabla-ext-cuda80 (CUDA 8.0 x cuDNN 7.1)
- nnabla-ext-cuda90 (CUDA 9.0 x cuDNN 7.5(win), 7.4(linux))
- nnabla-ext-cuda92 (CUDA 9.2 x cuDNN 7.5(win), 7.4(linux))
- nnabla-ext-cuda100 (CUDA 10.0 x cuDNN 7.5)
pip install nnabla
pip install nnabla_ext_cuda92 # For CUDA 9.2 x cuDNN 7.4 users
Additional setup may be required depending on your OS or environment. Please check Python Package Installation Guide for details.
To use C++ inference feature, follow the demonstration on MNIST inference in C++.
For distributed training, you need to build a binary from source. See the guide for building multi-GPU training package.