- language-agnostic PyTorch model serving
- serve a JIT-compiled PyTorch model in a production environment
- docker == 18.09.1
- wget == 1.20.1
- your JIT-traced PyTorch model (if you are not familiar with JIT tracing, please refer to the JIT Tutorial)
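If you have not traced a model yet, the sketch below shows the minimal workflow with `torch.jit.trace`. The `TinyModel` class, the input dimension of 3, and the file name `model.pt` are placeholders for illustration; substitute your own model and paths.

```python
import torch

# Placeholder model with input dimension 3; replace with your own nn.Module.
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(3, 1)

    def forward(self, x):
        return self.fc(x)

model = TinyModel().eval()
example_input = torch.rand(1, 3)  # example input recorded during tracing
traced = torch.jit.trace(model, example_input)
traced.save("model.pt")  # serialized TorchScript module to hand to the server
```

Tracing records the operations executed on the example input, so make sure the example has the same shape and dtype as real requests.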
- run
- send a request to the model server as follows (assuming your input dimension is 3)
curl -X POST -d '{"input":[1.0, 1.0, 1.0]}' localhost:8080/model/predict
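The same request can be sent from Python using only the standard library; this sketch mirrors the curl command above, assuming the server is reachable at the same host and port. The helper names `build_payload` and `predict` are illustrative, not part of the project API.

```python
import json
from urllib import request

def build_payload(inputs):
    # Serialize the inputs into the JSON body the server expects: {"input": [...]}
    return json.dumps({"input": inputs}).encode("utf-8")

def predict(inputs, url="http://localhost:8080/model/predict"):
    # POST the JSON payload and return the parsed JSON response.
    req = request.Request(
        url,
        data=build_payload(inputs),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, `predict([1.0, 1.0, 1.0])` sends the same body as the curl command shown above.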
- YongRae Jo (dreamgonfly@gmail.com)
- YoonHo Jo (cloudjo21@gmail.com)
- GiChang Lee (new.ratsgo@gmail.com)
- Seunghwan Hong
- SeungHyek Cho
- Alex Kim (hyoungseok.k@gmail.com)