
Auto annotation GPU support #2546

Merged · 13 commits · Dec 15, 2020
20 changes: 10 additions & 10 deletions README.md
@@ -61,16 +61,16 @@ via its command line tool and Python library.

## Deep learning models for automatic labeling

| Name | Type | Framework |
| ------------------------------------------------------------------------------------------------------- | ---------- | ---------- |
| [Deep Extreme Cut](/serverless/openvino/dextr/nuclio) | interactor | OpenVINO |
| [Faster RCNN](/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio) | detector | TensorFlow |
| [Mask RCNN](/serverless/openvino/omz/public/mask_rcnn_inception_resnet_v2_atrous_coco/nuclio) | detector | OpenVINO |
| [YOLO v3](/serverless/openvino/omz/public/yolo-v3-tf/nuclio) | detector | OpenVINO |
| [Text detection v4](/serverless/openvino/omz/intel/text-detection-0004/nuclio) | detector | OpenVINO |
| [Semantic segmentation for ADAS](/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio) | detector | OpenVINO |
| [Mask RCNN](/serverless/tensorflow/matterport/mask_rcnn/nuclio) | detector | TensorFlow |
| [Object reidentification](/serverless/openvino/omz/intel/person-reidentification-retail-300/nuclio) | reid | OpenVINO |
| Name | Type | Framework | CPU | GPU |
| ------------------------------------------------------------------------------------------------------- | ---------- | ---------- | --- | --- |
| [Deep Extreme Cut](/serverless/openvino/dextr/nuclio) | interactor | OpenVINO | X | |
| [Faster RCNN](/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio) | detector | TensorFlow | X | X |
| [Mask RCNN](/serverless/openvino/omz/public/mask_rcnn_inception_resnet_v2_atrous_coco/nuclio) | detector | OpenVINO | X | |
| [YOLO v3](/serverless/openvino/omz/public/yolo-v3-tf/nuclio) | detector | OpenVINO | X | |
| [Text detection v4](/serverless/openvino/omz/intel/text-detection-0004/nuclio) | detector | OpenVINO | X | |
| [Semantic segmentation for ADAS](/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio) | detector | OpenVINO | X | |
| [Mask RCNN](/serverless/tensorflow/matterport/mask_rcnn/nuclio) | detector | TensorFlow | X | |
| [Object reidentification](/serverless/openvino/omz/intel/person-reidentification-retail-300/nuclio) | reid | OpenVINO | X | |

## Online demo: [cvat.org](https://cvat.org)

2 changes: 1 addition & 1 deletion components/serverless/docker-compose.serverless.yml
@@ -2,7 +2,7 @@ version: '3.3'
services:
  serverless:
    container_name: nuclio
    image: quay.io/nuclio/dashboard:1.4.8-amd64
    image: quay.io/nuclio/dashboard:1.5.8-amd64
    restart: always
    networks:
      default:
27 changes: 1 addition & 26 deletions cvat/apps/documentation/installation.md
@@ -290,32 +290,7 @@ docker-compose -f docker-compose.yml -f components/analytics/docker-compose.anal

### Semi-automatic and automatic annotation

- You have to install `nuctl` command line tool to build and deploy serverless
functions. Download [the latest release](https://github.com/nuclio/nuclio/releases).
- Create `cvat` project inside nuclio dashboard where you will deploy new
serverless functions and deploy a couple of DL models. Commands below should
be run only after CVAT has been installed using docker-compose because it
runs nuclio dashboard which manages all serverless functions.

```bash
nuctl create project cvat
```

```bash
nuctl deploy --project-name cvat \
--path serverless/openvino/dextr/nuclio \
--volume `pwd`/serverless/openvino/common:/opt/nuclio/common \
--platform local
```

```bash
nuctl deploy --project-name cvat \
--path serverless/openvino/omz/public/yolo-v3-tf/nuclio \
--volume `pwd`/serverless/openvino/common:/opt/nuclio/common \
--platform local
```

Note: see [deploy.sh](/serverless/deploy.sh) script for more examples.
Please follow the [instructions](/cvat/apps/documentation/installation_automatic_annotation.md).

### Stop all containers

91 changes: 91 additions & 0 deletions cvat/apps/documentation/installation_automatic_annotation.md
@@ -0,0 +1,91 @@

### Semi-automatic and Automatic Annotation


> **⚠ WARNING: Do not use `docker-compose up`**
> If you did, make sure all containers are stopped with `docker-compose down`.

- To bring up CVAT with the auto annotation tool, run the following from the CVAT root directory:
```bash
docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml up -d
```
If you made any changes to the docker-compose files, make sure to add `--build` at the end.
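For example, this is the same bring-up command with a forced rebuild (run from the CVAT root directory):

```bash
docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml up -d --build
```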

To stop the containers, simply run:

```bash
docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml down
```

- You have to install the `nuctl` command line tool to build and deploy serverless
functions. Download [version 1.5.8](https://github.com/nuclio/nuclio/releases).
It is important that the version you download matches the version in
[docker-compose.serverless.yml](/components/serverless/docker-compose.serverless.yml).
After downloading nuclio, give it execute permission and create a symlink:
```bash
sudo chmod +x nuctl-<version>-linux-amd64
sudo ln -sf $(pwd)/nuctl-<version>-linux-amd64 /usr/local/bin/nuctl
```
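You can then check that the `nuctl` binary on your `PATH` reports the expected version (a quick sanity check; the exact output format may differ between releases):

```bash
nuctl version
```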

- Create a `cvat` project inside the nuclio dashboard where you will deploy new serverless functions and deploy a couple of DL models. The commands below should be run only after CVAT has been installed using `docker-compose`, because that is what starts the nuclio dashboard which manages all serverless functions.

```bash
nuctl create project cvat
```

```bash
nuctl deploy --project-name cvat \
--path serverless/openvino/dextr/nuclio \
--volume `pwd`/serverless/openvino/common:/opt/nuclio/common \
--platform local
```

```bash
nuctl deploy --project-name cvat \
--path serverless/openvino/omz/public/yolo-v3-tf/nuclio \
--volume `pwd`/serverless/openvino/common:/opt/nuclio/common \
--platform local
```
**Note:**
- See [deploy_cpu.sh](/serverless/deploy_cpu.sh) for more examples.
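- After deploying, you can list the functions and check their state:

```bash
nuctl get function
```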

#### GPU Support
You will need to install the Nvidia Container Toolkit and make sure that your Docker installation supports GPUs; follow the [Nvidia docker instructions](https://www.tensorflow.org/install/docker#gpu_support).
You will also need to add `--resource-limit nvidia.com/gpu=1` to the nuclio deployment command.
As an example, the command below deploys the Faster RCNN detector on the GPU:

```bash
nuctl deploy tf-faster-rcnn-inception-v2-coco-gpu \
--project-name cvat --path "serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio" --platform local \
--base-image tensorflow/tensorflow:2.1.1-gpu \
--desc "Faster RCNN from Tensorflow Object Detection GPU API" \
--image cvat/tf.faster_rcnn_inception_v2_coco_gpu \
--resource-limit nvidia.com/gpu=1
```

**Note:**
- Since the model is loaded during deployment, the number of GPU functions you can deploy is limited by your GPU memory.

- See [deploy_gpu.sh](/serverless/deploy_gpu.sh) script for more examples.
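- If a GPU function fails to deploy or run, first verify that Docker itself can reach the GPU (a minimal check in the spirit of the Nvidia Container Toolkit documentation; the CUDA image tag here is only an example):

```bash
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```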

#### Debugging Nuclio Functions

- You can open the nuclio dashboard at [localhost:8070](http://localhost:8070). Make sure the status of your functions is shown as running, without any errors.

- To check for internal server errors, run `docker ps -a` to see the list of containers. Find the container that you are interested in, e.g. `nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu`, and then check its logs with

```bash
docker logs <name of your container>
```
e.g.,

```bash
docker logs nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu
```
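To keep watching the output while the function is running, you can follow the logs instead:

```bash
docker logs -f --tail 100 nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu
```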

- If you would like to debug code inside a container, you can use VS Code to attach directly to the container ([instructions](https://code.visualstudio.com/docs/remote/attach-container)). To apply your changes, make sure to restart the container:
```bash
docker restart <name_of_the_container>
```

> **⚠ WARNING:**
> Do not use the nuclio dashboard to stop the container: with any modification it rebuilds the container, and you will lose your changes.
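If you only need to stop and resume a modified function without losing your changes, use plain Docker commands instead (a minimal sketch; the container name is the example one from above):

```bash
docker stop nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu
docker start nuclio-nuclio-tf-faster-rcnn-inception-v2-coco-gpu
```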
1 change: 1 addition & 0 deletions serverless/deploy.sh → serverless/deploy_cpu.sh
@@ -1,4 +1,5 @@
#!/bin/bash
# Sample commands to deploy nuclio functions on CPU

SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

15 changes: 15 additions & 0 deletions serverless/deploy_gpu.sh
@@ -0,0 +1,15 @@
#!/bin/bash
# Sample commands to deploy nuclio functions on GPU

SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

nuctl create project cvat

nuctl deploy --project-name cvat \
--path "$SCRIPT_DIR/tensorflow/faster_rcnn_inception_v2_coco/nuclio" \
--platform local --base-image tensorflow/tensorflow:2.1.1-gpu \
--desc "Faster RCNN from Tensorflow Object Detection GPU API" \
--image cvat/tf.faster_rcnn_inception_v2_coco_gpu \
--resource-limit nvidia.com/gpu=1

nuctl get function
@@ -35,7 +35,8 @@ def infer(self, image):
        width, height = image.size
        if width > 1920 or height > 1080:
            image = image.resize((width // 2, height // 2), Image.ANTIALIAS)
        image_np = np.array(image.getdata()).reshape((image.height, image.width, 3)).astype(np.uint8)
        image_np = np.array(image.getdata())[:, :3].reshape(
            (image.height, image.width, -1)).astype(np.uint8)
        image_np = np.expand_dims(image_np, axis=0)

        return self.session.run(