supporting 22.04
woensug-choi committed Dec 4, 2023
1 parent fb7758f commit 7f172ee
Showing 6 changed files with 190 additions and 81 deletions.
151 changes: 76 additions & 75 deletions contents/dave_sensors/Multibeam-Forward-Looking-Sonar.md
***

# Installation

The CUDA library depends on the compatibility support of NVIDIA drivers ([CUDA Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/)). Also, older CUDA library versions are not officially supported on older versions of Ubuntu.

This setup has been tested on Ubuntu 22.04 with NVIDIA driver 535 and CUDA 12.2. The host machine needs the correct versions installed even when using Docker. You can check your NVIDIA driver with `nvidia-smi` and your CUDA version with `nvcc --version`.
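A quick way to confirm both versions from a terminal (a sketch; the `--query-gpu` flag assumes a reasonably recent driver):

```bash
# Driver version as reported by the NVIDIA kernel driver
nvidia-smi --query-gpu=driver_version --format=csv,noheader
# CUDA toolkit version (only available once the toolkit is installed)
nvcc --version | grep release
```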

If you are on Ubuntu 22.04, go with the Install on the Host option, since the Docker image could install a different version of the NVIDIA driver and CUDA.

## Option A. Install on the Host

### CUDA Library Installation
This plugin incurs high computation costs. GPU parallelization is used with the NVIDIA CUDA library, so a discrete NVIDIA graphics card is required.

* The most straightforward way to install CUDA support on Ubuntu 22.04 is:
* Install NVIDIA driver 545 via `Additional Drivers` on Ubuntu (needs a restart)
* Here's [one example with graphical and command-line options](https://linuxhint.com/update-nvidia-drivers-ubuntu-22-04-lts/).

* Install CUDA 12.3
```bash
# Substitute $distro and $arch for your system, e.g. ubuntu2204 and x86_64
wget https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
# Make sure you are getting the 12.3 versions (pin with `sudo apt-get install cuda-toolkit-12-3` if needed)
sudo apt-get install cuda-toolkit
```
* This installs the NVIDIA CUDA toolkit from NVIDIA's apt repository.

The final step is to add the CUDA executables and libraries to your paths.
Add these lines to `~/.bashrc` to include them. You can apply them to the current shell with `source ~/.bashrc`.

```bash
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```
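After re-sourcing `~/.bashrc`, you can verify that the paths resolve (a minimal sketch):

```bash
# nvcc should now resolve to the toolkit under /usr/local/cuda
which nvcc
# The CUDA libraries should appear on the library path
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep cuda
```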

Once you are done, you will see something like the following message from the `nvidia-smi` command.
```bash
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08 Driver Version: 545.23.08 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Quadro RTX 3000 with Max... On | 00000000:01:00.0 Off | N/A |
| N/A 48C P0 22W / 65W | 606MiB / 6144MiB | 9% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1400 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 1993 C+G ...libexec/gnome-remote-desktop-daemon 596MiB |
+---------------------------------------------------------------------------------------+
```

Also, check the CUDA version with `nvcc --version`; it should report version 12.3.
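If you want a sanity check beyond version numbers, a trivial CUDA program can confirm that the toolkit and driver cooperate (a sketch; the file name and location are arbitrary):

```bash
# Compile and run a one-kernel program; it should print from the GPU
cat > /tmp/hello.cu <<'EOF'
#include <cstdio>
__global__ void hello() { printf("Hello from the GPU\n"); }
int main() {
  hello<<<1, 1>>>();
  cudaDeviceSynchronize();
  return 0;
}
EOF
nvcc /tmp/hello.cu -o /tmp/hello && /tmp/hello
```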


***

## Option B. Use Docker
This method assumes that you have followed [Use a Docker Image](/dave.doc/contents/installation/Docker-Development-Image) for system preparation.

The following commands include the `-c` option, which provides the CUDA library pre-installed.
```bash
# Install virtual environment (venv) for pip
sudo apt-get install python3-venv
# (intermediate setup steps elided in this diff)
git checkout cuda-dev
# Build and run the Docker image (this may take up to 30 min the first time)
./build.bash noetic
# The new run.bash command with the -c option
./run.bash -c dockwater:noetic
```
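Once the container is up, it is worth confirming that the GPU is visible inside it; you should see the same `nvidia-smi` table as on the host (a sketch):

```bash
# Run inside the container started by run.bash
nvidia-smi
```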

```bash
roslaunch nps_uw_multibeam_sonar sonar_tank_blueview_p900_nps_multibeam.launch
# At new terminal window
. ~/rocker_venv_cuda/bin/activate
cd ~/rocker_venv_cuda/dockwater
./join.bash dockwater_noetic_runtime
cd ~/uuv_ws
source devel/setup.bash
```

***

### Clone Repositories
### Option A. Using vcs tool
```
cd ~/uuv_ws/src
vcs import --skip-existing --input dave/extras/repos/multibeam_sim.repos .
```
vcs will use the `multibeam_sim.repos` file in the dave repository, which includes both `nps_uw_multibeam_sonar` and `marine_msgs`.
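You can confirm the import succeeded by listing the workspace sources; both repositories should be present (directory names follow the `.repos` file):

```bash
ls ~/uuv_ws/src
```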
### Option B. Git clone manually
#### Multibeam sonar plugin repository
```
git clone https://github.com/Field-Robotics-Lab/nps_uw_multibeam_sonar.git
```
#### Acoustic message repository
Final results are exported as a [ProjectedSonarImage.msg](https://github.com/apl-ocean-engineering/marine_msgs/blob/main/marine_acoustic_msgs/msg/ProjectedSonarImage.msg), UW APL's sonar image msg format. Make sure to include the repository in the workspace before compiling.
```
git clone https://github.com/apl-ocean-engineering/marine_msgs.git
```

## Gazebo Coordinate Frames

The plugin outputs sonar data using the [marine_acoustic_msgs/ProjectedSonarImage](https://github.com/apl-ocean-engineering/marine_msgs/blob/main/marine_acoustic_msgs/msg/ProjectedSonarImage.msg) ROS message. This message defines the bearing of each sonar beam as a rotation around a **downward-pointing** axis, such that negative bearings are to port of forward and positive to starboard (if the sonar is installed in its "typical" forward-looking orientation).

The plugin will use the Gazebo frame name as the `frame_id` in the ROS message. For the sonar data to re-project correctly into 3D space, it **must** be attached to an X-Forward, Y-Starboard, Z-Down frame in Gazebo.

- Use the following `docker cp` command in another terminal window:

```bash
docker cp dockwater_noetic_runtime:/tmp/SonarRawData_000001.csv .
docker cp dockwater_noetic_runtime:/tmp/SonarRawData_beam_angles.csv .
```
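If you are unsure of the exact file names, you can list them inside the container first (a sketch; the container name matches the `run.bash` session above):

```bash
# List the raw-data CSV files the plugin wrote inside the container
docker exec dockwater_noetic_runtime ls /tmp | grep SonarRawData
```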

- Plotting scripts
The final output of the sonar image is sent in two types.
- Topic name `sonar_image`
  - This is a msg used internally for plotting with the `image_view` package of ROS.
  - The data is generated using OpenCV's `CV_8UC1` format, normalized with `cv::NORM_MINMAX`, colorized with `cv::COLORMAP_HOT`, and converted to msg format using `BGR8` encoding.
- Topic name `sonar_image_raw`
  - This is a msg matched with [UW APL's ProjectedSonarImage.msg](https://github.com/apl-ocean-engineering/marine_msgs/blob/main/marine_acoustic_msgs/msg/ProjectedSonarImage.msg#L5).
  - The data is in `uint8`.
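To view the colorized stream, something like the following should work (the topic namespace depends on the robot and launch file, so the `/blueview_p900` prefix below is an assumption):

```bash
# Display the colorized sonar image with image_view
rosrun image_view image_view image:=/blueview_p900/sonar_image
```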
#### Rviz Sonar Image Viewer Plugin
12 changes: 11 additions & 1 deletion contents/installation/Build-Dave-Environment.md
Now that you've set up your development environment and obtained the source code, you can build the project.

Here, [catkin_tools](https://catkin-tools.readthedocs.io/en/latest/installing.html) is used to build the project. It compiles in parallel using the number of cores on the machine, supports all the options of `catkin_make`, and can be used as a drop-in replacement for `catkin_make` in most cases.

First, install catkin_tools to use `catkin build`.

- If using Docker, you don't need this step; `catkin build` is already available.

```bash
# Install build tool catkin_tools
pip3 install -U catkin_tools

# Optionally, configure the workspace to install the packages
catkin config --install
```

Then build the source code (this may take up to about 10 minutes).

- If using Docker, the commands below should be typed inside the Docker environment after `./run.bash dockwater:noetic` or `./run.bash -c dockwater:noetic`.

```bash
cd ~/uuv_ws
catkin build
```
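During later development you rarely need a full rebuild; catkin_tools can rebuild a single package and its dependencies (a sketch):

```bash
# Rebuild only the multibeam sonar plugin package
catkin build nps_uw_multibeam_sonar
```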

2 changes: 1 addition & 1 deletion contents/installation/Install-Directly-on-Host.md

This tutorial will walk you through the setup required to make a host machine ready to build and run the Dave simulations. Note that:
* your host will need to satisfy the minimum [System Requirements](/dave.doc/contents/installation/System-Requirements), and
* the steps below assume you are running **Ubuntu 20.04**. If you have **Ubuntu 22.04**, follow [Install on Ubuntu 22.04 Jammy](/dave.doc/contents/installation/Install-on-UbuntuJammy).

## Install all dependencies
Upgrade to the latest packages:
95 changes: 95 additions & 0 deletions contents/installation/Install-on-UbuntuJammy.md
---
last_modified_date: 26/02/2022
layout: default
title: Install on Ubuntu 22.04 Jammy
nav_order: 2
parent: Installation
---

This tutorial describes how to install directly on **Ubuntu 22.04 Jammy**.

## Install ROS Noetic on Ubuntu 22.04 [Ref](https://github.com/tinkerfuroc/ros_noetic_on_jammy)

Since ROS Noetic only officially supports Ubuntu 20.04, we need to add the ROS 2 package source and build Noetic from source.


```bash
sudo apt-get update && sudo apt-get install -y curl
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null
sudo apt-get update
```

Install basic dependencies

```bash
sudo apt-get install -y python3-pip python3-rosdep python3-rosinstall-generator python3-vcstools python3-vcstool build-essential python3-numpy
sudo pip3 install -U rosdep rosinstall_generator vcstool
sudo pip3 install --upgrade setuptools
sudo apt-get install -y build-essential
sudo apt-get install -y cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev libfltk1.3-dev
```

Prepare rosdep and installation workspace

```bash
# Initialize rosdep
sudo rosdep init
rosdep update
# Make workspace
mkdir ~/ros_catkin_ws
cd ~/ros_catkin_ws
# Download source script
rosinstall_generator desktop_full --rosdistro noetic --deps --tar > noetic-desktop-full.rosinstall
# Download sources from source script
mkdir ./src
vcs import --input noetic-desktop-full.rosinstall ./src
```

Patch source code and install dependencies

```bash
# Patch source
sed -i -e s/"<run_depend>hddtemp<\/run_depend>"/"<\!-- <run_depend>hddtemp<\/run_depend> -->"/g ./src/diagnostics/diagnostic_common_diagnostics/package.xml
# Install dependencies with rosdep
rosdep install --from-paths ./src --ignore-packages-from-source --rosdistro noetic -y
```

Patch source code to compile

```bash
sed -i -e s/"COMPILER_SUPPORTS_CXX11"/"COMPILER_SUPPORTS_CXX17"/g ./src/geometry/tf/CMakeLists.txt
sed -i -e s/"c++11"/"c++17"/g ./src/geometry/tf/CMakeLists.txt
sed -i -e s/"CMAKE_CXX_STANDARD 14"/"CMAKE_CXX_STANDARD 17"/g ./src/kdl_parser/kdl_parser/CMakeLists.txt
sed -i -e s/"CMAKE_CXX_STANDARD 11"/"CMAKE_CXX_STANDARD 17"/g ./src/laser_geometry/CMakeLists.txt
sed -i -e s/"c++11"/"c++17"/g ./src/resource_retriever/CMakeLists.txt
sed -i -e s/"COMPILER_SUPPORTS_CXX11"/"COMPILER_SUPPORTS_CXX17"/g ./src/robot_state_publisher/CMakeLists.txt
sed -i -e s/"c++11"/"c++17"/g ./src/robot_state_publisher/CMakeLists.txt
sed -i -e s/"c++11"/"c++17"/g ./src/rqt_image_view/CMakeLists.txt
sed -i -e s/"CMAKE_CXX_STANDARD 14"/"CMAKE_CXX_STANDARD 17"/g ./src/urdf/urdf/CMakeLists.txt

sed -i -e s/"CMAKE_CXX_STANDARD 14"/"CMAKE_CXX_STANDARD 17"/g ./src/perception_pcl/pcl_ros/CMakeLists.txt
sed -i -e s/"c++14"/"c++17"/g ./src/perception_pcl/pcl_ros/CMakeLists.txt
sed -i -e s/"CMAKE_CXX_STANDARD 11"/"CMAKE_CXX_STANDARD 17"/g ./src/laser_filters/CMakeLists.txt
```
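You can spot-check that a patch took effect before building, for example (a sketch using the tf package):

```bash
# Should now show c++17 where c++11 used to be
grep -n "c++17" ./src/geometry/tf/CMakeLists.txt
```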

Replace rosconsole

```bash
rm -rf ./src/rosconsole
cd src
git clone https://github.com/tatsuyai713/rosconsole
cd ..
```

Build (this could take up to 30 minutes)

```bash
./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release
```

Source the workspace to use it. You may add this line to `~/.bashrc` to make it the default.

```bash
source ~/ros_catkin_ws/install_isolated/setup.bash
```
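To confirm the from-source installation is active (a sketch; `rosversion` ships with the desktop install):

```bash
source ~/ros_catkin_ws/install_isolated/setup.bash
rosversion -d   # should print: noetic
```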
