diff --git a/contents/dave_sensors/Multibeam-Forward-Looking-Sonar.md b/contents/dave_sensors/Multibeam-Forward-Looking-Sonar.md
index 5e6020f..3e8cd73 100644
--- a/contents/dave_sensors/Multibeam-Forward-Looking-Sonar.md
+++ b/contents/dave_sensors/Multibeam-Forward-Looking-Sonar.md
@@ -140,9 +140,74 @@ The model is based on a ray-based spatial discretization of the model facets, be
 ***
 # Installation
-## Option A. Use Docker
-The simplest way to prepare your machine with the CUDA library would be to use the Docker environment. Following commands include `-c`, which provides the Cuda library.
+
+The CUDA library depends on compatibility support from the NVIDIA driver ([CUDA Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/)). Also, older CUDA library versions are not officially supported on newer Ubuntu releases.
+
+Here, this has been tested on Ubuntu 22.04 with NVIDIA driver 535 and CUDA 12.2. The host machine needs compatible versions installed even when using Docker. You can check your NVIDIA driver with `nvidia-smi` and your CUDA version with `nvcc --version`.
+
+If you are on Ubuntu 22.04, go with the Install on the Host option, since the Docker image could install a different version of the NVIDIA driver and CUDA.
+
+## Option A. Install on the Host
+
+### CUDA Library Installation
+This plugin demands high computation costs, so GPU parallelization with the NVIDIA CUDA library is used. A discrete NVIDIA graphics card is required.
+
+* The most straightforward way to install CUDA support on Ubuntu 22.04 is:
+  * Install NVIDIA driver 545 via `Additional Drivers` on Ubuntu (needs a restart)
+    * Here's [one example with graphical and command-line options](https://linuxhint.com/update-nvidia-drivers-ubuntu-22-04-lts/).
+
+  * Install CUDA 12.3
+    ```bash
+    # Replace $distro/$arch with your platform, e.g. ubuntu2204/x86_64
+    wget https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/cuda-keyring_1.1-1_all.deb
+    sudo dpkg -i cuda-keyring_1.1-1_all.deb
+    sudo apt update
+    # Make sure you are getting the 12.3 versions (cuda-toolkit-12-3 pins the version explicitly)
+    sudo apt-get install cuda-toolkit
+    ```
+    * This installs the NVIDIA CUDA toolkit from NVIDIA's CUDA repository.
+
+  The final step is to add the paths to the CUDA executables and libraries.
+  Add these lines to `.bashrc` to include them, then reload it with `source ~/.bashrc`.
+
+  ```bash
+  export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
+  export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
+  ```
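+
+Beyond `nvidia-smi`, a minimal sanity check such as the sketch below can confirm that `nvcc` is on the new `PATH` and can actually compile and run a kernel. This extra check is not part of the plugin setup, and the scratch file name under `/tmp` is just an example.
+
+```bash
+# Write a tiny CUDA program to a hypothetical scratch file
+cat << 'EOF' > /tmp/cuda_check.cu
+#include <cstdio>
+__global__ void hello() { printf("Hello from GPU thread %d\n", (int)threadIdx.x); }
+int main() { hello<<<1, 4>>>(); return cudaDeviceSynchronize(); }
+EOF
+# Compile with the freshly installed toolkit and run it on the GPU
+nvcc /tmp/cuda_check.cu -o /tmp/cuda_check && /tmp/cuda_check
+```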
+
+Once you are done, you will see something like the following message with the `nvidia-smi` command.
+```bash
++---------------------------------------------------------------------------------------+
+| NVIDIA-SMI 545.23.08              Driver Version: 545.23.08    CUDA Version: 12.3     |
+|-----------------------------------------+----------------------+----------------------+
+| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
+| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
+|                                         |                      |               MIG M. |
+|=========================================+======================+======================|
+|   0  Quadro RTX 3000 with Max...    On  | 00000000:01:00.0 Off |                  N/A |
+| N/A   48C    P0              22W /  65W |    606MiB /  6144MiB |      9%      Default |
+|                                         |                      |                  N/A |
++-----------------------------------------+----------------------+----------------------+
+
++---------------------------------------------------------------------------------------+
+| Processes:                                                                             |
+|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
+|        ID   ID                                                             Usage      |
+|=========================================================================================|
+|    0   N/A  N/A      1400      G   /usr/lib/xorg/Xorg                            4MiB |
+|    0   N/A  N/A      1993    C+G   ...libexec/gnome-remote-desktop-daemon      596MiB |
++---------------------------------------------------------------------------------------+
+```
+
+Also, check the CUDA version with `nvcc --version`; you should see version 12.3.
+
+
+***
+
+## Option B. Use Docker
+This method assumes that you have followed [Use a Docker Image](/dave.doc/contents/installation/Docker-Development-Image) for system preparation.
+
+The following commands include the `-c` option for `run.bash`, which provides a container with the CUDA library pre-installed.
+```bash
 # Install virtual environment (venv) for pip
 sudo apt-get install python3-venv
@@ -167,6 +232,7 @@ git checkout cuda-dev
 # Build and run docker image (This may take up to 30 min to finish at the first time)
 ./build.bash noetic
+# The new run.bash command with the -c option
 ./run.bash -c dockwater:noetic
 ```
@@ -189,76 +255,11 @@ roslaunch nps_uw_multibeam_sonar sonar_tank_blueview_p900_nps_multibeam.launch
 # At new terminal window
 . ~/rocker_venv_cuda/bin/activate
 cd ~/rocker_venv_cuda/dockwater
-./join.bash noetic_runtime
+./join.bash dockwater_noetic_runtime
 cd ~/uuv_ws
 source devel/setup.bash
 ```
 
-## Option B. Install on the Host
-
-### CUDA Library Installation
-This plugin demands high computation costs. GPU parallelization is used with the Nvidia CUDA Library. A discrete NVIDIA Graphics card is required.
-
-* The most straightforward way to install CUDA support on Ubuntu 20 is:
-  ```
-  sudo apt update
-  sudo apt install nvidia-cuda-toolkit
-  ```
-  * This installs the Nvidia CUDA toolkit from the Ubuntu repository.
-  * If you prefer to install the latest version directly from the CUDA repository, instructions are available here: https://linuxconfig.org/how-to-install-cuda-on-ubuntu-20-04-focal-fossa-linux
-
-**Install Cuda**. Install CUDA 11.1 on the host machine (Recommended installation method is to use local run file [download link](https://developer.nvidia.com/cuda-11.1.1-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=runfilelocal)
-If you find a conflicting Nvidia driver, remove the previous driver and reinstall using a downloaded run file)
-This installation file will install both CUDA 11.1 and the NVIDIA graphics driver 455.32, which is best compatible with CUDA 11.1
-
-If you have already dealt with NVIDIA graphics driver at Ubuntu Software&Updates/Additional drivers to use proprietary drivers, revert it to use 'Using X.Org X server' to avoid 'The driver already installed, remove beforehand' kind of msg when you run the installation file.
- -```bash -# Remove nvidia drivers -sudo apt remove nvidia-* -sudo apt autoremove -# Disable nouveau driver -sudo bash -c "echo blacklist nouveau > /etc/modprobe.d/blacklist-nvidia-nouveau.conf" -sudo bash -c "echo options nouveau modeset=0 >> /etc/modprobe.d/blacklist-nvidia-nouveau.conf" -sudo update-initramfs -u -# Reboot -sudo reboot -# Download the run file and run -# wget https://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda_11.1.1_455.32.00_linux.run -sudo sh cuda_11.1.1_455.32.00_linux.run -``` -Once you are done, you will see something like the following msg with the `nvidia-smi` command. -``` -+-----------------------------------------------------------------------------+ -| NVIDIA-SMI 455.32.00 Driver Version: 455.32.00 CUDA Version: 11.1 | -|-------------------------------+----------------------+----------------------+ -| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | -| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | -| | | MIG M. | -|===============================+======================+======================| -| 0 GeForce GTX 105... Off | 00000000:01:00.0 Off | N/A | -| N/A 42C P8 N/A / N/A | 7MiB / 4040MiB | 0% Default | -| | | N/A | -+-------------------------------+----------------------+----------------------+ - -+-----------------------------------------------------------------------------+ -| Processes: | -| GPU GI CI PID Type Process name GPU Memory | -| ID ID Usage | -|=============================================================================| -| No running processes found | -+-----------------------------------------------------------------------------+ - -``` -The final step is to add paths to the Cuda executables and libraries. -Add these lines to `.bashrc` to include them. You may resource it by `source ~/.bshrc` -``` -export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}$ -export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} -``` - -*** - ### Clone Repositories ### Option A. Using vcs tool @@ -266,7 +267,7 @@ export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PAT cd ~/uuv_ws/src vcs import --skip-existing --input dave/extras/repos/multibeam_sim.repos . ``` -vcs will use `multibeam_sim.repos` file inthe dave repository that includes both `nps_uw_multibeam_sonar` and `hydrographic_msgs`. +vcs will use `multibeam_sim.repos` file inthe dave repository that includes both `nps_uw_multibeam_sonar` and `marine_msgs`. ### Option B. Git clone manually #### Multibeam sonar plugin repository @@ -277,9 +278,9 @@ git clone https://github.com/Field-Robotics-Lab/nps_uw_multibeam_sonar.git ``` #### Acoustic message repository -Final results are exported as a [ProjectedSonarImage.msg](https://github.com/apl-ocean-engineering/hydrographic_msgs/blob/main/acoustic_msgs/msg/ProjectedSonarImage.msg) of UW APL's sonar image msg format. Make sure to include the repository on the workspace before compiling. +Final results are exported as a [ProjectedSonarImage.msg](https://github.com/apl-ocean-engineering/marine_msgs/blob/main/marine_acoustic_msgs/msg/ProjectedSonarImage.msg) of UW APL's sonar image msg format. Make sure to include the repository on the workspace before compiling. ``` -git clone https://github.com/apl-ocean-engineering/hydrographic_msgs.git +git clone https://github.com/apl-ocean-engineering/marine_msgs.git ``` @@ -336,7 +337,7 @@ There are two types of multibeam sonar plugin in the repository. 
 Raster version
 
 ## Gazebo Coordinate Frames
 
-The plugin outputs sonar data using the [acoustic_msgs/ProjectedSonarImage](https://github.com/apl-ocean-engineering/hydrographic_msgs/blob/main/acoustic_msgs/msg/ProjectedSonarImage.msg) ROS message. This message defines the bearing of each sonar beam as a rotation around a **downward-pointing** axis, such that negative bearings are to port of forward and positive to starboard (if the sonar is installed in it"s "typical" forward-looking orientation).
+The plugin outputs sonar data using the [marine_acoustic_msgs/ProjectedSonarImage](https://github.com/apl-ocean-engineering/marine_msgs/blob/main/marine_acoustic_msgs/msg/ProjectedSonarImage.msg) ROS message. This message defines the bearing of each sonar beam as a rotation around a **downward-pointing** axis, such that negative bearings are to port of forward and positive to starboard (if the sonar is installed in its "typical" forward-looking orientation).
 
 The plugin will use the Gazebo frame name as the `frame_id` in the ROS message. For the sonar data to re-project correctly into 3D space, it **must** be attached to an X-Forward, Y-Starboard, Z-Down frame in Gazebo.
 
@@ -438,8 +439,8 @@ Calculation settings including Ray skips, Max distance, writeLog/interval, Debug
 
 - Use following `docker cp` command at another terminal window
 ```bash
-docker cp noetic_runtime:/tmp/SonarRawData_000001.csv .
-docker cp noetic_runtime:/tmp/SonarRawData_beam_angles.csv .
+docker cp dockwater_noetic_runtime:/tmp/SonarRawData_000001.csv .
+docker cp dockwater_noetic_runtime:/tmp/SonarRawData_beam_angles.csv .
 ```
 
 - Plotting scripts
@@ -574,7 +575,7 @@ The final output of the sonar image is sent in two types.
   - This is a msg used internally to plot using with `image_view` package of ROS.
   - The data is generated using OpenCV's `CV_8UC1` format, normalized with `cv::NORM_MINMAX`, colorized with `cv::COLORMAP_HOT`, and changed into msg format using `BGR8` format
 - Topic name `sonar_image_raw`
-  - This is a msg matched with [UW APL's ProjectedSonarImage.msg](https://github.com/apl-ocean-engineering/hydrographic_msgs/blob/main/acoustic_msgs/msg/ProjectedSonarImage.msg#L5).
+  - This is a msg matched with [UW APL's ProjectedSonarImage.msg](https://github.com/apl-ocean-engineering/marine_msgs/blob/main/marine_acoustic_msgs/msg/ProjectedSonarImage.msg#L5).
   - The data is in `uint8`.
 
 #### Rviz Sonar Image Viewer Plugin
diff --git a/contents/installation/Build-Dave-Environment.md b/contents/installation/Build-Dave-Environment.md
index 468d992..a589228 100644
--- a/contents/installation/Build-Dave-Environment.md
+++ b/contents/installation/Build-Dave-Environment.md
@@ -10,14 +10,24 @@ Now that you've set up your development environment and obtained the source code
 
 Here, [catkin_tools](https://catkin-tools.readthedocs.io/en/latest/installing.html) is used to build the project. It compiles in parallel using number of cores in the machine. It supports all the options of `catkin_make` and can be used as a replacement for `catkin_make` in most cases as it is a drop-in replacement for `catkin_make`.
 
+First, install catkin_tools so that `catkin build` is available.
+
+- If using Docker, you can skip this step; `catkin build` is already available in the container.
+
 ```bash
 # Install build tool catkin_tools
 pip3 install -U catkin_tools
 
 # Optionally, you can configure to install the packages
 catkin config --install
+```
+
+Then build the source code (this may take up to about 10 minutes).
 
-# Build (this may take upto about 10 minutes)
+- If using Docker, the commands below should be run inside the Docker environment after `./run.bash dockwater:noetic` or `./run.bash -c dockwater:noetic`.
+
+```bash
+cd ~/uuv_ws
 catkin build
 ```
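+
+After the build finishes, the workspace has to be sourced before the Dave packages are visible to ROS. Here is a minimal sketch; it assumes the default `~/uuv_ws` workspace used above (if you enabled `catkin config --install`, source `install/setup.bash` instead).
+
+```bash
+# Source the freshly built workspace (add this line to ~/.bashrc to make it permanent)
+source ~/uuv_ws/devel/setup.bash
+# Optional sanity check: a Dave launch package should now be findable
+rospack find dave_demo_launch
+```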
diff --git a/contents/installation/Install-Directly-on-Host.md b/contents/installation/Install-Directly-on-Host.md
index 8948a80..80da784 100644
--- a/contents/installation/Install-Directly-on-Host.md
+++ b/contents/installation/Install-Directly-on-Host.md
@@ -8,7 +8,7 @@ parent: Installation
 This tutorial will walk you through the setup required to make a host machine ready to build and run the Dave simulations. Note that:
 * your host will need to satisfy the minimum [System Requirements](/dave.doc/contents/installation/System-Requirements), and
-* the steps below assume you are running **Ubuntu 20.04**.
+* the steps below assume you are running **Ubuntu 20.04**. If you have **Ubuntu 22.04**, follow [Install on Ubuntu 22.04 Jammy](/dave.doc/contents/installation/Install-on-UbuntuJammy).
 
 ## Install all dependencies
 Upgrade to the latest packages:
diff --git a/contents/installation/Install-on-UbuntuJammy.md b/contents/installation/Install-on-UbuntuJammy.md
new file mode 100644
index 0000000..50d2694
--- /dev/null
+++ b/contents/installation/Install-on-UbuntuJammy.md
@@ -0,0 +1,95 @@
+---
+last_modified_date: 26/02/2022
+layout: default
+title: Install on Ubuntu 22.04 Jammy
+nav_order: 2
+parent: Installation
+---
+
+This tutorial covers installing Dave directly on **Ubuntu 22.04 Jammy**.
+
+## Install ROS Noetic on Ubuntu 22.04 [Ref](https://github.com/tinkerfuroc/ros_noetic_on_jammy)
+
+Since ROS Noetic only officially supports Ubuntu 20.04, we need to add the ROS 2 package source and build Noetic from source.
+
+
+```bash
+sudo apt-get update && sudo apt-get install -y curl
+sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
+echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. 
/etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null +sudo apt-get update +``` + +Install basic dependencies + +```bash +sudo apt-get install -y python3-pip python3-rosdep python3-rosinstall-generator python3-vcstools python3-vcstool build-essential python3-numpy +sudo pip3 install -U rosdep rosinstall_generator vcstool +sudo pip3 install --upgrade setuptools +sudo apt-get install -y build-essential +sudo apt-get install -y cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev libfltk1.3-dev +``` + +Prepare rosdep and installation workspace + +```bash +# Initiate Rosdep +rosdep init +rosdep update +# Make workspace +mkdir ~/ros_catkin_ws +cd ~/ros_catkin_ws +# Download source script +rosinstall_generator desktop_full --rosdistro noetic --deps --tar > noetic-desktop-full.rosinstall +# Download sources from source script +mkdir ./src +vcs import --input noetic-desktop-full.rosinstall ./src +``` + +Patch source code and install dependencies + +```bash +# Patch source +sed -i -e s/"hddtemp<\/run_depend>"/"<\!-- hddtemp<\/run_depend> -->"/g ./src/diagnostics/diagnostic_common_diagnostics/package.xml +# Install dependencies with rosdep +rosdep install --from-paths ./src --ignore-packages-from-source --rosdistro noetic -y +``` + +Patch source code to compile + +```bash +sed -i -e s/"COMPILER_SUPPORTS_CXX11"/"COMPILER_SUPPORTS_CXX17"/g ./src/geometry/tf/CMakeLists.txt +sed -i -e s/"c++11"/"c++17"/g ./src/geometry/tf/CMakeLists.txt +sed -i -e s/"CMAKE_CXX_STANDARD 14"/"CMAKE_CXX_STANDARD 17"/g ./src/kdl_parser/kdl_parser/CMakeLists.txt +sed -i -e s/"CMAKE_CXX_STANDARD 11"/"CMAKE_CXX_STANDARD 17"/g ./src/laser_geometry/CMakeLists.txt +sed -i -e s/"c++11"/"c++17"/g ./src/resource_retriever/CMakeLists.txt +sed -i -e s/"COMPILER_SUPPORTS_CXX11"/"COMPILER_SUPPORTS_CXX17"/g ./src/robot_state_publisher/CMakeLists.txt +sed -i -e s/"c++11"/"c++17"/g ./src/robot_state_publisher/CMakeLists.txt +sed -i -e s/"c++11"/"c++17"/g ./src/rqt_image_view/CMakeLists.txt +sed -i -e s/"CMAKE_CXX_STANDARD 14"/"CMAKE_CXX_STANDARD 17"/g ./src/urdf/urdf/CMakeLists.txt + +sed -i -e s/"CMAKE_CXX_STANDARD 14"/"CMAKE_CXX_STANDARD 17"/g ./src/perception_pcl/pcl_ros/CMakeLists.txt +sed -i -e s/"c++14"/"c++17"/g ./src/perception_pcl/pcl_ros/CMakeLists.txt +sed -i -e s/"CMAKE_CXX_STANDARD 11"/"CMAKE_CXX_STANDARD 17"/g ./src/laser_filters/CMakeLists.txt +``` + +Replace rosconsole + +```bash +rm -rf ./src/rosconsole +cd src +git clone https://github.com/tatsuyai713/rosconsole +cd .. +``` + +Build (This could take upto 30 minutes) + +```bash +./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release +``` + +Source it to use it. You may add this at `~/.bashrc` to set it as default. + +```bash +source ~/ros_catkin_ws/install_isolated/setup.bash +``` diff --git a/contents/installation/System-Requirements.md b/contents/installation/System-Requirements.md index 68afba4..daa9add 100644 --- a/contents/installation/System-Requirements.md +++ b/contents/installation/System-Requirements.md @@ -17,7 +17,7 @@ parent: Installation ## Software - Recommended * Ubuntu Desktop 20.04 Focal (64-bit) - * If you're system is other than 20.04 (e.g. 22.04 Jammy Jellyfish), you can use Docker to run the Dave environment which will run 20.04 in docker environment. Check the Docker Requirements below and proceed. + * If you're system is other than 20.04 (e.g. 
22.04 Jammy Jellyfish), you can use Docker to run the Dave environment, which will run 20.04 inside a Docker container. Check the Docker Requirements below and proceed. Installing directly on the host requires much more work.
 
 - Legacy mode
   * Ubuntu Desktop 18.04 Bionic (64-bit)
@@ -27,8 +27,9 @@
 ### Nvidia Driver
 Our tutorials assume you have an Nvidia graphics card configured to use the proprietary driver.
 * There are many online guides for configuring the Nvidia driver correctly on Ubuntu.
-* Here's [one example with graphical and command-line options](https://www.linuxbabe.com/ubuntu/install-nvidia-driver-ubuntu-18-04).
+* Here's [one example with graphical and command-line options](https://linuxhint.com/update-nvidia-drivers-ubuntu-22-04-lts/).
 * You may test your driver installation by running `nvidia-smi` in a terminal. If the command is not found, the driver is not installed correctly. Also test 3D Rendering with `glxgears` which can be used after installing the `mesa-utils` package with `sudo apt install mesa-utils`.
+* The last tested versions for the multibeam sonar are `Ubuntu 22.04`, `NVIDIA nvidia-driver-535`, and `CUDA 12.2`. These are what a default Ubuntu 22.04 install provides, and they can be checked with the `nvidia-smi` command in a terminal. When you change the NVIDIA driver, please restart the machine.
 
 ### Docker
 Running Docker is optional but it's more trustworthy if you are not so familiar with installation process. If you choose to use the container-based installation instructions, the following are required:
@@ -37,8 +38,10 @@ Running Docker is optional but it's more trustworthy
 * To use NVIDIA GPUs with Docker, you will need to install the following:
   * nvidia-container-toolkit ([installation instructions](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html))
     * We are using Docker as a container runtime engine
+    * Make sure to restart Docker after installing nvidia-container-toolkit: `sudo systemctl restart docker`
 * Check [Dockwater prerequisites NVIDIA driver versions](https://github.com/osrf/rocker#nvidia-settings) to see which NVIDIA driver versions are supported for your system Ubuntu versions.
-  * Tested with Ubuntu 22.04 6.2.0-37-generic with nvidia-535 driver worked fine
+  * Tested with the Docker installation method on Ubuntu 22.04 (kernel 6.2.0-37-generic) with the nvidia-535 driver; it worked fine.
 
 ## Peripherals
 We also recommend a gamepad for testing the UUV and its arm and sensor devices. In the examples we use a Logitech F310 ([Walmart](https://www.walmart.com/ip/Logitech-F310-GamePad/16419686)).
\ No newline at end of file
diff --git a/contents/integrated_world/index.md b/contents/integrated_world/index.md
index a9d0fd3..e60b886 100644
--- a/contents/integrated_world/index.md
+++ b/contents/integrated_world/index.md
@@ -32,7 +32,7 @@ Lastly, featuring multiple UUVs teleoperated using separate joysticks and arms c
 
 # Usage
 To launch the demo world
-
+- This takes quite some time to launch since it needs to download models from the [Ignition Fuel](https://app.gazebosim.org/dashboard) online model library.
 ```bash
 roslaunch dave_demo_launch dave_integrated_demo.launch
 ```
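+
+- If the first launch appears to hang, it is usually this model download. As a rough optional check, you can watch the local model caches grow while the models are fetched (these are typical default cache paths and may differ on your setup):
+
+```bash
+# Model caches used by Gazebo classic and Ignition Fuel (typical default locations)
+du -sh ~/.gazebo/models ~/.ignition/fuel 2>/dev/null
+# Re-run this periodically; the reported sizes should grow while models are downloading
+```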