VI-Sensor Simulator

A simulator for the ASL VI-Sensor, built on the RotorS simulator and Blender. No guarantees.

This work is described in the letter "Aerial Single-View Depth Completion with Image-Guided Uncertainty Estimation" by Lucas Teixeira, Martin R. Oswald, Marc Pollefeys, and Margarita Chli, published in IEEE Robotics and Automation Letters (RA-L).


Citations:

If you use this code for research, please cite the following publication:

@article{Teixeira:etal:RAL2020,
    title   = {{Aerial Single-View Depth Completion with Image-Guided Uncertainty Estimation}},
    author  = {Lucas Teixeira and Martin R. Oswald and Marc Pollefeys and Margarita Chli},
    journal = {{IEEE} Robotics and Automation Letters ({RA-L})},
    doi     = {10.1109/LRA.2020.2967296},
    year    = {2020}
}

License

This repository contains copyrighted code from other libraries; this is mostly noted in the source code. Our original code is released under the BSD license.

Installation

  • Create and configure a catkin workspace
  $ mkdir -p ~/catkin_ws/src
  $ cd ~/catkin_ws
  $ source /opt/ros/melodic/setup.bash
  $ catkin init  # initialize your catkin workspace
  $ catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release
  $ catkin config --merge-devel
  • Get the simulator and dependencies
  $ cd ~/catkin_ws/src
  $ sudo apt-get install liblapacke-dev python-wstool python-catkin-tools protobuf-compiler libgoogle-glog-dev libopenexr-dev libatlas-base-dev libeigen3-dev libsuitesparse-dev
  $ sudo apt-get install ros-melodic-joy ros-melodic-octomap-ros
  $ git clone git@github.com:catkin/catkin_simple
  $ git clone git@github.com:ethz-asl/rotors_simulator
  $ git clone git@github.com:ethz-asl/mav_comm
  $ git clone git@github.com:ethz-asl/eigen_catkin
  $ git clone git@github.com:ethz-asl/glog_catkin
  $ git clone git@github.com:ethz-asl/mav_control_rw
  $ pip install OpenEXR
  
  $ git clone git@github.com:VIS4ROB-lab/visensor_simulator.git -b blender_2.8  

  • Build the workspace
  $ catkin build
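
After the build, source the workspace overlay so the newly built packages are on your ROS path (standard catkin workflow; adjust the path if your workspace differs):

  $ source ~/catkin_ws/devel/setup.bash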

Step-by-step

  1. Create a project: a folder with any name. Configure the cameras and the waypoints. An example project ("project_test.tar") is provided in the resources folder; a sketch of unpacking it follows below.
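
  A minimal sketch of unpacking the example project, assuming the repository was cloned to ~/catkin_ws/src/visensor_simulator and that projects live under ~/data/test (both paths are illustrative):

  $ mkdir -p ~/data/test
  $ tar -xf ~/catkin_ws/src/visensor_simulator/resources/project_test.tar -C ~/data/test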

  2. Start Gazebo:

  $ roslaunch visensor_simulator uav_vi_blender.launch
  3. Run the ROS backend:
  $ roslaunch visensor_simulator ros_backend.launch project_folder:="/home/lucas/data/test/project_testA"
  4. Open your scene in Blender and select the camera, then go to File->Import->VISensor Simulator Project (*.json) and choose the visim_project.json file in your project folder.

  5. Render. Quick render is faster, but less realistic.

  6. Run the bagcreator (the namespace argument is optional):

  $ rosrun visensor_simulator visensor_sim_bagcreator.py --output_bag your_output.bag --project_folder "/home/lucas/data/test/project_testA" --namespace "firefly"
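
  To verify the generated bag, the standard rosbag tooling lists its topics and message counts:

  $ rosbag info your_output.bag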

Roadmap

  • write a camera path exporter compatible with our waypoint planner, ideally integrated into our dataset format
  • change from firefly to neo11
  • develop software to build a simplified version of the world to allow collision detection in the simulation; BVH and Octomap are options
  • better error messages from the bagcreator when the bagfile cannot be created
  • expose the simple_planner waypoint tolerance as ROS parameters
  • write my own spawn with noise and vi_sensor pose as parameters
  • add the IMU name to the JSON file
  • autoselect a camera from the JSON file
  • add an option to disable the simple planner
  • detect an incomplete render sequence and resume from the latest frame (to support recovery after a shutdown)
  • make everything relative to the project file instead of the project folder
  • add tests for topic names in the bagcreator