
Feature/point pillars #2029

Merged
merged 39 commits into develop
Mar 8, 2019

Conversation

@k0suke-murakami commented Feb 26, 2019

Status

DEVELOPMENT

Description

[image]
Detection results are visualized with the "KITTI Viewer Web" tool from @traveller59's repo.

Implemented the new PointPillars feature. paper
This is a neural-network algorithm for 3D bounding-box detection from point cloud data.
Good accuracy has been demonstrated on the KITTI object detection benchmark. KITTI link

It is also the fastest algorithm on the KITTI benchmark.
According to the paper, it runs in 16 ms.
By writing CUDA code for both the preprocessing and postprocessing stages, this implementation achieves a 12~15 ms runtime.

Tested on the Shinjuku video and the Nisshin video.

Related issue

autowarefoundation/autoware_ai#534

Todos

  • Tests
  • Documentation

Interface

Input: /points_raw [sensor_msgs::PointCloud2]
Output: /detection/lidar_detector/objects [autoware_msgs::DetectedObjectArray]
Output: /detection/lidar_detector/objects_markers [visualization_msgs::MarkerArray]

How to test

catkin_make run_tests_lidar_point_pillars_gtest

There is only a small amount of test code because some of the CUDA code produces slightly different output for the same input across runs.

How to launch

Dependency

  • CUDA 9.0 or 10.0
  • TensorRT: Tested with 5.0.2
  • A pretrained model can be downloaded from this repo
  • See the README.md for more details.

Launch

roslaunch lidar_point_pillars lidar_point_pillars.launch pfe_onnx_file:=/PATH/TO/FILE.onnx rpn_onnx_file:=/PATH/TO/FILE.onnx

Alternatively, you can launch it from the Runtime Manager's Computing tab.

You can see the visualized output by subscribing to /detection/lidar_detector/objects_markers.

@yukkysaito (Contributor)

Other deep-learning-based detection packages manage their setups individually too. Can you create a scripts directory and a setup script? (I would like to manage these with Ansible etc. in the future.)

@amc-nu (Member) left a comment

terrific

@k0suke-murakami k0suke-murakami merged commit 105bbe4 into develop Mar 8, 2019
@k0suke-murakami k0suke-murakami deleted the feature/point_pillars branch March 8, 2019 06:09
* To display the results in Rviz, `objects_visualizer` is required.
(The launch file starts this node automatically.)

* Pretrained models are available [here](https://github.com/cirpue49/kitti_pretrained_pp), trained with the help of the KITTI dataset. For this reason, they are not suitable for commercial purposes. Derivative works are bound by the CC BY-NC-SA 3.0 license (https://creativecommons.org/licenses/by-nc-sa/3.0/).


@gbiggs @esteve @kfunaoka this package uses a pre-trained network that is not for commercial purposes.

I think that we need to find a way to clearly mark such packages, otherwise it will be difficult to find them later and it will also cause confusion with Autoware users trying to commercialize such packages.


Yes, I agree. I'll add it to my notes for the documentation template.

I think we also need a way for a user to easily select between using a non-commercial pre-trained network that we provide and a network they provide themselves (as well as a good method and documentation for how to train a network). This would improve the usefulness of Autoware without placing a burden on us to provide trained networks. I'll add this to my increasing pile of notes on what Autoware should be. :)

@kargarisaac

Hi,
What is the trained model for? Cars, people, or something else? In the paper, they mention training two models: one for cars and one for pedestrians/cyclists.
If you use one model for all classes, what are the parameter values (thresholds, etc.)?
Thank you

@kargarisaac

@k0suke-murakami
I tested this using the sample_moriyama_150324.bag file and Docker but got nothing. Do I need to install TensorRT inside Docker? It would be great to write down a pipeline for running it inside or outside of Docker. I tried to install the point_pillars package in my ROS workspace, but have not succeeded yet.

anubhavashok pushed a commit to NuronLabs/autoware.ai that referenced this pull request Sep 7, 2021
@mitsudome-r mitsudome-r added the version:autoware-ai Autoware.AI label Jun 14, 2022
8 participants