Calibration and Apriltag Configuration
There are a few config files you need to fill out, depending on your specific setup for the robot and the AprilTags.
robot_ws/src/localizer/config/tags.yaml
To set up the camera extrinsics, you will have to obtain t_B_CB (the translation from body to camera, expressed in the body frame) and R_B_C (the rotation matrix from the camera frame to the body frame). These don't have to be super accurate, just good enough. The frames for my setup are defined in the picture below. Note that the frame conventions for the camera and IMU are predetermined, so the only difference would be how you mount one relative to the other. For detailed descriptions of the frames, refer to pages 2 and 3 of https://drive.google.com/file/d/1re9Iy_McZaQV1AxbQdOVJHL6NPVnaKzJ/view?usp=sharing.
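Since t_B_CB and R_B_C are measured by hand, it's worth sanity-checking that R_B_C is at least close to a proper rotation matrix before dropping it into the YAML. Here is a minimal NumPy check (the values are the ones from the example config below); this is just an illustration, not part of the pipeline:

```python
import numpy as np

# Rotation from camera frame to body frame, as entered in tags.yaml.
R_B_C = np.array([[-1.0,  0.0,   0.0],
                  [ 0.0, -0.1,   0.995],
                  [ 0.0,  0.995, 0.1]])

# A proper rotation matrix satisfies R^T R = I and det(R) = +1.
# A loose tolerance is fine since the values are hand-measured.
print(np.allclose(R_B_C.T @ R_B_C, np.eye(3), atol=1e-2))  # orthonormal?
print(np.isclose(np.linalg.det(R_B_C), 1.0, atol=1e-2))    # right-handed?
```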
To set up the AprilTags, first print them. I used the tag36h11 family from https://github.com/AprilRobotics/apriltag-imgs. Make sure each tag's ID is unique when you print them out.
To then enter each tag into the configuration file, you will need to specify its ID, position, and rotation. For id, it's simply the ID of the printed tag. For position, you have to measure the x, y, and z displacement from the origin of the world (assumed to be the robot's starting point) to the center of the AprilTag.
For rotation, you can leave the roll at 90 degrees and the pitch at 0, and adjust the yaw accordingly. For example, if the tag is facing the robot, the yaw is 180 degrees.
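If you want to double-check what a given roll/pitch/yaw entry means as a matrix, you can convert it with SciPy. Note that the Euler order below (intrinsic ZYX, i.e. yaw, then pitch, then roll) is an assumption for illustration; verify it matches the convention the localizer actually uses:

```python
from scipy.spatial.transform import Rotation

def tag_rotation_matrix(roll_deg, pitch_deg, yaw_deg):
    # Assumed convention: intrinsic ZYX (yaw -> pitch -> roll), in degrees.
    return Rotation.from_euler("ZYX", [yaw_deg, pitch_deg, roll_deg],
                               degrees=True).as_matrix()

# A tag facing the robot: roll 90, pitch 0, yaw 180.
print(tag_rotation_matrix(90.0, 0.0, 180.0))
```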
Once you've entered all your tags into the yaml file, it is useful to visualize them. To do this, run the following visualization script to plot the tag locations and orientations in the room:
./run_ros.sh # enter our docker container
cd scripts && python3 visualize_tags.py
# Camera Extrinsics
t_B_CB: [ 0.62, -0.147, 0.01 ] # translation from body to camera in body frame
R_B_C: [ [ -1.0,  0.0,   0.0 ],
         [  0.0, -0.1,   0.995 ],
         [  0.0,  0.995, 0.1 ] ]  # camera nods slightly downwards
num_tags: 4
tag_id_0: 0
tag_position_0: [ 1.05, 0.38, 0.37 ] # x, y, z [m]
tag_rotation_0: [ 90.0, 0.0, 180.0 ] # roll, pitch, yaw [degrees]
tag_id_1: 1
tag_position_1: [ 4.65, 0.13, 0.39 ]
tag_rotation_1: [ 90.0, 0.0, 180.0 ]
tag_id_2: 2
tag_position_2: [ 3.08, 1.92, 0.405 ]
tag_rotation_2: [ 90.0, 0.0, -90.0 ]
tag_id_3: 3
tag_position_3: [ 1.98, 0.61, 0.365 ]
tag_rotation_3: [ 90.0, 0.0, 0.0 ]
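Because the tags are stored as flattened, numbered keys (tag_id_N, tag_position_N, tag_rotation_N) rather than a YAML list, any script reading the file has to reconstruct them by index. A minimal sketch using PyYAML (the path is illustrative; adjust it to where you run the script from):

```python
import yaml

with open("robot_ws/src/localizer/config/tags.yaml") as f:
    cfg = yaml.safe_load(f)

# Rebuild the per-tag entries from the numbered keys.
tags = [
    {
        "id": cfg[f"tag_id_{i}"],
        "position": cfg[f"tag_position_{i}"],  # x, y, z [m]
        "rotation": cfg[f"tag_rotation_{i}"],  # roll, pitch, yaw [deg]
    }
    for i in range(cfg["num_tags"])
]

for tag in tags:
    print(tag)
```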
Note that you can also run the whole pipeline without AprilTags; just set num_tags to 0. Your odometry will then accumulate error the longer the robot runs around, but overall it's pretty stable and should be good for a dozen meters or so.
robot_ws/src/image_collector/config/image_collector_config.yaml
You can leave the defaults for pretty much everything here. However, useful parameters to tweak include capture_period and blur_thres, which affect how the images are collected.
image_collector_node:
  ros__parameters:
    capture_period: 1.0  # controls the period at which images are captured
    dataset_dir: /data/datasets/image_collector/train
    blurry_dir: /data/datasets/image_collector/blurry
    stamps_dir: /data/datasets/image_collector/stamps
    save_imgs: True
    blur_thres: 50.0  # lower makes the filter less strict (i.e. tolerates more blur in train)
    wipe_prev: True  # wipe previously collected images
    image_topic: /image_raw
    odom_topic: /odom
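The config doesn't spell out how blur_thres is applied, but the comment suggests a sharpness score where frames below the threshold get routed to blurry_dir. A common way to score sharpness is the variance of the Laplacian; the sketch below shows that idea with OpenCV and is only an assumption about what image_collector_node does internally:

```python
import cv2

def is_sharp_enough(image_bgr, blur_thres=50.0):
    # Variance of the Laplacian: blurry frames have few edges, hence low variance.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    return score >= blur_thres

# Example: check whether a saved frame would pass the filter.
img = cv2.imread("frame.png")
print(is_sharp_enough(img, blur_thres=50.0))
```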
visualize_tags.py
On L70-72 of the script, you may have to play around with the room size, depending on the environment you are deploying your robot into:
...
room_size = [5, 3]  # [y extent, x extent] in meters
ax.set_xlim([-1, room_size[1]])  # x-axis spans the room's width
ax.set_ylim([-1, room_size[0]])  # y-axis spans the room's length
...
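For reference, a stripped-down version of what such a plot does with those limits might look like the sketch below. This is not the actual script, and the tag entries are made up; it just draws tag positions and yaw arrows in the x-y plane:

```python
import math
import matplotlib.pyplot as plt

# Hypothetical (x, y, yaw in degrees) tag entries for illustration.
tags = [(1.0, 0.4, 180.0), (2.5, 4.2, -90.0)]

fig, ax = plt.subplots()
room_size = [5, 3]                # [y extent, x extent], as in visualize_tags.py
ax.set_xlim([-1, room_size[1]])
ax.set_ylim([-1, room_size[0]])
for x, y, yaw in tags:
    ax.plot(x, y, "rs")           # tag position
    ax.arrow(x, y,
             0.3 * math.cos(math.radians(yaw)),
             0.3 * math.sin(math.radians(yaw)),
             head_width=0.08)     # tag facing direction
ax.set_aspect("equal")
plt.show()
```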