
Fix: /tf and /static_tf topics' inconsistencies #2676

Merged
14 changes: 11 additions & 3 deletions README.md
Original file line number Diff line number Diff line change
@@ -222,7 +222,12 @@
- For example: ```depth_qos:=SENSOR_DATA```
- Reference: [ROS2 QoS profiles formal documentation](https://docs.ros.org/en/rolling/Concepts/About-Quality-of-Service-Settings.html#qos-profiles)
- **Notice:** ***<stream_type>*_info_qos** refers to both camera_info topics and metadata topics.
- **tf_publish_rate**: double, positive values mean dynamic transform publication with specified rate, all other values mean static transform publication. Defaults to 0
- **tf_publish_rate**:
- double, rate (in Hz) at which dynamic transforms are published
- Defaults to 0.0 Hz *(no dynamic TF published)*
- This parameter also depends on **publish_tf**
- If **publish_tf:=false**, no TFs are published at all, even if **tf_publish_rate** is >0.0 Hz
- If **publish_tf:=true** and **tf_publish_rate** is set to >0.0 Hz, dynamic TFs are published at the specified rate

#### Parameters that cannot be changed in runtime:
- **serial_no**:
@@ -277,8 +282,11 @@
- **linear_accel_cov**, **angular_velocity_cov**: sets the variance given to the Imu readings.
- **hold_back_imu_for_frames**: Image processing takes time. Therefore, there is a time gap between the moment an image arrives at the wrapper and the moment it is published to the ROS environment. During this time, Imu messages keep arriving, creating a situation where an image with an earlier timestamp is published after an Imu message with a later timestamp. If that is a problem, setting *hold_back_imu_for_frames* to *true* will hold the Imu messages back while the images are processed and then publish them all in a burst, keeping the order of publication the same as the order of arrival. Note that in either case, the timestamp in each message's header reflects the time of its origin.
- **publish_tf**:
- boolean, publish or not TF at all.
- Defaults to True.
- boolean, enable/disable publishing both static and dynamic TFs
- Defaults to True
- So, static TFs are published by default
- If dynamic TFs are needed, set **tf_publish_rate** to >0.0 Hz
- If set to false, neither static nor dynamic TFs are published, even if **tf_publish_rate** is >0.0 Hz
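The interplay between **publish_tf** and **tf_publish_rate** described in these parameter notes can be summarized as a small decision function (illustrative Python sketch, not part of the wrapper; the function name is made up):

```python
def tf_publication_mode(publish_tf: bool, tf_publish_rate: float) -> str:
    """Model the documented interaction of publish_tf and tf_publish_rate."""
    if not publish_tf:
        # publish_tf:=false disables all TFs, regardless of tf_publish_rate
        return "none"
    if tf_publish_rate > 0.0:
        # static TFs plus dynamic TFs published at tf_publish_rate Hz
        return "static+dynamic"
    # defaults (publish_tf:=true, tf_publish_rate:=0.0): static TFs only
    return "static"

print(tf_publication_mode(True, 0.0))     # static
print(tf_publication_mode(True, 10.0))    # static+dynamic
print(tf_publication_mode(False, 10.0))   # none
```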
- **diagnostics_period**:
- double, positive values set the period between diagnostics updates on the `/diagnostics` topic.
- 0 or negative values mean no diagnostics topic is published. Defaults to 0.<br/>
1 change: 1 addition & 0 deletions realsense2_camera/CMakeLists.txt
@@ -138,6 +138,7 @@ set(SOURCES
src/named_filter.cpp
src/profile_manager.cpp
src/image_publisher.cpp
src/tfs.cpp
)

if(NOT DEFINED ENV{ROS_DISTRO})
10 changes: 7 additions & 3 deletions realsense2_camera/include/base_realsense_node.h
@@ -155,16 +155,20 @@ namespace realsense2_camera
std::shared_ptr<Parameters> _parameters;
std::list<std::string> _parameters_names;

void restartStaticTransformBroadcaster();
void publishExtrinsicsTopic(const stream_index_pair& sip, const rs2_extrinsics& ex);
void calcAndPublishStaticTransform(const rs2::stream_profile& profile, const rs2::stream_profile& base_profile);
void calcAndAppendTransformMsgs(const rs2::stream_profile& profile, const rs2::stream_profile& base_profile);
void getDeviceInfo(const realsense2_camera_msgs::srv::DeviceInfo::Request::SharedPtr req,
realsense2_camera_msgs::srv::DeviceInfo::Response::SharedPtr res);
tf2::Quaternion rotationMatrixToQuaternion(const float rotation[9]) const;
void publish_static_tf(const rclcpp::Time& t,
void append_static_tf_msg(const rclcpp::Time& t,
const float3& trans,
const tf2::Quaternion& q,
const std::string& from,
const std::string& to);
void erase_static_tf_msg(const std::string& frame_id,
const std::string& child_frame_id);
void eraseTransformMsgs(const stream_index_pair& sip, const rs2::stream_profile& profile);
void setup();

private:
@@ -200,7 +204,7 @@
void updateExtrinsicsCalibData(const rs2::video_stream_profile& left_video_profile, const rs2::video_stream_profile& right_video_profile);
void updateStreamCalibData(const rs2::video_stream_profile& video_profile);
void SetBaseStream();
void publishStaticTransforms(std::vector<rs2::stream_profile> profiles);
void publishStaticTransforms();
void startDynamicTf();
void publishDynamicTransforms();
void publishPointCloud(rs2::points f, const rclcpp::Time& t, const rs2::frameset& frameset);
2 changes: 2 additions & 0 deletions realsense2_camera/launch/rs_intra_process_demo_launch.py
@@ -54,6 +54,8 @@
{'name': 'enable_gyro', 'default': 'false', 'description': "enable gyro stream"},
{'name': 'enable_accel', 'default': 'false', 'description': "enable accel stream"},
{'name': 'intra_process_comms', 'default': 'true', 'description': "enable intra-process communication"},
{'name': 'publish_tf', 'default': 'true', 'description': '[bool] enable/disable publishing static & dynamic TF'},
{'name': 'tf_publish_rate', 'default': '0.0', 'description': '[double] rate in Hz for publishing dynamic TF'},
]

def declare_configurable_parameters(parameters):
3 changes: 2 additions & 1 deletion realsense2_camera/launch/rs_launch.py
@@ -58,7 +58,8 @@
{'name': 'initial_reset', 'default': 'false', 'description': "''"},
{'name': 'allow_no_texture_points', 'default': 'false', 'description': "''"},
{'name': 'pointcloud.ordered_pc', 'default': 'false', 'description': ''},
{'name': 'tf_publish_rate', 'default': '0.0', 'description': 'Rate of publishing static_tf'},
{'name': 'publish_tf', 'default': 'true', 'description': '[bool] enable/disable publishing static & dynamic TF'},
{'name': 'tf_publish_rate', 'default': '0.0', 'description': '[double] rate in Hz for publishing dynamic TF'},
{'name': 'diagnostics_period', 'default': '0.0', 'description': 'Rate of publishing diagnostics. 0=Disabled'},
{'name': 'decimation_filter.enable', 'default': 'false', 'description': 'Enable decimation filter'},
{'name': 'rosbag_filename', 'default': "''", 'description': 'A realsense bagfile to run from as a device'},
265 changes: 0 additions & 265 deletions realsense2_camera/src/base_realsense_node.cpp
@@ -16,7 +16,6 @@
#include "assert.h"
#include <algorithm>
#include <mutex>
#include <geometry_msgs/msg/vector3_stamped.hpp>
#include <rclcpp/clock.hpp>
#include <fstream>
#include <image_publisher.h>
@@ -121,15 +120,6 @@ BaseRealSenseNode::BaseRealSenseNode(rclcpp::Node& node,
ROS_INFO("Intra-Process communication enabled");
}

// intra-process do not support latched QoS, so we need to disable intra-process for this topic
rclcpp::PublisherOptionsWithAllocator<std::allocator<void>> options;
options.use_intra_process_comm = rclcpp::IntraProcessSetting::Disable;
#ifndef DASHING
_static_tf_broadcaster = std::make_shared<tf2_ros::StaticTransformBroadcaster>(node, tf2_ros::StaticBroadcasterQoS(), std::move(options));
#else
_static_tf_broadcaster = std::make_shared<tf2_ros::StaticTransformBroadcaster>(node, rclcpp::QoS(100), std::move(options));
#endif

_image_format[1] = CV_8UC1; // CVBridge type
_image_format[2] = CV_16UC1; // CVBridge type
_image_format[3] = CV_8UC3; // CVBridge type
@@ -470,90 +460,6 @@ void BaseRealSenseNode::imu_callback(rs2::frame frame)
publishMetadata(frame, t, OPTICAL_FRAME_ID(stream_index));
}

void BaseRealSenseNode::pose_callback(rs2::frame frame)
{
double frame_time = frame.get_timestamp();
bool placeholder_false(false);
if (_is_initialized_time_base.compare_exchange_strong(placeholder_false, true) )
{
_is_initialized_time_base = setBaseTime(frame_time, frame.get_frame_timestamp_domain());
}

ROS_DEBUG("Frame arrived: stream: %s ; index: %d ; Timestamp Domain: %s",
rs2_stream_to_string(frame.get_profile().stream_type()),
frame.get_profile().stream_index(),
rs2_timestamp_domain_to_string(frame.get_frame_timestamp_domain()));
rs2_pose pose = frame.as<rs2::pose_frame>().get_pose_data();
rclcpp::Time t(frameSystemTimeSec(frame));

geometry_msgs::msg::PoseStamped pose_msg;
pose_msg.pose.position.x = -pose.translation.z;
pose_msg.pose.position.y = -pose.translation.x;
pose_msg.pose.position.z = pose.translation.y;
pose_msg.pose.orientation.x = -pose.rotation.z;
pose_msg.pose.orientation.y = -pose.rotation.x;
pose_msg.pose.orientation.z = pose.rotation.y;
pose_msg.pose.orientation.w = pose.rotation.w;

static tf2_ros::TransformBroadcaster br(_node);
geometry_msgs::msg::TransformStamped msg;
msg.header.stamp = t;
msg.header.frame_id = DEFAULT_ODOM_FRAME_ID;
msg.child_frame_id = FRAME_ID(POSE);
msg.transform.translation.x = pose_msg.pose.position.x;
msg.transform.translation.y = pose_msg.pose.position.y;
msg.transform.translation.z = pose_msg.pose.position.z;
msg.transform.rotation.x = pose_msg.pose.orientation.x;
msg.transform.rotation.y = pose_msg.pose.orientation.y;
msg.transform.rotation.z = pose_msg.pose.orientation.z;
msg.transform.rotation.w = pose_msg.pose.orientation.w;

if (_publish_odom_tf) br.sendTransform(msg);

if (0 != _odom_publisher->get_subscription_count())
{
double cov_pose(_linear_accel_cov * pow(10, 3-(int)pose.tracker_confidence));
double cov_twist(_angular_velocity_cov * pow(10, 1-(int)pose.tracker_confidence));

geometry_msgs::msg::Vector3Stamped v_msg;
tf2::Vector3 tfv(-pose.velocity.z, -pose.velocity.x, pose.velocity.y);
tf2::Quaternion q(-msg.transform.rotation.x,-msg.transform.rotation.y,-msg.transform.rotation.z,msg.transform.rotation.w);
tfv=tf2::quatRotate(q,tfv);
v_msg.vector.x = tfv.x();
v_msg.vector.y = tfv.y();
v_msg.vector.z = tfv.z();

tfv = tf2::Vector3(-pose.angular_velocity.z, -pose.angular_velocity.x, pose.angular_velocity.y);
tfv=tf2::quatRotate(q,tfv);
geometry_msgs::msg::Vector3Stamped om_msg;
om_msg.vector.x = tfv.x();
om_msg.vector.y = tfv.y();
om_msg.vector.z = tfv.z();

nav_msgs::msg::Odometry odom_msg;

odom_msg.header.frame_id = DEFAULT_ODOM_FRAME_ID;
odom_msg.child_frame_id = FRAME_ID(POSE);
odom_msg.header.stamp = t;
odom_msg.pose.pose = pose_msg.pose;
odom_msg.pose.covariance = {cov_pose, 0, 0, 0, 0, 0,
0, cov_pose, 0, 0, 0, 0,
0, 0, cov_pose, 0, 0, 0,
0, 0, 0, cov_twist, 0, 0,
0, 0, 0, 0, cov_twist, 0,
0, 0, 0, 0, 0, cov_twist};
odom_msg.twist.twist.linear = v_msg.vector;
odom_msg.twist.twist.angular = om_msg.vector;
odom_msg.twist.covariance ={cov_pose, 0, 0, 0, 0, 0,
0, cov_pose, 0, 0, 0, 0,
0, 0, cov_pose, 0, 0, 0,
0, 0, 0, cov_twist, 0, 0,
0, 0, 0, 0, cov_twist, 0,
0, 0, 0, 0, 0, cov_twist};
_odom_publisher->publish(odom_msg);
ROS_DEBUG("Publish %s stream", rs2_stream_to_string(frame.get_profile().stream_type()));
}
}

void BaseRealSenseNode::frame_callback(rs2::frame frame)
{
@@ -843,114 +749,6 @@ void BaseRealSenseNode::updateExtrinsicsCalibData(const rs2::video_stream_profil
_camera_info[right].p.at(7) = -fy * ex.translation[1] + 0.0; // Ty - avoid -0.0 values.
}

tf2::Quaternion BaseRealSenseNode::rotationMatrixToQuaternion(const float rotation[9]) const
{
Eigen::Matrix3f m;
// We need to be careful about the order, as RS2 rotation matrix is
// column-major, while Eigen::Matrix3f expects row-major.
m << rotation[0], rotation[3], rotation[6],
rotation[1], rotation[4], rotation[7],
rotation[2], rotation[5], rotation[8];
Eigen::Quaternionf q(m);
return tf2::Quaternion(q.x(), q.y(), q.z(), q.w());
}
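The column-major caveat in the helper above can be demonstrated without Eigen: reading the flat librealsense rotation array in column-major order gives the same matrix as transposing a row-major read (plain-Python sketch; helper names are illustrative):

```python
def col_major_3x3(a):
    """Interpret a flat 9-element array as a column-major 3x3 matrix
    (the layout librealsense uses for rs2_extrinsics.rotation)."""
    return [[a[0], a[3], a[6]],
            [a[1], a[4], a[7]],
            [a[2], a[5], a[8]]]

def row_major_3x3(a):
    """Interpret the same array row-major (the order a naive
    left-to-right fill would assume)."""
    return [[a[0], a[1], a[2]],
            [a[3], a[4], a[5]],
            [a[6], a[7], a[8]]]

def transpose(m):
    return [list(row) for row in zip(*m)]

a = list(range(9))
print(col_major_3x3(a) == transpose(row_major_3x3(a)))  # True
```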

void BaseRealSenseNode::publish_static_tf(const rclcpp::Time& t,
const float3& trans,
const tf2::Quaternion& q,
const std::string& from,
const std::string& to)
{
geometry_msgs::msg::TransformStamped msg;
msg.header.stamp = t;
msg.header.frame_id = from;
msg.child_frame_id = to;

// Convert x,y,z (taken from camera extrinsics)
// from optical cooridnates to ros coordinates
msg.transform.translation.x = trans.z;
msg.transform.translation.y = -trans.x;
msg.transform.translation.z = -trans.y;

msg.transform.rotation.x = q.getX();
msg.transform.rotation.y = q.getY();
msg.transform.rotation.z = q.getZ();
msg.transform.rotation.w = q.getW();
_static_tf_msgs.push_back(msg);
}
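The translation remapping in the function above sends camera optical coordinates (x right, y down, z forward) to ROS body coordinates (x forward, y left, z up). A minimal sketch of that mapping (illustrative Python, not the wrapper's code):

```python
def optical_to_ros(trans):
    """Remap a translation from camera optical coordinates (x right,
    y down, z forward) to ROS coordinates (x forward, y left, z up)."""
    x, y, z = trans
    return (z, -x, -y)

# A point 1 m in front of the camera, 0.2 m to its right, 0.1 m below it:
print(optical_to_ros((0.2, 0.1, 1.0)))  # (1.0, -0.2, -0.1)
```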

void BaseRealSenseNode::publishExtrinsicsTopic(const stream_index_pair& sip, const rs2_extrinsics& ex){
// Publish extrinsics topic:
Extrinsics msg = rsExtrinsicsToMsg(ex);
if (_extrinsics_publishers.find(sip) != _extrinsics_publishers.end())
{
_extrinsics_publishers[sip]->publish(msg);
}
}

void BaseRealSenseNode::calcAndPublishStaticTransform(const rs2::stream_profile& profile, const rs2::stream_profile& base_profile)
{
// Transform base to stream
stream_index_pair sip(profile.stream_type(), profile.stream_index());
tf2::Quaternion quaternion_optical;
quaternion_optical.setRPY(-M_PI / 2, 0.0, -M_PI / 2);
float3 zero_trans{0, 0, 0};
tf2::Quaternion zero_rot_quaternions;
zero_rot_quaternions.setRPY(0, 0, 0);

rclcpp::Time transform_ts_ = _node.now();

// extrinsic from A to B is the position of A relative to B
// TF from A to B is the transformation to be done on A to get to B
// so, we need to calculate extrinsics in two opposite ways, one for extrinsic topic
// and the second is for transformation topic (TF)
rs2_extrinsics normal_ex; // used to for extrinsics topic
rs2_extrinsics tf_ex; // used for TF

try
{
normal_ex = base_profile.get_extrinsics_to(profile);
tf_ex = profile.get_extrinsics_to(base_profile);
}
catch (std::exception& e)
{
if (!strcmp(e.what(), "Requested extrinsics are not available!"))
{
ROS_WARN_STREAM("(" << rs2_stream_to_string(base_profile.stream_type()) << ", " << base_profile.stream_index() << ") -> (" << rs2_stream_to_string(profile.stream_type()) << ", " << profile.stream_index() << "): " << e.what() << " : using unity as default.");
normal_ex = rs2_extrinsics({{1, 0, 0, 0, 1, 0, 0, 0, 1}, {0,0,0}});
tf_ex = normal_ex;
}
else
{
throw e;
}
}

// publish normal extrinsics e.g. /camera/extrinsics/depth_to_color
publishExtrinsicsTopic(sip, normal_ex);

// publish static TF
auto Q = rotationMatrixToQuaternion(tf_ex.rotation);
Q = quaternion_optical * Q * quaternion_optical.inverse();
float3 trans{tf_ex.translation[0], tf_ex.translation[1], tf_ex.translation[2]};
publish_static_tf(transform_ts_, trans, Q, _base_frame_id, FRAME_ID(sip));

// Transform stream frame to stream optical frame and publish it
publish_static_tf(transform_ts_, zero_trans, quaternion_optical, FRAME_ID(sip), OPTICAL_FRAME_ID(sip));

if (profile.is<rs2::video_stream_profile>() && profile.stream_type() != RS2_STREAM_DEPTH && profile.stream_index() == 1)
{
publish_static_tf(transform_ts_, trans, Q, _base_frame_id, ALIGNED_DEPTH_TO_FRAME_ID(sip));
publish_static_tf(transform_ts_, zero_trans, quaternion_optical, ALIGNED_DEPTH_TO_FRAME_ID(sip), OPTICAL_FRAME_ID(sip));
}

if ((_imu_sync_method > imu_sync_method::NONE) && (profile.stream_type() == RS2_STREAM_GYRO))
{
publish_static_tf(transform_ts_, zero_trans, zero_rot_quaternions, FRAME_ID(sip), IMU_FRAME_ID);
publish_static_tf(transform_ts_, zero_trans, quaternion_optical, IMU_FRAME_ID, IMU_OPTICAL_FRAME_ID);
}
}

void BaseRealSenseNode::SetBaseStream()
{
const std::vector<stream_index_pair> base_stream_priority = {DEPTH, POSE};
@@ -981,69 +779,6 @@ void BaseRealSenseNode::SetBaseStream()
_base_profile = available_profiles[*base_stream];
}

void BaseRealSenseNode::publishStaticTransforms(std::vector<rs2::stream_profile> profiles)
{
// Publish static transforms
if (_publish_tf)
{
for (auto &profile : profiles)
{
calcAndPublishStaticTransform(profile, _base_profile);
}
if (_static_tf_broadcaster)
_static_tf_broadcaster->sendTransform(_static_tf_msgs);
}
}

void BaseRealSenseNode::startDynamicTf()
{
if (_tf_publish_rate > 0)
{
ROS_WARN("Publishing dynamic camera transforms (/tf) at %g Hz", _tf_publish_rate);
if (!_tf_t)
{
_dynamic_tf_broadcaster = std::make_shared<tf2_ros::TransformBroadcaster>(_node);
_tf_t = std::make_shared<std::thread>([this]()
{
publishDynamicTransforms();
});
}
}
else
{
if (_tf_t && _tf_t->joinable())
{
_tf_t->join();
_tf_t.reset();
_dynamic_tf_broadcaster.reset();
}
}
}

void BaseRealSenseNode::publishDynamicTransforms()
{
// Publish transforms for the cameras
std::unique_lock<std::mutex> lock(_publish_dynamic_tf_mutex);
while (rclcpp::ok() && _is_running && _tf_publish_rate > 0)
{
_cv_tf.wait_for(lock, std::chrono::milliseconds((int)(1000.0/_tf_publish_rate)), [&]{return (!(_is_running && _tf_publish_rate > 0));});
{
std::lock_guard<std::mutex> lock_guard(_publish_tf_mutex);
rclcpp::Time t = _node.now();
try
{
for(auto& msg : _static_tf_msgs)
msg.header.stamp = t;
_dynamic_tf_broadcaster->sendTransform(_static_tf_msgs);
}
catch(const std::exception& e)
{
ROS_ERROR_STREAM("Error publishing dynamic transforms: " << e.what());
}
}
}
}
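The loop above republishes the cached transform list every `1000/_tf_publish_rate` milliseconds, using `wait_for` on a condition variable so the thread wakes immediately on shutdown instead of sleeping out the full period. The same interruptible periodic-loop pattern can be sketched in Python with `threading.Event` (names here are illustrative):

```python
import threading

def run_periodic(publish, rate_hz, stop_event):
    """Call publish() roughly every 1/rate_hz seconds until stop_event
    is set. Event.wait() returns True, ending the loop, as soon as the
    event is set -- mirroring condition_variable::wait_for in the C++."""
    period = 1.0 / rate_hz
    while not stop_event.wait(timeout=period):
        publish()

ticks = []
stop = threading.Event()
worker = threading.Thread(target=run_periodic,
                          args=(lambda: ticks.append(1), 20.0, stop))
worker.start()
threading.Event().wait(0.3)  # let a few periods elapse
stop.set()                   # wakes the worker immediately
worker.join()
print(len(ticks) > 0)  # True
```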

void BaseRealSenseNode::publishPointCloud(rs2::points pc, const rclcpp::Time& t, const rs2::frameset& frameset)
{
std::string frame_id = (_align_depth_filter->is_enabled() ? OPTICAL_FRAME_ID(COLOR) : OPTICAL_FRAME_ID(DEPTH));
1 change: 0 additions & 1 deletion realsense2_camera/src/parameters.cpp
@@ -37,7 +37,6 @@ void BaseRealSenseNode::getParameters()
startDynamicTf();
});
_parameters_names.push_back(param_name);
startDynamicTf();
**Collaborator:**

Does this change the behavior from default start publishing to not?

**Contributor Author:**

When the node is started, `startDynamicTf()` is getting called twice, so its log prints also appear twice *(screenshot omitted)*. With this change, it will be called only once.

Basic testing is done. Will test for corner cases as well.


param_name = std::string("diagnostics_period");
_diagnostics_period = _parameters->setParam<double>(param_name, DIAGNOSTICS_PERIOD);