Rosbag Camera Info viewing and Multicam #9846
Hi @parthapn98
Whilst I do not have a specific script example for querying a bag's serial number with Python, my assumption would be that once you specify in your Python script that the data source is a bag file instead of a live camera with enable_device_from_file, you should be able to use the same code that is used to read the serial number of a live camera. The link below has a simple example of a Python script for printing the serial number of multiple RealSense cameras.
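For illustration, here is a minimal sketch of that approach, assuming the bag was recorded by the SDK so that the device info (including the serial number) is stored in the file; the file path is a placeholder:

```python
import pyrealsense2 as rs

config = rs.config()
# Use the bag file as the data source instead of a live camera.
config.enable_device_from_file("recording.bag", repeat_playback=False)

pipeline = rs.pipeline()
profile = pipeline.start(config)

# The playback device exposes the same camera_info fields as a live device,
# so the serial number can be read from it directly.
device = profile.get_device()
print(device.get_info(rs.camera_info.serial_number))

pipeline.stop()
```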
|
Hi Marty, thanks for the response. I have already tried it that way and specified the following:
rs.config.enable_device_from_file(config, args.input, repeat_playback=False)
# Configure the pipeline to stream the depth stream
# Change these parameters according to the recorded bag file resolution
config.enable_stream(rs.stream.depth, rs.format.z16, 30)
config.enable_stream(rs.stream.color, rs.format.rgb8, 30)
# Start streaming from file
profile = pipeline.start(config)
playback = profile.get_device().as_playback()
playback.set_real_time(False)
# Create opencv window to render image in
cv2.namedWindow("Depth Stream", cv2.WINDOW_AUTOSIZE)
# Create colorizer object
colorizer = rs.colorizer()
ctx = rs.context()
for d in ctx.devices:
    inf = d.get_info(rs.camera_info.serial_number)
    print(inf)
This, however, only works if the camera is connected and throws an error otherwise. Thanks for the link to the second topic. |
Another RealSense user who was saving two cameras in one bag used the code below for reading the serial number from a bag file. It partly worked for them but would only print one serial number.
I would therefore consider the possibility that, although multiple cameras are attached, only one of the cameras has its serial number stored in the bag because the data is being merged into a single bag file. |
My understanding is that when playing back a bag file using scripting, if you define cfg configurations for the streams then they have to match the content of the specific recorded bag that is being played back. For example, if you recorded a bag file in the Viewer with depth only in it then there should only be a cfg definition for depth in the bag playback script. |
Hi Marty, We think we've narrowed down the issue causing this for us.
It is similar to issues 9695 and 5848, but we would prefer not to manually intervene with individual files.
But it does not solve the issue. // Did not attempt with the same pipe1 for both, as pipe.start() starts the file writer as well |
#7932 provides an example of Python code for skipping the first 5 frames during playback in the script under the Loading Code heading. You could certainly change '5' to '3'.
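As a rough sketch of that idea (not the exact code from #7932), skipping the first few framesets after the pipeline starts might look like this:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # live camera or bag playback, as configured elsewhere

# Discard the first few framesets so that auto-exposure can settle
# before any frames are processed or recorded.
FRAMES_TO_SKIP = 3  # '5' in the linked example
for _ in range(FRAMES_TO_SKIP):
    pipeline.wait_for_frames()

frames = pipeline.wait_for_frames()  # first frameset actually used
```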
I see no reason why the same code could not be used in the recording part of your application too to skip the first few frames when the pipeline is started. |
Found and resolved the issue, thanks for your help @MartyG-RealSense. Issue #4249's original code helped with the initial skipping during recording. I doubt we will have an issue with unaligned frame numbers since our unpacking is different. I'm closing this issue since it's resolved. |
Great to hear that you developed a solution, @parthapn98 - thanks very much for the update! |
@MartyG-RealSense Quick question. Background:
Questions:
Asking because I was unable to find direct answers to all of them, and even our own experiment results are contradictory. |
RealSense 400 Series cameras do not interfere with each other when used in a multicam arrangement without hardware sync. The infrared imaging sensors do not have an IR Cut filter though, and so can see all visible light frequencies. This is why the T265 Tracking Camera model has an IR Cut filter: so that it can be paired with a 400 Series camera in close proximity, as the T265's IR Cut filter blocks it from seeing the dot pattern projection from the 400 Series model.
You could try reducing the Laser Power setting of the cameras, which will make the IR dot pattern projection less visible. This can have a negative impact on the amount of detail in the depth image. If the scene is well lit, though, the camera can alternatively use ambient light in the scene to analyze surfaces for depth detail instead of the IR dot pattern projection, so you could experiment with turning the IR Emitter off completely (it is a separate component from the left and right IR imaging sensors).
The Viewer will not be hardware syncing the cameras unless the Inter Cam Sync Mode of the cameras has been set and the cameras are connected by sync cabling, or a wireless sync trigger pulse transmitted by a signal generator is being used. If you are not currently using hardware sync then I would recommend having the Inter Cam Sync Mode set to '0' on all cameras.
On the D435 model the projector pulses to coincide with camera exposure, whilst on the D415 model the projector is always on. The D435's projector can be set to be always on like it is with the D415, as you have already done. This subject is discussed in the section of Intel's projector white-paper linked to below. https://dev.intelrealsense.com/docs/projectors#section-9-using-multiple-depth-cameras
In regard to the number of cameras that you can use: if all of the cameras are attached to the same computer then the number of cameras that can be used may depend on whether all cameras are operating simultaneously, since computer resources will be consumed. If all cameras are capturing at the same time then Intel has previously recommended using a computer with an i7 processor for a four-camera setup. If the cameras are not all capturing simultaneously and are turning on and off in a sequence (e.g. 1-4 on and then off, then 5-8 on and then off, etc.) then you could use a greater number of cameras on the same computer.
If you use multiple computers and spread the cameras across them then that should greatly increase the number of cameras that can be used. As an example, the CONIX Research Center at Carnegie Mellon developed a system for generating point clouds from up to 20 RealSense cameras and then stitching them together, with one camera per computer for capturing and a central computer to process all of the captured data. https://github.com/conix-center/pointcloud_stitching
Intel also demonstrated a simpler form of such a system in January 2018, with four D435 cameras capturing a scene and sending the data to a fifth computer for final processing. https://www.intelrealsense.com/intel-realsense-volumetric-capture/
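If it helps, below is a small hedged sketch of reducing Laser Power or disabling the emitter per camera with pyrealsense2; the 50% power value is only an example:

```python
import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():
    depth_sensor = dev.first_depth_sensor()

    # Reduce the projector brightness so the dot pattern is less visible
    # to the other cameras (at some cost to depth detail).
    power_range = depth_sensor.get_option_range(rs.option.laser_power)
    depth_sensor.set_option(rs.option.laser_power, power_range.max * 0.5)

    # Or, in a well-lit scene, turn the emitter off completely.
    # depth_sensor.set_option(rs.option.emitter_enabled, 0)
```
|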
Are there any solved issues on bag file errors recorded with a Python script? During playback, the recording duration of the bag is fine, but then it jumps to a timestamp that is very high ("35xxxx.."). Any suggestions? |
You should not be able to start a recording of a particular stream on a particular camera if recording of that stream has already started, as that camera stream will already be 'claimed' by the active recording. You might receive an error that the camera is busy or that it 'failed to set power state' if you try to start a new pipeline after one has already been started, though.
Checking if a stream is already busy is not straightforward. The error generated by a busy state is typically handled by catching the error in an exception check so that the program does not crash. An example of exception catching in Python is highlighted in the link below.
You could also have greater control over which camera is recording to which file by creating a separate pipeline and unique cfg configuration for each camera, manually providing the exact serial number for each camera instead of letting the program automatically detect the serial numbers and build a list of devices. A Python example of this is in #1735 (comment)
Have you also tried playing the affected bag files back in the RealSense Viewer by drag-and-dropping the bag file into the Viewer's central panel and observing the timestamps to see whether the jump occurs in the Viewer playback? If it does, this could indicate that there is a problem in the recording rather than in your Python code.
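A minimal sketch of that exception-catching pattern around pipeline.start, assuming a per-camera config has already been built (the serial number is a placeholder):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_device("012345678901")  # placeholder serial number

try:
    profile = pipeline.start(config)
except RuntimeError as e:
    # Raised when the camera or stream is already claimed by another pipeline,
    # e.g. a 'busy' or 'failed to set power state' error.
    print("Could not start pipeline:", e)
else:
    # ... record / process ...
    pipeline.stop()
```
|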
Yes. This is how we decide to either start or abort the recording, or just let the script crash on pipe.start.
We do a combination of these, by first creating a list of serial numbers and then creating a list of pipes, profiles and configs while using the serial number list:
for i in range(len(serials)):
    pipe[i] = rs.pipeline()
    conf[i] = rs.config()
    conf[i].enable_device(serials[i])
    if All:
        conf[i].enable_all_streams()
    else:
        if Depth:
            conf[i].enable_stream(rs.stream.depth, wd, hd, rs.format.z16, fps_depth)
        if IR1:
            conf[i].enable_stream(rs.stream.infrared, 1, wd, hd, rs.format.y8, fps_depth)
        if IR2:
            conf[i].enable_stream(rs.stream.infrared, 2, wd, hd, rs.format.y8, fps_depth)
        if RGB:
            conf[i].enable_stream(rs.stream.color, w, h, rs.format.rgb8, fps)
        if Motion:
            # conf[i].enable_stream(rs.motion_stream, -1)
            conf[i].enable_stream(rs.stream.accel)
            conf[i].enable_stream(rs.stream.gyro)
    # conf[i].enable_all_streams()
    conf[i].enable_record_to_file(out[i])
    # prof[i] = pipe[i].start(conf[i])
The problem is less frequent if we use option ALL, however it's never completely gone. The remaining relevant part of the code is:
for i in range(len(serials)):
    prof[i] = pipe[i].start(conf[i])
    sensor[i] = prof[i].get_device().first_depth_sensor()
    sensor[i].set_option(rs.option.emitter_always_on, 1)
    record[i] = prof[i].get_device().as_recorder()
    rs.recorder.pause(record[i])
# delay warmup
for i in range(len(serials)):
    rs.recorder.resume(record[i])
# delay recording duration
# --------------------- we tried this as an optional fix; it did not resolve the issue
# for i in range(len(serials)):
#     rs.recorder.pause(record[i])  # pause before closing stream
# # delay pause
# ---------------------
for i in range(len(serials)):
    pipe[i].stop()
    conf[i].disable_all_streams()
del pipe
del prof
del conf
del record
del sensor
The problem is faced even if we write a separate short script to record from just one camera instead of auto-detecting like this.
We confirmed the timestamp issue with the RealSense Viewer itself, not with rqt or a Python bag reader. |
I wonder whether the incorrect timestamp on the final frame is a consequence of how a bag file is 'indexed' by the SDK when a recording is ended by closing the pipeline. Looking at the code, the line conf[i].disable_all_streams() after the pipe stop instruction is likely to be unnecessary, as all of the streams that are currently active on a pipeline should be automatically stopped anyway when the pipeline is closed. I think it's a good idea that you implemented a pause instruction before the pipe stop. |
It looks as though, out of the three bags playing simultaneously in the Viewer, one is playing normally and the other two have end times that are completely wrong, and that is somehow affecting the visual updating of the images. The timelines of the affected bags progress at a normal time rate that is the same as the working bag (even though the images are not updating), and then both start and end times become wrong after the jump to the final frame.
Do the bags play normally in the Viewer if played individually instead of three at the same time? If so then this might suggest that the computer that the bags are being played on cannot cope with the processing demands of a simultaneous 3-bag playback.
When you say that the bags were recorded seconds apart, do you mean sequentially - that an active recording was stopped and then a new recording to a different file was rapidly begun? The rosbag record bag recording function of ROS provides an automated means to do this called --split, where you can define a maximum recording duration or file size, after which the bag file recording is closed and a new file starts recording seamlessly from that point onwards. So you could set the recording running and let rosbag record's split command take care of the start and stop process during recording. |
The images are updating - sorry, there's not much distinguishing movement; I didn't realise it till afterwards. But if you maximise and look closely, the corresponding movement and updating of frames can be seen.
Yes, they are identical when played back individually. I played them simultaneously to record them in one mp4. We have done record+playback on multiple tiers of devices to rule out hardware causes. And as mentioned, even with just a single camera the issues are present.
I run my bag recorder script N times; some outputs have glitched timestamps, some are fine. I will take a look at ROS recording. Currently we only use ROS for further analysis of our recorded bags, and we would prefer to keep the dependencies of this code limited to pyrealsense2 only. |
So the script's code seems to be okay if it can record correctly sometimes. It would then seem to be a question of stability and of reproducing recording results consistently. It may be worth shutting down and rebooting the computer in between each individual recording test to eliminate the possibility that the script is malfunctioning when run again after the previous test. This can occur with RealSense programs if the program was not properly closed down after the previous session. |
@MartyG-RealSense I ran some tests; the script's ability to create a bag file without the issue in question is not dependent on when it is run, i.e. not all first runs are error-free. On a different topic: is there any existing code for recording the cameras in ROS? |
Some ROS users have created Python projects to activate and control the rosbag record bag recording tool of ROS. An example of a ROS node script in the Python language, published in 2020, is in the link below. https://gist.github.com/vbschettino/13c1d0102fa184811b31397cb9a4e85d
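If you only need to drive rosbag record from Python without writing a full ROS node, a simpler hedged sketch could wrap the command-line tool directly; the topic names and output file name below are assumptions for your setup:

```python
import signal
import subprocess
import time

# Start rosbag record as a child process; a --split argument could also be added here.
proc = subprocess.Popen([
    "rosbag", "record",
    "-O", "camera_recording.bag",     # assumed output file name
    "/camera/color/image_raw",        # assumed topic names
    "/camera/depth/image_rect_raw",
])

time.sleep(30)  # record for as long as needed

# Send SIGINT (equivalent to Ctrl-C) so rosbag closes and indexes the bag cleanly.
proc.send_signal(signal.SIGINT)
proc.wait()
```
|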
Thanks @MartyG-RealSense, I'll check it out. Before I leave for the weekend: is there a way to use rs-server on standard Linux-based computers, not a Raspberry Pi? I've got librealsense built, rs-enumerate-devices shows the device status and rs-server shows the server started. If I try to add a network device without running rs-server on the host, it gives this: |
rs-server should be able to be compiled on any Linux computer, not just a Raspberry Pi. The open-source ethernet white-paper uses a Pi 4 board by default but states that the project can be adapted with minor modifications for other computing brands, citing Up Board as an example.
The design of the open-source networking system is that you can have multiple remote boards that each have a camera attached to them, but there is a single host central computer to receive the feeds from the remote cameras.
The design of the librealsense SDK is that once an application has 'claimed' access to a particular stream on a particular computer, that stream cannot be accessed by another application (such as an instance of the Viewer on a second host computer). Multiple applications can access different streams on the same camera, but two applications cannot access the same camera stream at the same time - the application that originally claimed the stream must disable that stream to release the claim on it and make it available to other applications. The access rules are described in the SDK's Multi-Streaming Model documentation in the link below. |
Yes, I expected the same from experience with the RS camera behaviours. If you open up a particular stream, it cannot be accessed by anything else. However, the original stream is not open anywhere, neither on the receiver PC nor on the client where rs-server is started.
If I
It is similar to the issue faced by this user, but for me the errors are seen in realsense-viewer, not in Python with pyrealsense2 trying to access the streams. |
Can you confirm whether PC1 and PC2 are hosts or the remote board that the camera is attached to, please? The white paper states that rs-server should be installed on the Raspberry Pi, indicating that it is on the remote board and not the host computer that runs the RealSense Viewer. |
Hi Marty, PC1 is a Linux computer to which a D435 is connected. PC1 runs rs-server. |
Thanks very much for the clarification. The white paper describes that rs-server on the remote computer connects to the first available camera [attached to the remote computer] and listens for incoming connection requests [presumably Network Camera connection requests made by the Viewer on the host computer].
So when rs-server is not running and a Network Camera request is made by the Viewer on the host, it is logical that the [getControls] error: Connection to server failed: Connection refused - -107 error would occur, because rs-server is not listening for incoming connection requests whilst it is inactive, and so a bridge between remote and host cannot be established. There is no interface active on the remote computer for the Viewer to talk to.
You also tried first running rs-server on the remote computer and then connecting to it from the Viewer on the host, but received the error out of range value for argument "def". You mentioned earlier in the case that you did not configure a static IP because you are not using a Pi. Whilst the requirement for a static IP does not seem to be exclusive to the Pi, the paper does suggest that the system has the flexibility to use other types of connection, such as wireless wi-fi / WLAN.
Can you confirm, please, whether the problem is that you can connect to the remote camera successfully with the original host computer (PC2) but cannot connect with a second host computer that is also equipped with the Viewer? |
Hi Marty, The rs-server was initially run with the following setup:
Next setup: Ethernet router + static IP
|
Hi Marty, Is there a way to do alignment of the depth frame to colour after receiving it via ROS? I know we could do the alignment directly with rs_aligned_depth.launch, but since the aligned depth image is resized to the RGB stream, it will take up significant bandwidth in our setup. |
Perhaps you could create a ROS adaptation of Intel's open-source ethernet system described in the link below, which streams RealSense data from a camera on a 'remote' board such as a Raspberry Pi to a program such as the RealSense Viewer running on a central 'host' computer. This adaptation possibility is suggested by the paper itself: when it mentions ROS, it has the number [10] beside it, which refers to a RealSense ROS guide listed in the references at the bottom of the paper for using 3 cameras across 2 computers. A RealSense ROS user in IntelRealSense/realsense-ros#1808 has already performed a ROS adaptation of the paper's networking system. |
Hi Marty, we already have a working remote ROS system with the RealSense where we can access the stream from a client on the network. My question is whether it is possible to somehow use the RealSense SDK to perform alignment of depth to RGB on the client side, because the frames are no longer in the 'frameset' format at the receiver and are instead published as images. We do not publish the aligned depth stream from the hosts, as it is resized to the RGB resolution, which is typically higher and takes up more bandwidth. The ROS questions are separate in nature from the rs-server one. We are exploring this since rs-server is not working yet. |
A RealSense Python user in #5296 took the approach of using pickling for remote alignment by transmitting the unaligned data. |
Hi Marty, Is there a way to set emitter always on / other emitter controls from the ROS launch files? What's the procedure for doing inter-cam sync between multiple cameras on ROS? (Hardware sync cables involved) |
IntelRealSense/realsense-ros#1472 (comment) suggests setting the emitter_always_on ROS parameter. I could not find an example of a ROS1 rosparam launch file code for emitter_always_on. I believe that the code below may work though:
Alternatively, you could add the instruction emitter_always_on:=true to the roslaunch instruction. For example: roslaunch realsense2_camera rs_camera.launch emitter_always_on:=true
Regarding multiple camera sync: on the 400 Series cameras, Inter Cam Sync Mode should be set to '1' for the Master camera and '2' for the Slave cameras. enable_sync is not related to multiple camera hardware sync. It instead gathers the closest frames of the different sensors - infrared, color and depth - to be sent with the same timetag.
You can define Inter Cam Sync Mode in ROS with the parameter inter_cam_sync_mode, and information about setting it in ROS1 can be found in IntelRealSense/realsense-ros#984 (comment)
You do not necessarily need to use hardware sync in ROS for a multiple camera setup though. Intel has published ROS guidance for 2 cameras on one PC and 3 cameras across 2 PCs. https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/
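For completeness, the same Inter Cam Sync Mode values can also be set directly through pyrealsense2 if you ever need to configure the cameras outside of ROS. A hedged sketch, with placeholder serial numbers:

```python
import pyrealsense2 as rs

MASTER_SERIAL = "012345678901"                   # placeholder serial numbers
SLAVE_SERIALS = ["012345678902", "012345678903"]

ctx = rs.context()
for dev in ctx.query_devices():
    serial = dev.get_info(rs.camera_info.serial_number)
    depth_sensor = dev.first_depth_sensor()
    # 0 = Default (no hardware sync), 1 = Master, 2 = Slave
    if serial == MASTER_SERIAL:
        depth_sensor.set_option(rs.option.inter_cam_sync_mode, 1)
    elif serial in SLAVE_SERIALS:
        depth_sensor.set_option(rs.option.inter_cam_sync_mode, 2)
```
|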
Hi Marty, Thanks for your help with the ROS settings above. We have come across an issue that crosses between ROS and SDK usage in Python. I'll try to explain it concisely first:
ctx = rs.context()
d = ctx.devices
I know other RealSense users have explored software-based USB reset in the issues; have you come across anything that succeeded reliably? Edit: Do you have any recommendations for solving this without a USB reset? |
You could try a multiple-camera version of the hardware_reset() instruction to reset only the cameras and not the entire USB port. An example of such a script in Python is in #5428 (comment) Note that in that script, a list of all attached cameras is automatically built with ctx.query_devices()
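A minimal sketch of that multi-camera reset idea (not the exact script from #5428):

```python
import time

import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():
    print("Resetting", dev.get_info(rs.camera_info.serial_number))
    dev.hardware_reset()

# Give the cameras a few seconds to re-enumerate before using them again.
time.sleep(5)
```
|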
Is the camera detectable if you first shut down the ROS launch with rosnode kill --all instead of using Ctrl-C? |
As an alternative to closing down the ROS wrapper with Ctrl-C, would it be possible to run your Python code from within the active RealSense ROS wrapper as a Python node script, as discussed in #6367 regarding the ROS wrapper's rs_test.py example script. IntelRealSense/realsense-ros#1086 (comment) also has guidance about using another of the wrapper's example Python scripts called show_center_depth.py |
Hi Marty, Is there an option to enable the colorizer filter in the ROS launch files? |
Hi @parthapn98 You can enable the colorizer as a filter in ROS, similar to defining a pointcloud filter. For example, if you only needed a colorizer filter then you could define it in the roslaunch instruction like this: roslaunch realsense2_camera rs_camera.launch filters:=colorizer If you needed both a pointcloud and colorizer then you could define it like this, putting a comma with no spaces between the filter types: roslaunch realsense2_camera rs_camera.launch filters:=colorizer,pointcloud If you prefer to set it in the launch file, I have not got a precise example for the colorizer. However, I do have one for other types of filters:
So I would speculate (not certain) that as the colorizer is a 'filter' type, if it is possible to set it in the launch file then a launch file arg for it might look like this:
|
Hi Marty, We are pretty happy with the coloriser and its predefined compression for the compressed topic
thr_filter.set_option(RS2_OPTION_MIN_DEPTH, min_depth);
thr_filter.set_option(RS2_OPTION_MAX_DEPTH, max_depth);
color_filter.set_option(RS2_OPTION_HISTOGRAM_EQUALIZATION_ENABLED, 0);
color_filter.set_option(RS2_OPTION_COLOR_SCHEME, 9.0f); // Hue colorization
color_filter.set_option(RS2_OPTION_MAX_DEPTH, max_depth);
color_filter.set_option(RS2_OPTION_MIN_DEPTH, min_depth);
|
Using this guidance, you could potentially query what parameters are available for the colorizer filter in the ROS wrapper by using a rosrun dynamic_reconfigure dynparam get command. IntelRealSense/realsense-ros#2140 (comment) has a reference to setting histogram equalization using /camera/colorizer/
|
The rqt_reconfigure tool had all the options for the colorizer, including the jpeg/png compression quality settings. We had some issues when we shifted the test to our embedded device (Jetson Nano): its CPUs are not really powerful enough to even apply the colorizer filter at more than 15 fps; anything with filters uses the CPU and the Nano loses fps significantly. Is there any implementation from a user who leveraged the GPU on a Nano, or used a different embedded device whose CPU could provide sufficient compute power? |
I'm pleased to hear that you were able to find the colorizer settings that you needed. If librealsense has CUDA support enabled to make use of an Nvidia GPU (like the one that Jetson boards have) then it can provide acceleration for color conversion, depth-color alignment and pointclouds. If the type of processing that you are performing does not involve those specific types of operation then you may not benefit from CUDA support being enabled. Also, if the ROS wrapper was installed from packages with the wrapper's Method 1 (librealsense and the wrapper installed together) then CUDA support would not be included in the librealsense build.
Intel strongly recommends that RealSense users with a Nano enable the barrel jack connector for extra power using the information in the link below, if your Nano model supports that. https://www.jetsonhacks.com/2019/04/10/jetson-nano-use-more-power/
In regard to alternative embedded devices, you could consider a more powerful Jetson board model so that you can still take advantage of an Nvidia GPU, though this would have increased cost. You could also explore Intel's NUC computing range. They are small enough to fit on a mobile robot, suited to industrial applications, and are available in a wide range of configurations (from low end to high end) and price points, and in different form factors (board, partially completed kit and complete ready-to-run mini-PC). NUCs also have a long history of being used with RealSense cameras, dating back to the original generation of RealSense in 2014. https://www.intel.com/content/www/us/en/products/details/nuc.html
There is also the hardware acceleration solution for vision computing called Intel Neural Compute Stick 2 (NCS2), which can be simply plugged into a USB port alongside the camera. It is available from the official RealSense online store. I recommend googling for realsense ros ncs2 to see whether an Intel NCS2 stick may be able to be leveraged in your ROS project. https://store.intelrealsense.com/buy-intel-neural-compute-stick-2.html |
Hi Marty, We reflashed our Nano OS and rebuilt everything with CUDA support. We had some weird results that I'll try to detail. Prior setup:
Current Setup:
My expectations, from going over the issues on similar topics in realsense-ros, were that compression would still be done on the CPU and that filters like the colorizer, point cloud, etc. would run on the GPU. Any suggestions on where I can look to get to the bottom of this? |
If CUDA support is enabled then pointclouds and depth-color alignment should be processed on the GPU. Compression would be a CPU activity. Technical detail about the color conversion support for CUDA is provided at #1866 As far as I am aware though, a colorizer filter (assigning color shading to depth values) would not be covered by the range of functions that CUDA provides acceleration for, and so would be handled on the CPU. If you are building with CMake then adding the CMake build term BUILD_WITH_OPENMP=true to the librealsense build will enable YUY to RGB conversion and depth-color alignment to take advantage of multiple cores. This can reduce latency at the expense of greater CPU utilization. |
Is there some documentation for the working of the depth-color alignment function rs_align? I am looking into this since our current format of recording utilises realsense_ros, and we want to understand how far the performance hit would go if we used aligned_depth:=true. (Also, rs_aligned would resize the depth image to 1280x720, and bandwidth is a small concern.) [I have taken a look at the rs_align_example and rs_align_advanced example notebooks in librealsense, and rs_align, however they only show how to implement the functions and their outputs, similar to a Python demo on depth-color alignment that removes the background beyond a certain distance.] Is there any documentation on the internal workings of rs_align, or is it possible to take a look at the code itself?
|
Hi @parthapn98 The full C++ source code for rs-align can be found at the link below, though the descriptive tutorial text is the same as the page that you linked to. https://github.com/IntelRealSense/librealsense/tree/master/examples/align
As described in the opening paragraph of the tutorial, alignment will resize the depth field of view (FOV) to match the size of the color field of view. As the color sensor FOV on D435 / D435i is smaller than the depth FOV, this can mean that the outer regions of the depth image will be excluded from the aligned image.
Since your programming language is listed as Python at the top of this case, the SDK's Python example program align_depth2color.py may be a more useful reference. My understanding is that the align_depth function in the RealSense ROS wrapper makes use of the RealSense SDK's align processing block, like examples such as rs-align and align_depth2color.py do. The align processing block will make automatic adjustments for differences in the depth and color resolution.
Depth to color alignment is a processing-intensive operation. If you are able to use a computer with Nvidia GPU hardware, such as an Nvidia Jetson board or a PC with Nvidia graphics, then you could make use of the RealSense SDK's ability to be built with CUDA support for acceleration of color conversion, alignment and pointclouds. The ROS wrapper is able to take advantage of CUDA acceleration if librealsense and the ROS wrapper are built separately instead of together from packages with the wrapper's Method 1 installation method (as the Method 1 packages do not provide CUDA support). #2670 has CUDA performance figures relating to align acceleration.
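As a reference for what the align processing block looks like in Python (a minimal sketch along the lines of align_depth2color.py, with assumed stream resolutions):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
pipeline.start(config)

# Align depth frames to the color stream's viewpoint and resolution.
align = rs.align(rs.stream.color)

frames = pipeline.wait_for_frames()
aligned = align.process(frames)
aligned_depth = aligned.get_depth_frame()
color = aligned.get_color_frame()

pipeline.stop()
```
|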
My understanding is that the processing blocks are handled by the SDK file rs_processing.hpp, and the SDK file align.cpp includes rs_processing.hpp: https://github.com/IntelRealSense/librealsense/blob/master/src/proc/align.cpp
Aside from the rs-align tutorial information, the other main sources of official C++ documentation for the align process are here: https://intelrealsense.github.io/librealsense/doxygen/classrs2_1_1align.html https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20#section-frame-alignment
The pyrealsense2 version of the align process documentation is here: https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.align.html
#5743 may be a helpful reference in regard to alignment and reprojection. |
Thank you Marty, the links have mostly answered my questions. I'll go through the cpp file to understand the exact implementation in the future. Just glancing over the code, I have some initial questions about how the extrinsics from the left IR frame to the RGB frame are calculated. Thanks for the help again. |
The transform between two stream types can be retrieved with the SDK instruction get_extrinsics_to https://dev.intelrealsense.com/docs/api-how-to#section-get-and-apply-depth-to-color-extrinsics
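In pyrealsense2, a minimal sketch of retrieving the left IR to color extrinsics from an active pipeline profile (stream resolutions are assumptions):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)  # left imager
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
profile = pipeline.start(config)

ir_profile = profile.get_stream(rs.stream.infrared, 1)
color_profile = profile.get_stream(rs.stream.color)

# Rotation (3x3, row-major) and translation (metres) from left IR to color.
extrinsics = ir_profile.get_extrinsics_to(color_profile)
print(extrinsics.rotation)
print(extrinsics.translation)

pipeline.stop()
```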
|
Issue Description
How can I query the serial number from a bag file recording? (I can get the serial number from a device if it is connected, i.e. via rs.context.devices, but our setup requires us to identify a camera after the recording was done in order to know the recording viewpoint.)
Our additional cameras are still en route, so I haven't tested this yet. But is it possible to record multiple cameras, either in a single bag file or in separate bag files, from the RealSense Viewer or in a Python script using enable_record_to_file()?
Thanks!