
Recording multiple clips consecutively #2424

Closed
upxela opened this issue Sep 20, 2018 · 14 comments

@upxela

upxela commented Sep 20, 2018

Required Info
Camera Model: D400
Firmware Version: 5.10.3
Operating System & Version: Win 10
Platform: PC
SDK Version: 2.14.0
Language: Python 2.7

Issue Description

I am looking to record many shorter, consecutive videos (~2 minutes in length) rather than one large video (which may amount to 24+ hours), so it is desirable that not much footage is lost between videos. Unfortunately, in my code the gap between closing a pipeline and starting it again is approximately 0.6 seconds, which is not optimal for my application. Does anyone have any suggestions for workarounds for this?
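
Roughly, the pattern in my code looks like the sketch below (using the pyrealsense2 wrapper; the clip file names are placeholders). The ~0.6 seconds is the time between stop() returning for one clip and start() returning for the next:

import time
import pyrealsense2 as rs

def start_recording(path):
    # Each clip is written to its own bag file via the SDK's record-to-file option.
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_record_to_file(path)
    pipeline.start(config)
    return pipeline

pipeline = start_recording("clip_0.bag")
# ... service the pipeline with wait_for_frames() for ~2 minutes ...
t0 = time.time()
pipeline.stop()                                  # releases clip_0.bag
pipeline = start_recording("clip_1.bag")
print("gap between clips: %.2f s" % (time.time() - t0))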

@MartyG-RealSense
Collaborator

If you are recording rosbags, rosbag record has a --split option that splits the bag when a maximum duration or file size is reached.

Examples:

$ rosbag record --split --size=1024 /chatter
$ rosbag record --split --duration=30 /chatter
$ rosbag record --split --duration=5m /chatter
$ rosbag record --split --duration=2h /chatter

http://wiki.ros.org/rosbag/Commandline#record

@upxela
Author

upxela commented Sep 20, 2018

@MartyG-RealSense Thanks for the reply! Do you know how I would use this in tandem with the realsense api, specifically the pipeline start/close commands?

@MartyG-RealSense
Copy link
Collaborator

MartyG-RealSense commented Sep 20, 2018

My experience with ROS is limited, so one of the Intel team can give better advice on it. You should, though, be able to use the SDK's ROS wrapper to use the SDK and ROS together.

https://github.com/intel-ros/realsense/releases

@upxela
Author

upxela commented Sep 20, 2018

It looks like this method requires Linux. Are you aware of any Windows-friendly methods to obtain the same result?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Sep 20, 2018

There is a Windows version of the newest ROS 2, though it likely would not be able to talk to the SDK API through the ROS wrapper.

https://github.com/ros2/ros2/wiki/Windows-Install-Binary

ROS2 can be made compatible with the 400 Series cameras.

https://communities.intel.com/message/543051#543051

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Sep 20, 2018

An easier solution may be to try the developer UnaNancyOwen's recording program for SDK 2.0 to see if that gives you smaller file sizes.

https://github.com/UnaNancyOwen/RealSense2Sample/tree/master/sample/Record

Some developers also capture the stream as a sequence of PNG images instead of a bag to reduce file size.
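
As a rough illustration of that PNG approach, a sketch assuming the pyrealsense2 wrapper plus NumPy and OpenCV (the stream settings and file names are just placeholders):

import numpy as np
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
try:
    for i in range(300):                         # roughly 10 seconds at 30 fps
        frames = pipeline.wait_for_frames()
        color = frames.get_color_frame()
        if not color:
            continue
        image = np.asanyarray(color.get_data())  # BGR8 maps directly onto an OpenCV image
        cv2.imwrite("frame_%06d.png" % i, image)
finally:
    pipeline.stop()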

@upxela
Author

upxela commented Sep 21, 2018

Thanks for the suggestions @MartyG-RealSense. As of right now, file size is not the main problem; the point of recording multiple shorter clips is that, in the case of file corruption, losing only 2 minutes has far less impact than losing 24 hours.

Additionally, I have seen problems with frame drops after 30 minutes when streaming full depth resolution with 4 cameras concurrently, which reinforces the idea of using shorter clips to limit the impact should that happen during an important recording session.

@MartyG-RealSense
Collaborator

Intel's recent webinar on multiple cameras stated that if you position your four cameras so that their fields of view overlap, you get more comprehensive depth data because there is more redundancy in the data. If your four cameras are looking at approximately the same area, I wonder if this redundancy would help to fill gaps if one or more cameras drop some frames.

@upxela
Author

upxela commented Sep 21, 2018

Unfortunately, each camera in my application would be part of its own "system," i.e. camera 1 would not record anything that camera 2 is recording, etc. Therefore it is important for me to obtain a seamless video from each camera.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Sep 21, 2018

It's a tricky situation you're dealing with. The SDK has a function called Keep(), which lets you store frames in memory until the end of streaming and then save them, and do some post-processing on the frames too if you wish (though this adds to processing time). This way, you may lose fewer frames than if you record the frames as you go.

Keep() is best suited to short recording sequences, so it may be a good fit for how you are recording the data in multiple chunks.
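
A rough Python sketch of that Keep() idea, assuming the pyrealsense2 wrapper and OpenCV for writing the images (the frame count and file names are placeholders, and everything held this way stays in RAM until it is written out):

import numpy as np
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
held = []
try:
    for _ in range(30 * 10):        # keep roughly 10 seconds of framesets in memory
        frames = pipeline.wait_for_frames()
        frames.keep()               # prevent the SDK from recycling these buffers
        held.append(frames)
finally:
    pipeline.stop()

# Write everything out after streaming ends, when timing no longer matters.
for i, frames in enumerate(held):
    color = frames.get_color_frame()
    if color:
        cv2.imwrite("kept_%06d.png" % i, np.asanyarray(color.get_data()))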

Another option may be to use an industrial-grade USB hub and / or industrial-grade USB cable, though these are more expensive than regular USB equipment. Industrial-grade equipment is used in applications such as medical scanners, where frame drops cannot be tolerated.

Mains wall-socket powered hubs are also very useful for increasing USB stability. In Intel's multiple-camera white paper, they tested with a cheap AmazonBasics hub that was AC adapter powered.

@upxela
Author

upxela commented Oct 9, 2018

@MartyG-RealSense frame dropping is currently not as big an issue as just being able to record consecutive videos. I realized that closing a pipeline and restarting it can take up to approximately 1.5 seconds (~0.5 seconds to start, ~1 second to close), which is too long a pause for my application.

Is there a way that I can redirect the file destination of my camera recording without closing the pipeline? I know that you previously mentioned using ROS, but I can't find any documentation that explains how to use it in tandem with realsense.

@MartyG-RealSense
Collaborator

I carefully went over the code of the SDK's record and playback sample program.

https://github.com/IntelRealSense/librealsense/tree/master/examples/record-playback

Whilst it may be possible to change the path in code by using a number as the filename (e.g. 1.bag) and incrementing it by 1 for each new recording, I could not see a way to start another file without first stopping the pipeline. It is the stopping of the pipeline that releases the currently held resources, including the bag file that was being recorded.
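
A rough sketch of that incrementing-filename approach in Python, assuming the pyrealsense2 wrapper (the clip length and count are placeholders); the pause between clips happens at the stop()/start() boundary, since stopping is what releases the current bag:

import time
import pyrealsense2 as rs

for clip_index in range(1, 6):                   # e.g. five consecutive clips
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_record_to_file("%d.bag" % clip_index)   # 1.bag, 2.bag, ...
    pipeline.start(config)
    t_end = time.time() + 120                    # roughly 2 minutes per clip
    while time.time() < t_end:
        pipeline.wait_for_frames()
    pipeline.stop()                              # the inter-clip pause happens here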

One of the Intel guys may be able to offer further suggestions.

@upxela
Author

upxela commented Oct 10, 2018

Thanks for the insight @MartyG-RealSense! Hopefully someone from Intel can find a workaround.

@RealSense-Customer-Engineering
Collaborator

[RealSense Customer Engineering Team Comment]
Ticket being closed due to 30+ days of inactivity
