[msvideo1 @ 0x56401af87180] Specified pixel format yuv420p is invalid or not supported #3
My guess is this is related to Line 466 in d1f9134
The codec that will work here is system-specific. My suggestion would be to take a look at the OpenCV example on saving video and get it to work on its own. Then copy the settings into respmon. Here's the example I'd use if I were you: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#saving-a-video

Please do post back if this doesn't resolve it or if you find settings you think will be more system-agnostic!
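For reference, a minimal standalone saving test along the lines of that tutorial might look like the sketch below (the FourCC string and output filename are placeholders - swap the codec until one works on your system):

```python
import cv2

# Open the default camera (or substitute the path to a test video file).
cap = cv2.VideoCapture(0)

# The FourCC code is the system-specific part: 'XVID' tends to work on Linux,
# 'MJPG' is widely supported, and 'mp4v' is a common choice on macOS.
fourcc = cv2.VideoWriter_fourcc(*'XVID')
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (width, height))

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    out.write(frame)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```

If that script produces a playable file on its own, the same FourCC setting should carry over into respmon.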
Hey Paula, I believe you'll need to adjust some hyperparameters for birds - their respiration is probably a lot faster than the parameters I tuned to (I did the tuning visually, not using proper tuning methods). You'll find those parameters starting here: https://github.com/kevroy314/respmon/blob/master/base.py#L80 If you get it working, please do post what parameters worked well for you!

As far as the core issue you're having: when it drops away like that, it's because it lost tracking of the keypoints and is trying to recalibrate. This usually means there was too much motion or the scene didn't have enough contrast. You can try changing the physical setup (camera distance, lighting, etc.). You might also try it out on yourself first to make sure everything is working as expected before trying it on your friend.

The calibration step should save a video of what was used for calibration. Once calibration ends, you should get the window you see in your pictures, and it should stay on that until it loses signal. That window is what shows the real-time respiration signal. The calibration videos are just for reference, to see what your whole video looked like and if you had issues (moving shadows, changing lighting, etc.). Let me know if any of that doesn't make sense!

The key work that needs to be done on this package to make it more usable is tuning the hyperparameters. I have a plan to collect some videos to do that, but unfortunately, those plans have been put on hold for now. If you manage to collect a data set to tune to, you can use something like Bayesian Optimization to find the right combination of parameters. Hope that helps!
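For what it's worth, if you do end up with a labeled data set, something like scikit-optimize's gp_minimize could drive that search. This is only a hypothetical sketch: evaluate_on_clip, the parameter names, and the ranges are all made up and would need to be replaced with respmon's actual hyperparameters and a real scoring function.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

# evaluate_on_clip() is a placeholder: it should run respmon on a labeled clip
# with the candidate hyperparameters and return the error against the known
# respiration rate. The parameter names and ranges below are made up, too.
def objective(params):
    smoothing, min_keypoints = params
    return evaluate_on_clip(smoothing=smoothing, min_keypoints=min_keypoints)

search_space = [
    Real(0.1, 5.0, name='smoothing'),
    Integer(5, 50, name='min_keypoints'),
]

result = gp_minimize(objective, search_space, n_calls=30, random_state=0)
print('best params:', result.x, 'best score:', result.fun)
```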
Okay, so just to clarify: I get a .npy file in the end. The only parameter I changed was the size of the window, using self.skip_calibration(x, y, w, h), and it seemed to stay on track (I also cut the video to get only sleeping footage - a 7-minute video), but I do not get a playable video file. Could you please confirm how to get one? Or, if it is easier, a simple graph of the change in BPM and frequency over the full video time frame would be enough.

Best,
The .npy file contains the result of the signal processing, so you should be able to open that and graph it. It contains a list of tuples (the time and the motion value at that time). The BPM peak stuff isn't actually saved in the current version (it's just displayed in that UI). I haven't tested it, but I believe if you save Line 488 in d1f9134
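For the graphing part, a minimal sketch for loading and plotting the saved signal (the filename is a placeholder) could be:

```python
import numpy as np
import matplotlib.pyplot as plt

# Filename is a placeholder; point it at the .npy respmon wrote out.
# The array should be N x 2: column 0 is time, column 1 is the motion value.
data = np.load('respiration_output.npy')  # add allow_pickle=True if it's an object array
t, motion = data[:, 0], data[:, 1]

plt.plot(t, motion)
plt.xlabel('Time')
plt.ylabel('Motion value')
plt.title('Raw respiration signal')
plt.show()
```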
As far as the video saving goes, that's the codec issue at play. It's a non-blocking error, so when it tries to save the video, I think it just does nothing. Switching to something supported by your OS should fix that on Line 467 in d1f9134
If you're on Mac, it seems you could use a reference like this: https://gist.github.com/takuma7/44f9ecb028ff00e2132e

Apologies that none of this is easier to use - I plan to do an overhaul of the code to make it more usable once I get some validation data to determine how well it actually works. Hope that helps!
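If it helps narrow down the codec, a quick standalone probe like the following can show which FourCC codes actually open a writer on your machine (the candidate list is just a guess):

```python
import cv2
import numpy as np

# Try a few common FourCC codes and see which writers actually open and
# produce a non-empty .avi on this machine. The candidate list is just a guess.
candidates = ['MJPG', 'XVID', 'mp4v', 'avc1']
frame = np.zeros((480, 640, 3), dtype=np.uint8)

for code in candidates:
    fourcc = cv2.VideoWriter_fourcc(*code)
    writer = cv2.VideoWriter('test_{}.avi'.format(code), fourcc, 20.0, (640, 480))
    if writer.isOpened():
        for _ in range(20):
            writer.write(frame)
    print(code, 'writer opened:', writer.isOpened())
    writer.release()
```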
Ok, cool! I tried it, but the modification on line 488 does not work. It gives me the following error:

Just to clarify, what I see in the .npy are two columns, as you said: one for time and the other for the raw motion values. Do you know what the units of time are? Seconds? Do you know if there is any workaround to quickly find the BPM for each of the time points?

I did change the 'MSVC' to 'YV12' and it does save a video file (.avi), but it gives me an error when I try to play it. I simply think I need to find the correct codec and it will work, so I just need to find the right one!

Thank you! Thanks again for all these little things!
Yes, the first value is in seconds, but it's determined from the FPS of your camera. Some cameras will make this less reliable than others, so keep in mind that could be a source of noise. If you'd like the raw time stamp, change the

From what you sent, it looks like the first frequency component hasn't been measured yet at the beginning. There's a delay before the frequency components actually start being measured. You can work around this by checking if the self.freq list has been populated yet. Something like adding to Line 486 in d1f9134
And yes, I think you'll find a codec that works! Also, consider downloading VLC Media Player. It should support a wider variety of codecs for playback. https://www.videolan.org/ |
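As a quicker offline workaround for getting a BPM value at each time point: since the .npy already holds the (time, motion value) series, you could estimate the dominant breathing frequency in sliding windows yourself. A rough sketch, assuming a roughly uniform frame rate and a 0.1-2 Hz respiration band (you'd likely widen that band for birds, and the filename is a placeholder):

```python
import numpy as np

def bpm_over_time(t, motion, window_s=30.0, step_s=5.0):
    """Rough breaths-per-minute estimate from the dominant FFT peak
    in successive windows of the raw motion signal."""
    fs = 1.0 / np.median(np.diff(t))            # approximate sample rate (Hz)
    win, step = int(window_s * fs), int(step_s * fs)
    times, bpms = [], []
    for start in range(0, len(motion) - win, step):
        seg = motion[start:start + win]
        seg = seg - seg.mean()                   # remove the DC offset
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        power = np.abs(np.fft.rfft(seg))
        band = (freqs > 0.1) & (freqs < 2.0)     # plausible respiration band (6-120 BPM)
        times.append(t[start + win // 2])
        bpms.append(freqs[band][np.argmax(power[band])] * 60.0)
    return np.array(times), np.array(bpms)

# Usage with the saved signal (filename is a placeholder):
data = np.load('respiration_output.npy')
times, bpms = bpm_over_time(data[:, 0], data[:, 1])
```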
Hi there,
I am trying to read a greyscale video (mp4). When running python main.py, calibration images appear in my folder, but the following error appears at each calibration step:
[msvideo1 @ 0x56401af87180] Specified pixel format yuv420p is invalid or not supported
Could you please indicate which video codecs should be installed?
Or if this is due to something else, please let me know.
Thank you for your time!