Adding cli script to process a video as well as corresponding config file #16
Conversation
Thank you. I have added a comment. Can you please check?
I'm unclear about your comment; I cannot see it on my side. Can you guide me to where to find it?
process_video.py (Outdated)

```python
last_frame=(current_frame_index == total_frame_count - 1),
path_to_image=None)

check_to_clear_non_permanent_cuda_memory(processor=processor, device=device)
```
Why is this needed? Is OOM a problem without this?
I don't see an actual OOM, which is very nice: processing does not crash.
But I do see issues when max GPU memory is reached.
For context, I'm working on Windows 11 with a GeForce GTX 1060 6GB, testing on a ~1 min (1628 frames) Full HD video.
Without the memory cleanup, the full-video FPS is 3.6.
The first ~1200 frames process smoothly and evenly, but then I can observe that the GPU memory maximum is reached; processing slows down and thereafter runs in bursts, very slowly. It looks like some kind of memory swapping is happening, but that's just a guess; I don't know.
With the memory cleanup, the full-video FPS is 8.8,
and all processing runs smoothly and evenly for all frames. On a GPU memory graph I can observe a drop when the cleanup kicks in, but processing time does not really seem to be affected.
The cleanup step was inspired by the interactive UI functionality that does the same manually.
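For context, a cleanup helper along these lines could be sketched as follows. This is a hypothetical illustration, not the PR's actual code: the function name mirrors the call site in the diff, but the 80% threshold, the `clear_non_permanent_memory` call on the processor, and the return value are all assumptions.

```python
import gc


def check_to_clear_non_permanent_cuda_memory(processor=None, device=None,
                                             threshold=0.8):
    """Clear cached CUDA memory once usage crosses a fraction of capacity.

    Hypothetical sketch; the helper in the PR may differ. Returns True
    if a cleanup was performed, False otherwise.
    """
    try:
        import torch
    except ImportError:
        return False  # no PyTorch available, nothing to clean
    if not torch.cuda.is_available():
        return False  # CPU-only run; CUDA caching does not apply
    used = torch.cuda.memory_allocated(device)
    total = torch.cuda.get_device_properties(device).total_memory
    if used / total < threshold:
        return False  # still comfortably within budget
    # Drop the processor's non-permanent working memory first (assumed API),
    # then release PyTorch's cached blocks back to the driver.
    if processor is not None and hasattr(processor, "clear_non_permanent_memory"):
        processor.clear_non_permanent_memory()
    gc.collect()
    torch.cuda.empty_cache()
    return True
```

Calling this once every N frames keeps the check cheap; `torch.cuda.empty_cache()` only returns cached, unused blocks, so it is safe to call but costs some re-allocation time afterwards.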
I see. I don't think PyTorch supports GPU memory swapping so it might just be PyTorch greedily grabbing too much memory. Cleaning up the memory sometimes does improve the speed but might lead to non-intuitive failure (e.g., in some more challenging tracking scenarios). Is it possible to turn this feature into a command line option that is defaulted off?
Yes, I will make the memory cleaning optional with a command line arg,
but I will only have time to do this at the end of next week.
I have now added a command line option for the CUDA memory cleaning, disabled by default.
Let me know what you think.
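An opt-in, disabled-by-default flag like this is typically a one-liner with argparse. The flag spelling below is a guess for illustration, not necessarily the one used in the PR:

```python
import argparse

parser = argparse.ArgumentParser(description="Process a video with Cutie")
# Hypothetical flag name; check the PR for the exact spelling.
parser.add_argument(
    "--clear_cuda_memory",
    action="store_true",  # store_true flags default to False, i.e. disabled
    help="Periodically clear non-permanent CUDA memory during processing",
)

args = parser.parse_args([])                       # default run: disabled
assert args.clear_cuda_memory is False
args = parser.parse_args(["--clear_cuda_memory"])  # opt-in: enabled
assert args.clear_cuda_memory is True
```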
Oh right, I didn't submit the review. Sorry about that.
Thank you for your contribution! I will add a reference in the readme.
Hey @hkchengrex @KaareLJensen, I am trying to run the script on a 720p video (720, 1280, 3) using your script, but it's giving me a black mask as output. The mask I am using is this:
Hi @charchit7. Sorry for only responding now; I have been away on vacation. But to your question:
This is a PR to add a command line interface script that processes a video, given one or more masks as input for the video segmentation.
The script is based on the eval_vos script as well as the Cutie Colab notebook. It does not change existing repo files; it only uses already existing code.
It uses a new video_config.yaml configuration to keep it separate from the other configs.
process_video supports multiple input masks.
The script syntax is:

process_video.py [-h] [-v VIDEO] [-m MASK_DIR] [-o OUTPUT_DIR] [-d DEVICE] [--mem_every MEM_EVERY] [--max_internal_size MAX_INTERNAL_SIZE]

Only VIDEO, MASK_DIR, and OUTPUT_DIR are required inputs.
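As an illustration, a minimal invocation passing only the three required arguments could look like this. The file and directory names are placeholders; the command is assembled into a variable here purely so its shape is easy to inspect:

```shell
# Hypothetical invocation with placeholder paths (clip.mp4, masks/, results/).
CMD="python process_video.py -v clip.mp4 -m masks/ -o results/"
echo "$CMD"
```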