Increase frame buffer / improve the way buffering is handled #3798
Hi,
Hi @azhavoro, can you help clarify the following? Apologies if these have already been clarified elsewhere.
Does the
Just for internal understanding, is this 2GB limit at the client end, or at the server end?
Hi, thank you for getting back to me on this. I have done some testing now, and I am still experiencing some issues, although your advice helped ease my problem.
This helps a lot, thanks. It still pauses every 72 frames, but it is much faster between loads.
I have tried to change these variables, but it still loads every 72 frames. I don't think the frame buffer size is the issue here, though, as each image we are using is at most 1080p, and even uncompressed RGB images at that size will not reach 2GB of memory in 72 frames. I am assuming that, since this is JS, it is on the client end; is there anywhere on the server side I can look to increase performance and get a more stable throughput? I have looked in cvat/settings/base.py and changed the
And that made no difference either; it is still loading every 72 frames. I am, however, starting to think it might be related to a hard limit on frames rather than a file size limitation. What makes me say this is that regardless of whether the images are uploaded with 70% compression or not, and whether they are 1080p, 720p or even smaller, it still loads every 72 frames. Thank you for taking the time to look at this.
@mrKallah Hi, you can set the chunk size in the task constructor: https://openvinotoolkit.github.io/cvat/docs/manual/basics/creating_an_annotation_task/#chunk-size. The default value for 1080p and lower resolutions is 36 frames, and it seems that in your case the browser only has time to download and decode 2 chunks during playback, so it must wait for new frames to be ready.
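To spell out the arithmetic this implies (purely as an illustration, not CVAT's actual internals): if the browser keeps roughly two decoded chunks ahead, playback stalls about every chunk_size × 2 frames.

```python
# Illustrative arithmetic only -- the "two decoded chunks ahead" figure comes
# from the comment above, not from CVAT source code.
def stall_interval(chunk_size: int, buffered_chunks: int = 2) -> int:
    """Approximate number of frames between playback stalls."""
    return chunk_size * buffered_chunks

print(stall_interval(36))   # default chunk size -> pauses roughly every 72 frames
print(stall_interval(128))  # chunk size 128     -> pauses roughly every 256 frames
```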
Thank you, increasing this value has helped significantly. I set the value to 128 and now the loading only happens every 256 frames!
@azhavoro, just one more question. I have been testing some more, and overall it's far better with the newer settings; however, I am still experiencing some stops and buffering, as well as some frame drops when fast-forwarding right after a buffer. Is there any way to give more system resources to CVAT? I max out at 50% CPU usage when fast-forwarding, and about 50% of RAM too. The server is hosted on the same machine as the client, and even if it were over the network, I'm on a 1GB up and down connection and everything is installed on M.2 drives, including the storage of the files and the server software. I can't see what the bottleneck is, so I thought maybe there is something I can do to give CVAT more resources? Thanks again for your help so far!
Here you can see that the docker stats command shows the image drawing 100%, but the overall usage for python3 is only around 6%, which is one core. Is there any way to get multi-threading to work?
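As a rough cross-check of the single-core picture from inside the container, a generic per-process CPU sample could be taken with something like the sketch below (a hedged example, assuming psutil is installed; this is not a CVAT tool).

```python
# Hedged sketch: sample per-process CPU usage to see whether work is pinned to
# a single core. Assumes psutil is available (pip install psutil).
import time
import psutil

procs = list(psutil.process_iter(['name']))
for p in procs:
    try:
        p.cpu_percent(None)          # prime the per-process counters
    except psutil.NoSuchProcess:
        pass

time.sleep(1.0)                      # sampling window

usage = []
for p in procs:
    try:
        usage.append((p.cpu_percent(None), p.info['name']))
    except psutil.NoSuchProcess:
        pass

# On a multi-core host, one process sitting near 100% suggests a
# single-threaded bottleneck rather than a shortage of overall CPU.
for cpu, name in sorted(usage, reverse=True)[:5]:
    print(f"{name:<20} {cpu:5.1f}%")
```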
Probably need to have an easy way to configure these parameters. Need to look at the streaming pipeline one more time. Probably there is room for performance optimization and improving the UX.
So, I believe we can close the issue.
My actions before raising this issue
I am trying to increase the number of buffered frames on a local server run for multi-user annotation. I've tried to find this in the docs and I've asked on Gitter, but with no luck. The reason I need to increase the buffer is that there are usually ~1k-10k frames between the end of one annotation and the start of the next; the data I am working with is mostly without the subject I'm trying to detect. It would significantly increase my productivity if the buffer handling were improved.
Expected Behaviour
The server should have a setting to change the number of frames buffered, so that if you are running a dedicated annotation server, you can use it however you wish.
When the client goes to the next frame, the server should start to send the next buffered frame to the client.
Personally I'd like it if the server would work at 100% CPU when loading frames to minimize the time spent waiting for the server to load frames.
Current Behaviour
For my use case, the server is idle until the client runs out of frames; it then loads frames using ~25% of the CPU (100% of one core) and goes idle again until the client's buffer is empty. I also cannot find a setting to change the number of frames buffered.
Possible Solution
Short Term - Add a setting for the number of buffered frames.
Long Term - Rewrite the way image buffering is handled and allow multithreaded frame buffering to exploit parallelism (see the sketch below).
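As an illustration of the long-term idea only (a minimal sketch, not CVAT's actual pipeline; decode_chunk is a hypothetical placeholder), chunk preparation could be fanned out over a thread pool so that several chunks are decoded ahead of the client in parallel:

```python
# Minimal sketch of parallel chunk prefetching. decode_chunk() is a
# hypothetical stand-in for whatever reads and encodes one chunk of frames;
# it is not a real CVAT function.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 36       # frames per chunk (CVAT's default for <=1080p)
PREFETCH_CHUNKS = 8   # how many chunks to prepare ahead of the client

def decode_chunk(chunk_index: int) -> bytes:
    """Placeholder: read and decode CHUNK_SIZE frames for this chunk."""
    return b""  # stand-in payload

def prefetch(current_chunk: int, pool: ThreadPoolExecutor) -> dict:
    """Start decoding the next few chunks in parallel, so the client never
    waits for a chunk that could already have been prepared."""
    return {
        i: pool.submit(decode_chunk, i)
        for i in range(current_chunk + 1, current_chunk + 1 + PREFETCH_CHUNKS)
    }

# Usage: as the client advances, serve pending[i].result() and top up the map.
with ThreadPoolExecutor(max_workers=4) as pool:
    pending = prefetch(current_chunk=0, pool=pool)
```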
Steps to Reproduce (for bugs)
Context
This issue drastically decreases the productivity of our team when annotating our dataset. We are trying to annotate rare occurrences in videos that are usually 1.5 h long, with sometimes fewer than 5 separate annotation tracks. The video needs to be relatively high resolution due to the nature of what we are trying to detect, so we are stuck spending a lot of time waiting for the machine to load the next frames.
Your Environment
Git hash commit (`git log -1`): e8b3284
Docker version (e.g. Docker 17.0.05): 20.10.9
docker-compose.override.yml
docker-compose.yml
Logs from `cvat` container
Next steps
You may join our Gitter channel for community support.