
Productionize Soccer Stream Detection #2394

Closed
thomshutt opened this issue May 10, 2022 · 4 comments
@thomshutt
Contributor

No description provided.

@github-actions github-actions bot added the status: triage this issue has not been evaluated yet label May 10, 2022
@thomshutt thomshutt added Epic and removed status: triage this issue has not been evaluated yet labels May 10, 2022
@cyberj0g cyberj0g self-assigned this May 12, 2022
@cyberj0g
Contributor

cyberj0g commented Nov 23, 2022

Done once metrics PR is merged.
To have scene classification on production and staging:

  1. Run the Bs with -metricsPerStream and -detectContent flags
  2. Pass the stream configuration from Studio or the API:
    Livepeer-Transcode-Configuration: {"detection": {"freq": 2, "sampleRate": 10, "sceneClassification": [{"name": "soccer"},{"name": "adult"}]}}
  3. Update the sample dashboard with the query:
    sort_desc(avg by(manifest_id) (livepeer_segment_scene_class_prob{seg_class_name="soccer"} > 0.5))
  4. Configure alerts based on the above (to be addressed)
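The configuration header from step 2 can be sketched as follows; only the header name and the JSON shape come from the comment above, and the `freq`/`sampleRate` values simply mirror that example, so treat the rest as an illustrative assumption:

```python
import json

# Detection configuration as described in step 2 above.
detection_config = {
    "detection": {
        "freq": 2,
        "sampleRate": 10,
        "sceneClassification": [
            {"name": "soccer"},
            {"name": "adult"},
        ],
    }
}

# The broadcaster receives this as an HTTP header on the stream request.
headers = {"Livepeer-Transcode-Configuration": json.dumps(detection_config)}
print(headers["Livepeer-Transcode-Configuration"])
```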

@cyberj0g
Contributor

Scene classification is deployed to RKV region.
PR that adds the missing dependencies to the Docker images: #2695
Infra PR: https://github.com/livepeer/livepeer-infra/pull/1134

@cyberj0g
Contributor

When planning to enable content detection for all streams processed on GPUs, we must consider the increase in video memory consumption. We have already researched this, but it's worth briefly summarizing again here.

Without content detection

One transcoding session without content detection consumes about 222 MB of VRAM, and that figure is uniform across sessions. Therefore, to get the maximum number of transcoding sessions dictated by video memory, we can simply divide the VRAM amount by the per-session consumption. For an 8 GB card, that gives roughly 36 sessions, which is usually above the transcoding performance bottleneck.
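As a quick check of the arithmetic above (figures taken from the comment):

```python
# VRAM budget without content detection, per the figures above.
VRAM_MB = 8 * 1024       # 8 GB card
PER_SESSION_MB = 222     # one transcoding session

max_sessions = VRAM_MB // PER_SESSION_MB
print(max_sessions)  # 36
```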

With content detection

With content detection enabled, 2400 MB of VRAM is allocated for the CUDA and cuDNN runtime libraries and then shared among all CUDA sessions. There's no known way of reducing that amount. Each transcoding session additionally loads the content detection model itself, which adds 350 MB of VRAM. Thus, an 8 GB card will be able to run just 16 transcoding sessions, so we may be at risk of hitting OOM errors more frequently if the -maxSessions parameter doesn't account for that with regard to the pod hardware.
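The 16-session figure follows from dividing the VRAM left after the shared runtime reservation by the 350 MB per-session model footprint. Note, as a caveat I'm adding here, that if the 222 MB base transcoding cost also applies per session, the real ceiling would be lower:

```python
VRAM_MB = 8 * 1024               # 8 GB card
RUNTIME_MB = 2400                # CUDA/cuDNN libraries, shared once per GPU
DETECTION_PER_SESSION_MB = 350   # content detection model, per session

# Matches the ~16 sessions cited above.
max_sessions = (VRAM_MB - RUNTIME_MB) // DETECTION_PER_SESSION_MB
print(max_sessions)  # 16

# If the 222 MB base transcoding footprint also counts per session,
# the ceiling drops further.
conservative = (VRAM_MB - RUNTIME_MB) // (DETECTION_PER_SESSION_MB + 222)
print(conservative)  # 10
```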

@cyberj0g
Contributor

cyberj0g commented Feb 1, 2023

Ready to go. Requires -detectContent on OTs and -metricsPerStream on Bs to identify streams in the monitoring dashboard.

@cyberj0g cyberj0g closed this as completed Feb 1, 2023