Proven experience working with deep learning frameworks such as PyTorch, particularly in implementing attention mechanisms and optimising model performance.
1. **Wait for Review:** Our team will review expressions of interest and select the best candidate.
2. **Get Assigned:** If selected, we'll contact you and assign the bounty to you.
3. **Start Working:** Dive into your task! If you need assistance or guidance, join the discussions in the `#developer-lounge` channel on our [Discord server](https://discord.gg/livepeer).
4. **Submit Your Work:** Create a pull request in the relevant repository and request a review.
5. **Notify Us:** Ping us on Discord when your pull request is ready for review.
6. **Receive Your Bounty:** We'll arrange the bounty payment once your pull request is approved.
## Overview
We have identified an opportunity to improve the current
[audio-to-text](https://github.com/livepeer/go-livepeer/pull/3078/)
pipeline on the Livepeer AI Network by enabling [flash-attention](https://arxiv.org/abs/2307.08691/), which will speed up the pipeline significantly, allowing faster, near-real-time operation. We are seeking the support of the community and bounty hunters to implement this optimisation quickly so it can be available to developers working with Livepeer.

## Problem
Implementing improved `flash_attention` support in the `audio-to-text` models on the Livepeer AI Network.

## Desired Solution
Improved speed of model execution for the `audio-to-text` pipeline.

## Bounty Requirements
## Applicant Requirements
## Scope Exclusions
## Implementation Tips
`audio-to-text` pipeline.

## Additional Resources
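As a starting point, here is a minimal sketch of how flash-attention is commonly enabled for a Whisper-style speech-to-text model. It assumes the pipeline loads its model through Hugging Face Transformers' `attn_implementation` argument and that the `flash-attn` package is installed on a supported GPU; the model id is illustrative, not necessarily what the pipeline ships, so check the pipeline's actual model-loading code before adapting this.

```python
# Sketch: prefer FlashAttention-2 when available, otherwise fall back to
# PyTorch's scaled-dot-product attention ("sdpa").
# Assumptions (not confirmed by this issue): the audio-to-text pipeline uses
# Hugging Face Transformers, and "openai/whisper-large-v3" is a placeholder id.
import importlib.util


def pick_attn_implementation() -> str:
    """Return "flash_attention_2" if the flash-attn package is installed,
    else the "sdpa" fallback that needs no extra dependency."""
    if importlib.util.find_spec("flash_attn") is not None:
        return "flash_attention_2"
    return "sdpa"


def load_whisper(model_id: str = "openai/whisper-large-v3"):
    """Load the model with the fastest attention kernel available."""
    import torch
    from transformers import WhisperForConditionalGeneration

    return WhisperForConditionalGeneration.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # flash-attn requires fp16/bf16 tensors
        attn_implementation=pick_attn_implementation(),
    )
```

The fallback matters because `flash-attn` only builds on certain CUDA GPUs; shipping the `"sdpa"` path keeps the pipeline working on hosts where the faster kernel is unavailable.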
## How to Apply
Express your interest in the `#developer-lounge` channel on our [Discord server](https://discord.gg/livepeer).

## Contact Information
For questions or clarifications, please contact: [hans@livepeer.org](mailto:hans@livepeer.org)