Much like visual and auditory sequences, it would be helpful if Remotion also exported a time-stamped subtitle file, allowing for non-baked subtitles.
Excuse me if this is already implemented or on the roadmap, but I didn't find anything when searching, and I'm quite new to using Remotion (but very excited).
Feature Request 🛍️
For instance, for any audio component I add to play from point x to point y, a complementary subtitle and caption file could also be generated to play along with the audio. This would maintain the modular, declarative/programmatic patterns that Remotion offers.
Use Case
Accessibility for the deaf and hard of hearing, of course, but also mobile users who have sound turned off, and cases where someone is speaking with a thick accent or in noisy conditions that may be hard to understand.
Possible Solution
Without having looked into the source code or used Remotion much, I would expect a <Subtitle> and/or <Caption> component with an API that includes the text as well as the timing. I would expect many other API options could be exposed as well.
Here's more info about the WebVTT API: https://developer.mozilla.org/en-US/docs/Web/Guide/Audio_and_video_delivery/Adding_captions_and_subtitles_to_HTML5_video. I would think that each composition would hold some state tracking all the <Subtitle> or <Caption> text/timestamps/voices/etc. listed in its children. That way we would get a per-composition VTT file to add to the player HTML.
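To make this concrete, here is a rough sketch of how such an API might be used. The <Subtitle> component, its props, and the bottom-of-frame rendering are purely hypothetical and not existing Remotion API; only Audio, Sequence, and useVideoConfig are real imports.

```tsx
import React from 'react';
import {Audio, Sequence, useVideoConfig} from 'remotion';

// Hypothetical props: the cue text plus timing expressed in frames,
// mirroring how <Sequence> already takes `from` and `durationInFrames`.
type SubtitleProps = {
  text: string;
  from: number;
  durationInFrames: number;
};

// User-land stand-in for the proposed <Subtitle> component: show the text
// while the cue is active. A built-in version could additionally register
// the cue so the renderer can emit a .vtt file alongside the video.
const Subtitle: React.FC<SubtitleProps> = ({text, from, durationInFrames}) => {
  return (
    <Sequence from={from} durationInFrames={durationInFrames}>
      <div style={{position: 'absolute', bottom: 40, width: '100%', textAlign: 'center'}}>
        {text}
      </div>
    </Sequence>
  );
};

// Usage inside a composition: the audio and its captions share the same timing.
export const MyScene: React.FC = () => {
  const {fps} = useVideoConfig();
  return (
    <>
      <Audio src="https://example.com/narration.mp3" />
      <Subtitle text="Welcome to the video" from={0} durationInFrames={3 * fps} />
      <Subtitle text="Here is the next line" from={3 * fps} durationInFrames={2 * fps} />
    </>
  );
};
```

The collected cues could then be written out as a standard per-composition WebVTT file, e.g.:

```
WEBVTT

00:00:00.000 --> 00:00:03.000
Welcome to the video

00:00:03.000 --> 00:00:05.000
Here is the next line
```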
Thanks a lot for opening the request and I totally see this feature on our roadmap too! 😁
As I am working on releasing Lambda, I will not immediately tackle subtitles, so any help is appreciated on this! I can help with implementation questions. I also think a component that you can place anywhere would be a cool API, one that we pick up during rendering, similarly to how we pick up Audio tags, and then use FFmpeg to bake in the subtitles or emit a subtitle file. I see you are already on Discord, so if you'd like to work on it, we can have a chat there! 🙂
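For the non-baked route, the emitted per-composition file could also just be muxed into the output as a separate subtitle track rather than burned in. As a rough sketch (assuming a captions.vtt next to the render; this is not something Remotion does today), the FFmpeg invocation could look something like:

```
ffmpeg -i out.mp4 -i captions.vtt -c:v copy -c:a copy -c:s mov_text out-with-subs.mp4
```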
I am linking a few related issues for reference too: #16 and #356