is mps supported on whisper? #984
Unanswered
khalid4294 asked this question in Q&A
Replies: 2 comments 7 replies
-
Not an mps user myself, but here is a discussion of a similar unresolved issue over in PyTorch, FYI.
-
Hey! You should be able to use mps with the Hugging Face implementation:

```python
import torch
from transformers import pipeline
from datasets import load_dataset

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-tiny",
    chunk_length_s=30,
    device=torch.device("mps"),
)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]

prediction = pipe(sample.copy())["text"]

# we can also return timestamps for the predictions
prediction = pipe(sample, return_timestamps=True)["chunks"]
```
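If constructing the pipeline directly on `"mps"` errors out on your machine, a minimal device-selection sketch (assuming PyTorch ≥ 1.12, where `torch.backends.mps.is_available()` exists) that falls back to CPU:

```python
import torch

# Prefer MPS when the backend is available; otherwise fall back to CPU.
# This only chooses a device; pass the result as `device=` to the pipeline.
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(device.type)
```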
-
Hey!
I've tried using Whisper with device=mps with no luck; I ran into several issues and couldn't find anything helpful online. Any other operation runs fine on mps, but with Whisper I get different errors. Here's my implementation:
```python
model = whisper.load_model("tiny.en", device="mps")
segment = model.transcribe(vid_url, fp16=False)
```
And I get these different errors:

```
resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Error: buffer is not large enough. Must be 19200 bytes
```
Can you offer any guidance on this?
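One way to narrow this down is to confirm the MPS backend itself works before suspecting Whisper. A minimal smoke test (pure PyTorch, no Whisper; `mps_smoke_test` is just an illustrative helper name):

```python
import torch

def mps_smoke_test():
    # Returns True only if a basic matmul runs on the MPS device end to end.
    if not torch.backends.mps.is_available():
        return False
    try:
        x = torch.ones((4, 4), device="mps")
        # A 4x4 matmul of ones yields a matrix of 4s, whose sum is 64.
        return (x @ x).sum().item() == 64.0
    except RuntimeError:
        return False

print(mps_smoke_test())
```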