Experimental speech streaming for LMNT (useChat/useCompletion React) #922
Conversation
This is exciting
This is awesome!
@lgrammel Hi Lars, I tried to test this locally with no luck; it is showing this error:
I went to node_modules/ai and I see the function there, so I'm not sure if I need to do anything else. (I cloned the fork, checked out the branch, and ran the example.) Is it ready to test? Thanks!
Have you rebuilt the ai package? The easiest way is to just rebuild the whole repository.
That did the trick, thank you! I was doing … It works really, really fast. I hope we can get this merged very soon.
Hi @MaxLeiter! Did you have a chance to take a look?
Hello @lgrammel, I saw that you changed from ElevenLabs to LMNT. Is there a technical reason for this? ElevenLabs supports multiple languages, and LMNT still has no plans to launch this. Wouldn't it be interesting to keep both options? Thank you, and congratulations on the excellent work.
Thanks. We want to use the official ElevenLabs Node SDK, but it does not support duplex streaming yet: elevenlabs/elevenlabs-js#4. In the meantime, you could use the ModelFusion ElevenLabs integration with the adapter that I had in an earlier version of this PR.
@lgrammel Hi! I can't find the example app for speech streaming in the Vercel AI SDK repo. Where has it gone?
This feature has not been merged yet.
Hi @MaxLeiter, can you merge this?
Hi @MaxLeiter, can you please approve this?
bump
We could really use this as well 🙏 Thank you so much for the work on this.
import Speech from 'lmnt-node'; // assumed import for this fragment; not shown in the diff excerpt
const speech = new Speech(process.env.LMNT_API_KEY || 'no key');
// Note: The LMNT SDK does not work on edge yet (as of v1.1.2)
// export const runtime = 'edge';
Hi @lgrammel, FYI @kaikato just merged a short README in lmnt-node describing how we got this working. If you see an even better way, let us know, but with the one change to the next.config.js file it should work with edge: lmnt-com/lmnt-node#32
I thought to add that when we were hacking on vercel/ai-chatbot#151 I did see issues that looked like a challenge re: websockets staying alive on edge, so for deployment I switched to nodejs and didn't look further at the time. You can see the deployment-focused work I did atop that PR here: https://github.com/shaper/lmnt-ai-chatbot/commits/main/
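For reference, opting a route out of the edge runtime is a one-line segment config in the Next.js App Router. This is a generic sketch (the file path is illustrative), not code from the PR:

```ts
// app/api/completion/route.ts (hypothetical path)
// Pin this route to the Node.js runtime so WebSocket-based SDKs like lmnt-node
// can keep long-lived connections open instead of running on edge.
export const runtime = 'nodejs';
```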
@lgrammel @MaxLeiter When will TTS come?
Any reason why this is closed? TTS is a great feature to have.
Would be cool to see TTS added with the addition of gpt-4o.
@lgrammel / @MaxLeiter
We are very excited about this!
Summary
Adds speech streaming to useChat and useCompletion with streamData.
- useCompletion & useChat (for React) provide an experimental_speechUrl that can be used with HTML audio elements (sketched below)
- experimental_forwardLmntSpeechStream: forward an LMNT speech stream into the data stream (see the route sketch after the notes)
- streamData.experimental_appendSpeech: add speech stream chunks to the data stream (used automatically through the forward functions)
- examples/next-lmnt: LMNT completion & chat speech streaming using experimental_forwardLmntSpeechStream
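A minimal client-side sketch of the first point, assuming the experimental_speechUrl field is returned by useCompletion as described above; the hook options and route path are illustrative, and the exact field shape may change while the feature is experimental:

```tsx
'use client';

import { useCompletion } from 'ai/react';

export default function Page() {
  // experimental_speechUrl is the field described in this PR's summary.
  const {
    completion,
    input,
    handleInputChange,
    handleSubmit,
    experimental_speechUrl,
  } = useCompletion({ api: '/api/completion' });

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} placeholder="Say something..." />
      <p>{completion}</p>
      {/* Play the streamed speech once a URL is available */}
      {experimental_speechUrl && <audio src={experimental_speechUrl} autoPlay controls />}
    </form>
  );
}
```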
Notes
- The LMNT SDK does not work on edge yet (as of v1.1.2)
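To make the server side concrete, here is a rough Node.js route sketch of how the pieces above might fit together. It is not the code from this PR: the call shape of experimental_forwardLmntSpeechStream is a guess, and the lmnt-node streaming methods (synthesizeStreaming, appendText, finish) follow the LMNT docs rather than this diff.

```ts
// app/api/completion/route.ts -- hypothetical sketch, not the PR's actual example
import OpenAI from 'openai';
import Speech from 'lmnt-node';
import {
  OpenAIStream,
  StreamingTextResponse,
  experimental_StreamData,
  // Added by this PR; exact signature assumed here for illustration only.
  experimental_forwardLmntSpeechStream,
} from 'ai';

// The LMNT SDK does not work on edge yet (as of v1.1.2), so this route stays on Node.js.
export const runtime = 'nodejs';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const speech = new Speech(process.env.LMNT_API_KEY || 'no key');

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages: [{ role: 'user', content: prompt }],
  });

  // Open a duplex speech stream (method names per the lmnt-node docs; treat as assumptions).
  const speechStream = speech.synthesizeStreaming('mara', {});

  const data = new experimental_StreamData();
  const stream = OpenAIStream(response, {
    onToken(token) {
      speechStream.appendText(token); // feed completion text to LMNT as it arrives
    },
    onFinal() {
      speechStream.finish(); // no more text; let LMNT flush the remaining audio
    },
    experimental_streamData: true,
  });

  // Hypothetical call: forward LMNT audio chunks into the data stream
  // (internally via streamData.experimental_appendSpeech) and close it when done.
  experimental_forwardLmntSpeechStream(speechStream, data);

  return new StreamingTextResponse(stream, {}, data);
}
```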