streaming #28927
-
I found a similar unsolved discussion about issues with streaming in FastAPI, which might be relevant to your query. You can check it out here [1]. For implementing streaming responses in FastAPI using LangChain, you can use the `ChatOllama` model together with FastAPI's `StreamingResponse`:

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain_ollama import ChatOllama

app = FastAPI()

# Instantiate the chat model
llm = ChatOllama(
    model="llama3",
    temperature=0.8,
    num_predict=256,
    # other params ...
)

@app.get("/stream")
async def stream_response():
    messages = [("human", "Return the words Hello World!")]

    async def event_generator():
        # astream yields AIMessageChunk objects; forward only the text content,
        # since StreamingResponse expects str or bytes
        async for chunk in llm.astream(messages):
            yield chunk.content

    return StreamingResponse(event_generator(), media_type="text/event-stream")
```

This code sets up a FastAPI endpoint that streams the model's output token by token using the `text/event-stream` media type.
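For reference, here is a minimal sketch of how a client could consume that endpoint. The base URL `http://localhost:8000` and serving the app with uvicorn are assumptions, not part of the answer above:

```python
import httpx

# Stream the /stream endpoint and print text as it arrives.
# Assumes the FastAPI app above is running locally, e.g. `uvicorn main:app`.
with httpx.stream("GET", "http://localhost:8000/stream") as response:
    for text in response.iter_text():
        print(text, end="", flush=True)
```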
-
Write code for streaming a response in FastAPI using LangGraph.
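A minimal sketch of what this could look like, assuming a recent LangGraph release where `stream_mode="messages"` is available; the single-node graph and the `llama3` model here are illustrative assumptions, not a confirmed answer from the thread:

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, MessagesState, START, END

app = FastAPI()
llm = ChatOllama(model="llama3", temperature=0.8)

# Single-node graph that just calls the chat model
def call_model(state: MessagesState):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()

@app.get("/stream-graph")
async def stream_graph():
    inputs = {"messages": [("human", "Return the words Hello World!")]}

    async def event_generator():
        # stream_mode="messages" yields (message_chunk, metadata) tuples
        # as LLM tokens are produced inside the graph
        async for msg_chunk, metadata in graph.astream(inputs, stream_mode="messages"):
            if msg_chunk.content:
                yield msg_chunk.content

    return StreamingResponse(event_generator(), media_type="text/event-stream")
```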