Issue Deploying Pipeline with LLMs in It #38
Comments
Somewhat similar issue with another component, ChatPromptBuilder: #36
@indranilr please try to remove The error I personally used instead of deploy_utils
In other words, I disregarded completely
@EdIzaguirre this should be fixed by #41 (merged). Can you confirm? Thanks!
@EdIzaguirre closing this, I've tested with the same pipeline and now everything is working using
Hello,
I'm encountering an issue when trying to deploy a RAG (Retrieval-Augmented Generation) application using Hayhooks and the Haystack framework. I have two pipelines:
`postgres_indexing` and `postgres_query`. The `postgres_indexing` pipeline deploys without any problems, but when I attempt to deploy the `postgres_query` pipeline and then view the Swagger documentation, I receive the following error:

```
pydantic.errors.PydanticInvalidForJsonSchema: Cannot generate a JsonSchema for core_schema.CallableSchema
```
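For context, this error comes straight from pydantic: it is raised whenever a JSON schema is requested for a model field typed as a callable. A minimal standalone sketch of the same failure (my own illustration, not hayhooks code; the `RunRequest` model is hypothetical):

```python
# Minimal reproduction of the error above (hypothetical model, not hayhooks code).
from typing import Callable, Optional

from pydantic import BaseModel


class RunRequest(BaseModel):
    prompt: str
    # A callable field has no JSON-schema representation.
    streaming_callback: Optional[Callable] = None


# Raises pydantic.errors.PydanticInvalidForJsonSchema:
# "Cannot generate a JsonSchema for core_schema.CallableSchema"
RunRequest.model_json_schema()
```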
Detailed Error Traceback:
I initially thought the issue was with the `streaming_callback` in the `OpenAIGenerator` component (`llm`), so I removed it entirely from both the code and the YAML configuration. For reference, here is my YAML file:
and here is my Python code that generated it:
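In outline, the script builds the pipeline roughly like this (a simplified sketch, not my exact code; the pgvector document store, the prompt template, and the file name are illustrative assumptions):

```python
# Simplified sketch of the query pipeline (illustrative; the pgvector store,
# prompt template, and file name are assumptions, not the exact setup).
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.embedders import OpenAITextEmbedder
from haystack.components.generators import OpenAIGenerator
from haystack_integrations.components.retrievers.pgvector import PgvectorEmbeddingRetriever
from haystack_integrations.document_stores.pgvector import PgvectorDocumentStore

document_store = PgvectorDocumentStore(table_name="haystack_documents")

template = """
Answer the question using the documents below.
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
Question: {{ question }}
Answer:
"""

query_pipeline = Pipeline()
query_pipeline.add_component("text_embedder", OpenAITextEmbedder())
query_pipeline.add_component("retriever", PgvectorEmbeddingRetriever(document_store=document_store))
query_pipeline.add_component("prompt_builder", PromptBuilder(template=template))
# streaming_callback is omitted here; a callable in the serialized parameters
# is my main suspect for the schema error.
query_pipeline.add_component("llm", OpenAIGenerator())

query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
query_pipeline.connect("retriever.documents", "prompt_builder.documents")
query_pipeline.connect("prompt_builder.prompt", "llm.prompt")

# Serialize to the YAML that gets deployed with hayhooks.
with open("postgres_query.yml", "w") as f:
    query_pipeline.dump(f)
```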
What is going on? Why am I having trouble with the querying pipeline? I've narrowed it down to the LLM: when I remove this component from the pipeline, the deployment goes smoothly. I appreciate the help in advance.