
Piping responses to create a chain of prompt execution #4

Open
varghese-ascalonic opened this issue Apr 7, 2024 · 0 comments
varghese-ascalonic commented Apr 7, 2024

A major feature: as a user, I should be able to create multiple prompt files where the response of one prompt is piped into the input of another prompt, and into another, and so on.

Here is an example of the intended prompt system: one prompt generates a motivational quote, and its output is passed into a second prompt that verifies whether the quote is genuine.

quote.prompt

inputs:
  - type: scalar
    name: author
engine: gpt-4-turbo-preview
role: You are a helpful assistant designed to output motivational quotes
prompt: "Generate a motivational quote by {{author}}"

output: motivationalQuote #store the output in a variable

askNext: verify.prompt #specify the next prompt file to execute

verify.prompt

inputs:
  - type: scalar
    name: motivationalQuote #this prompt should be able to access the global variables

role: You are a helpful assistant designed to verify a quote was said by the author or not
prompt: "Was {{motivationalQuote}} really said by {{author}}? Output yes or no"

output: motivationalQuote #we pass this from the prompt so that it can be received by the application

- type: response
  expected: [ yes ]
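To make the control flow concrete, here is a minimal Python sketch of the chaining loop this feature implies. It is not the real implementation: prompt files are represented as in-memory dicts instead of parsed .prompt files, and `call_model`, `render`, and `run_chain` are hypothetical names; `call_model` is a stub standing in for the actual engine call.

```python
def call_model(engine, role, prompt):
    # Stub: a real implementation would call the LLM API here.
    return f"[{engine}] {prompt}"

def render(template, variables):
    # Minimal {{name}} substitution (assumed template syntax).
    out = template
    for name, value in variables.items():
        out = out.replace("{{" + name + "}}", str(value))
    return out

def run_chain(prompts, entry, variables):
    """Execute prompt specs in sequence, piping each output into
    a shared variable scope that later prompts can read."""
    current = entry
    while current is not None:
        spec = prompts[current]
        text = render(spec["prompt"], variables)
        response = call_model(spec.get("engine", "default"),
                              spec.get("role", ""), text)
        # Store the response under the declared output name so the
        # next prompt (and the application) can access it.
        variables[spec["output"]] = response
        # askNext names the next prompt file; absence ends the chain.
        current = spec.get("askNext")
    return variables
```

With the two example prompts above, `run_chain(prompts, "quote.prompt", {"author": "Seneca"})` would run quote.prompt, store its response as `motivationalQuote`, then run verify.prompt against that variable.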

This is a huge, invasive feature and will need serious modification to the code and tests.
