Adds support for memory and prompt passing to a ChatConversationalAgent #374
Conversation
If that's the case, this should be the interface we promote, correct? If so, maybe let's update the examples to reflect that.
```typescript
  humanMessage = SUFFIX,
  outputParser = new AgentOutputParser(),
} = args ?? {};
const systemMessage =
```
Why is the prefix/suffix helpful here? It's just the same as system/human, right? Seems confusing to have two with the same name.
The comment block above that line may be wrong (maybe copy-pasted from elsewhere?), but it implied that `args` should contain:

* @param args.suffix - String to put after the list of tools.
* @param args.prefix - String to put before the list of tools.

I'm a bit new to LangChain's/prompt terminology in general, so prefix/suffix seemed clearer, but I can update the comment block and change the parameter names to `systemMessage` and `humanMessage` instead if that's preferred. Will update it tomorrow!
Now that I've been playing with LangChain for a bit longer, I feel like we should deprecate these `systemMessage` and `humanMessage` properties to be in line with the other agents, or deprecate `prefix` and `suffix` on the other agents, to avoid confusion here. I changed it back to restart a discussion.
Yes, the existing method had the following signature:

```typescript
export const initializeAgentExecutor = async (
  tools: Tool[],
  llm: BaseLanguageModel,
  agentType = "zero-shot-react-description",
  verbose = false,
  callbackManager: CallbackManager = getCallbackManager()
): Promise<AgentExecutor>
```

And tacking on more parameters at the end seemed like the wrong approach. I can update this PR with some docs updates tomorrow!
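The motivation for moving from positional parameters to an options object can be sketched in isolation. This is a minimal illustration with hypothetical stub types and function names, not the real langchain signatures:

```typescript
// Hypothetical stand-ins for the real Tool / BaseLanguageModel types.
type Tool = { name: string };
type BaseLanguageModel = { modelName: string };

// Old style: every new setting extends the positional list, and callers
// must pass all preceding defaults just to reach the one they care about.
function initLegacy(
  tools: Tool[],
  llm: BaseLanguageModel,
  agentType = "zero-shot-react-description",
  verbose = false
): string {
  return `${agentType}:${verbose}`;
}

// New style: an options bag lets callers set only what they need, and new
// fields can be added later without breaking existing call sites.
interface InitOptions {
  agentType?: string;
  verbose?: boolean;
  returnIntermediateSteps?: boolean;
}

function initWithOptions(
  tools: Tool[],
  llm: BaseLanguageModel,
  options: InitOptions = {}
): string {
  const {
    agentType = "zero-shot-react-description",
    verbose = false,
    returnIntermediateSteps = false,
  } = options;
  return `${agentType}:${verbose}:${returnIntermediateSteps}`;
}
```

With the options form, a caller who only wants `returnIntermediateSteps` does not have to restate `agentType` and `verbose` first.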
Hi! I'm looking for a way to build a chat agent that can use tools, with memory backed by a database. Thanks!
…e arguments to createPrompt method in ChatConversationalAgent
@hwchase17 I've updated the docs and changed the parameter names as requested, and found out how to get rid of the ugly @GautierT I also updated the
Ok, merged in I'm also very open to suggestions if someone has a better name than
The formatting check is passing on my end; is it possible to rerun it? It might be a flake, or have run on a previous commit.
If you look closely, it says you need to run prettier on
I was trying it with
If I do yarn
Thanks @jacoblee93! Overall LGTM; just a small comment, and the formatting needs a fix.
…feature_custom_prompt_agent
Ah, yes! Fixed. Also fixed a typo I noticed on the Zapier docs page.
Strangely, pushing this unrelated docs fix triggered the CI again, and it passed this time without issue 🤷.
Thanks for adding this in @jacoblee93. I spoke with @nfcampos and @hwchase17, and we think it makes the most sense to hold off on merging this for a few days; we're actively making some changes to the callback manager which will affect this PR. Also, it would be good to deprecate the existing method to avoid having multiple ways to construct an agent. We can take over from here. Appreciate it!
Ok, sounds good!
langchain/src/agents/initialize.ts
Outdated
```typescript
  tools: Tool[],
  llm: BaseLanguageModel,
  options: {
    agentType?: string;
```
Add returnIntermediateSteps
langchain/src/agents/initialize.ts
Outdated
```typescript
    prefix: options.prompt,
  }),
  tools,
  returnIntermediateSteps: true,
```
Make the default false
…feature_custom_prompt_agent
…ions options, change prompt option to prefix, remove memory default
```diff
  systemMessage?: string;
- /** String to put before the list of tools. */
+ /** DEPRECATED: String to put before the list of tools. */
  humanMessage?: string;
```
I opted to deprecate these properties to be in line with the other agents - all prompt args should probably inherit from the same type anyway, no?
So these have different arg names because they're designed to work with chat models rather than string LLMs like the other agents. Hence the difference. Thoughts?
Ah ok, understood, thanks for the context. What would you think of subclassing this new type of agent into a `ChatAgent`? We could also maybe make `memoryKey` configurable to remove the hidden `chat_history` dependency?

I'll revert it to the way it was; we could also scope that/some other solution to a future PR.
Added a bit more discussion here: #374 (comment)
langchain/src/agents/initialize.ts
Outdated
```typescript
  returnIntermediateSteps,
  verbose,
  callbackManager,
  memory: options.memory,
```
@nfcampos I removed the memory default here to avoid a special case, but I don't love the fact that we need to initialize memory with specific options like `memoryKey: "chat_history"`. Would it make sense to make a new wrapper memory class for use with this agent, with defaults that work with it? Or even just change the default memory key in `BufferMemory` to `chat_history`?
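The coupling being discussed can be shown with a tiny self-contained sketch. These are hypothetical stand-ins for langchain's real classes, just to illustrate why the memory's key must match the variable name the agent's prompt template expects:

```typescript
// Minimal sketch (not langchain's real BufferMemory) of a memory object
// whose stored messages are exposed under a configurable key.
class SketchBufferMemory {
  private messages: string[] = [];
  constructor(private memoryKey = "history") {}

  add(message: string): void {
    this.messages.push(message);
  }

  // Mirrors the shape of loadMemoryVariables: { [memoryKey]: messages }.
  loadMemoryVariables(): Record<string, string[]> {
    return { [this.memoryKey]: this.messages };
  }
}

// The conversational agent's prompt reads this exact variable name.
const AGENT_MEMORY_KEY = "chat_history";

function renderPromptHistory(memory: SketchBufferMemory): string[] {
  const vars = memory.loadMemoryVariables();
  const history = vars[AGENT_MEMORY_KEY];
  if (history === undefined) {
    // This is the "hidden requirement": a memory built with the default
    // key silently fails to line up with the prompt's variable.
    throw new Error(`memory must be initialized with memoryKey: "${AGENT_MEMORY_KEY}"`);
  }
  return history;
}
```

A memory constructed with the default key would throw here, which is exactly the mismatch the thread is debating how to paper over with defaults.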
This agent is "special" in that it actually is designed to work with memory; the other ones aren't. I agree the hidden `chat_history` requirement is bad, but we do need to default to creating a memory here.
Gotcha. I'll add it back in - in that case, I'll throw an error if a user attempts to pass memory into the `initializeAgentExecutorWithOptions` method for other agent types.
I also changed the options input syntax for `initializeAgentExecutorWithOptions` to take a `promptArgs` option that will just get passed through directly to `fromLLMAndTools`, so we don't need to worry about conflating `systemMessage` and `prefix`; the user can look at the specific agent they're using and figure out which one is appropriate.

Longer term, subclassing at least the prompt args could make sense to me, so we could do something like:

```typescript
promptArgs?: AgentCreatePromptArgs | ChatAgentCreatePromptArgs;
```

`ChatAgentCreatePromptArgs` could also take a memory key and decouple the hidden reliance on a `memoryKey` set to `chat_history`.
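The "tsc wizardry" mentioned in the commit log - making the function only accept the prompt args valid for the chosen agent type - can be sketched with a conditional type. The type and function names here are hypothetical, modeled on the two naming conventions discussed above:

```typescript
type AgentType =
  | "zero-shot-react-description"
  | "chat-conversational-react-description";

// Hypothetical prompt-arg shapes mirroring the two conventions in the thread.
interface AgentCreatePromptArgs {
  prefix?: string;
  suffix?: string;
}
interface ChatAgentCreatePromptArgs {
  systemMessage?: string;
  humanMessage?: string;
}

// Select the prompt-arg shape matching the agent type at compile time.
type PromptArgsFor<T extends AgentType> =
  T extends "chat-conversational-react-description"
    ? ChatAgentCreatePromptArgs
    : AgentCreatePromptArgs;

interface InitializeOptions<T extends AgentType> {
  agentType: T;
  promptArgs?: PromptArgsFor<T>;
}

// Returns the agent type; the point is that tsc rejects mismatched promptArgs.
function initializeSketch<T extends AgentType>(options: InitializeOptions<T>): T {
  return options.agentType;
}
```

With this shape, `initializeSketch({ agentType: "zero-shot-react-description", promptArgs: { systemMessage: "..." } })` fails to type-check, because `systemMessage` is not part of the zero-shot prompt args.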
…feature_custom_prompt_agent
… accepted by the agent specfied
@jacoblee93 I've updated this to reflect the latest changes in main, and this will be merged now. Thanks a lot for this - it makes things a lot better.
Sweet, let's go! Thanks for the review and fixups @nfcampos
…nt (langchain-ai#374)

* Use OpenAIChat class to initialize gpt-4 models
* Formatting
* Adds support for passing a prompt to a ChatConversationalAgent
* Formatting
* Properly set memory when passed into AgentExecutor constructor, rename arguments to createPrompt method in ChatConversationalAgent
* Formatting
* Update docs and tests to use new intializeAgentExecutorWithOptions method
* Formatting
* Update docs
* Add returnIntermediateSteps, suffix to initializeAgentExecutorWithOptions options, change prompt option to prefix, remove memory default
* Remove test timeout
* Revert renames, throw error if memory is passed in for unsupported agent types
* Fix import
* Do some tsc wizardry to make the initialise function only accept args accepted by the agent specfied
* Lint
* Update examples added since
* Fix int tests

---------

Co-authored-by: Jacob Lee <jacob@autocode.com>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
This PR adds a new method for initializing agent executors, `initializeAgentExecutorWithOptions`, which takes a `Tool[]` parameter and a `BaseLanguageModel` like the existing `initializeAgentExecutor`, but also takes an `options` object to support additional parameters without breaking existing code. This new method also adds `BufferMemory` to the executor when the agent type is set as `chat-conversational-react-description`.

We'll now be able to pass a prompt into a `ChatConversationalAgent` and have built-in memory like this:
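The code example this description originally pointed at is not preserved in this capture. A self-contained sketch of the intended call shape, using local stubs in place of langchain's real tool and model classes (the `Sketch` suffix marks everything here as hypothetical):

```typescript
// Local stand-ins for langchain's Tool and language-model types.
type Tool = { name: string };
type BaseLanguageModel = { modelName: string };

interface ExecutorOptions {
  agentType?: string;
  verbose?: boolean;
  returnIntermediateSteps?: boolean;
}

interface AgentExecutorSketch {
  agentType: string;
  hasMemory: boolean;
}

// Sketch of the new initializer: per the PR description, BufferMemory is
// attached automatically when the chat conversational agent type is chosen.
async function initializeAgentExecutorWithOptionsSketch(
  tools: Tool[],
  llm: BaseLanguageModel,
  options: ExecutorOptions = {}
): Promise<AgentExecutorSketch> {
  const { agentType = "zero-shot-react-description" } = options;
  const hasMemory = agentType === "chat-conversational-react-description";
  return { agentType, hasMemory };
}

// Call shape: tools, model, then one options bag.
const executorPromise = initializeAgentExecutorWithOptionsSketch(
  [{ name: "calculator" }],
  { modelName: "gpt-4" },
  { agentType: "chat-conversational-react-description", verbose: true }
);
```

The resulting executor carries memory without the caller wiring up `BufferMemory` by hand, which is the convenience the PR adds.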