
Adds support for memory and prompt passing to a ChatConversationalAgent #374

Merged: 24 commits into langchain-ai:main on Apr 17, 2023

Conversation

jacoblee93
Collaborator

@jacoblee93 jacoblee93 commented Mar 18, 2023

This PR adds a new method for initializing agent executors, initializeAgentExecutorWithOptions, which takes a Tool[] parameter and a BaseLanguageModel like the existing initializeAgentExecutor, but also takes an options object to support additional parameters without breaking existing code.

This new method also adds BufferMemory to the executor when the agent type is set as chat-conversational-react-description.

We'll now be able to pass a prompt into a ChatConversationalAgent and have built-in memory like this:

import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { OpenAIChat } from "langchain/llms";
import { Calculator } from "langchain/tools";

const model = new OpenAIChat({
  openAIApiKey: process.env.OPENAI_API_KEY,
  modelName: process.env.OPENAI_MODEL_NAME
});
const tools = [new Calculator()];
const executor = await initializeAgentExecutorWithOptions(
  tools,
  model,
  {
    agentType: "chat-conversational-react-description",
    verbose: true,
    prompt: `Answer the following questions as best you can, but speaking as a pirate might speak.`
  }
);

const result0 = await executor.call({input: "How's your day going?"});

// result0.output
// Arr, me day be goin' well, matey! What be ye needin' help with?

const result1 = await executor.call({input: "What is 9 raised to the second power?"});

// result1.output
// Arr, 9 raised to the second power be 81, matey!

const result2 = await executor.call({ input: "What is that result times 2?" });

// result2.output
// Arr, the result of 9 raised to the second power times 2 be 162, matey!

@jacoblee93 jacoblee93 changed the title Adds support for passing a prompt to a ChatConversationalAgent Adds support for memory and passing a prompt to a ChatConversationalAgent Mar 18, 2023
@jacoblee93 jacoblee93 changed the title Adds support for memory and passing a prompt to a ChatConversationalAgent Adds support for memory and prompt passing to a ChatConversationalAgent Mar 18, 2023
@hwchase17
Collaborator

the refactor to use Options is done to allow for more optional params in a more elegant way?

If that's the case, this should be the interface we promote, correct? If so, maybe let's update the examples to reflect that

humanMessage = SUFFIX,
outputParser = new AgentOutputParser(),
} = args ?? {};
const systemMessage =
Collaborator


why is the prefix/suffix helpful here? its just same as system/human right? seems confusing to have two with same name

Collaborator Author

@jacoblee93 jacoblee93 Mar 19, 2023


The comment block above that line may be wrong (maybe copy-pasted from elsewhere?), but it implied that args should contain:

   * @param args.suffix - String to put after the list of tools.
   * @param args.prefix - String to put before the list of tools.

I'm a bit new to LangChain and prompt terminology in general, so prefix/suffix seemed clearer, but I can update the comment block and change the parameter names to systemMessage and humanMessage instead if that's preferred. Will update it tomorrow!

Collaborator Author

@jacoblee93 jacoblee93 Mar 28, 2023


Now that I've been playing with LangChain for a bit longer, I feel like we should deprecate these systemMessage and humanMessage properties to be in line with the other agents, or deprecate prefix and suffix on the other agents, to avoid confusion here. I changed it back to restart a discussion.

@jacoblee93
Collaborator Author

the refactor to use Options is done to allow for more optional params in a more elegant way?

If that's the case, this should be the interface we promote, correct? If so, maybe let's update the examples to reflect that

Yes, the existing method had the following signature:

export const initializeAgentExecutor = async (
  tools: Tool[],
  llm: BaseLanguageModel,
  agentType = "zero-shot-react-description",
  verbose = false,
  callbackManager: CallbackManager = getCallbackManager()
): Promise<AgentExecutor>

And tacking on more parameters at the end seemed like the wrong approach. I can update this PR with some docs updates tomorrow!
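For illustration, the options-object pattern the new method adopts can be sketched in isolation. The types and function below are simplified stand-ins, not the real LangChain signatures:

```typescript
// Hypothetical, self-contained sketch of the options-object pattern.
// New settings can be added to ExecutorOptions later without breaking
// existing call sites, unlike trailing positional parameters.
interface ExecutorOptions {
  agentType?: string;
  verbose?: boolean;
  returnIntermediateSteps?: boolean;
}

function initializeExecutor(
  tools: string[],
  llm: string,
  options: ExecutorOptions = {}
) {
  // Destructure with defaults so callers only name what they care about.
  const {
    agentType = "zero-shot-react-description",
    verbose = false,
    returnIntermediateSteps = false,
  } = options;
  return { tools, llm, agentType, verbose, returnIntermediateSteps };
}

// A caller can set verbose without supplying agentType first.
const sketchExecutor = initializeExecutor(["calculator"], "gpt-4", {
  verbose: true,
});
console.log(sketchExecutor.agentType); // "zero-shot-react-description"
console.log(sketchExecutor.verbose); // true
```

With the old positional signature, enabling only the callback manager meant spelling out agentType and verbose first; with an options object those simply fall back to their defaults.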

@GautierT

Hi! I'm looking for a way to build a chat that can use tools, with memory stored in a database.
Seems like this PR can help!
But how do we handle memory when multiple users connect to the same API and each user has their "own" memory?
Maybe we should be able to set the user id as the memoryKey?

Thanks !
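One way to get per-user memory along the lines GautierT describes is to keep a map from user id to memory instance. This is a minimal stand-in sketch (MiniMemory is hypothetical; real wiring would use BufferMemory and the executor's memory option):

```typescript
// Hypothetical stand-in for a per-user memory store. Each user id maps to
// its own memory object, so conversations never bleed into each other.
class MiniMemory {
  history: string[] = [];
  add(message: string) {
    this.history.push(message);
  }
}

const memories = new Map<string, MiniMemory>();

// Lazily create a memory the first time a user id is seen.
function memoryForUser(userId: string): MiniMemory {
  let memory = memories.get(userId);
  if (memory === undefined) {
    memory = new MiniMemory();
    memories.set(userId, memory);
  }
  return memory;
}

memoryForUser("alice").add("Hi!");
memoryForUser("bob").add("Ahoy!");
console.log(memoryForUser("alice").history.length); // 1
```

In a real server you would look up (or create) the memory per request and pass it into the executor, rather than changing the memoryKey itself per user.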

@jacoblee93
Collaborator Author

jacoblee93 commented Mar 19, 2023

@hwchase17 I've updated the docs and changed the parameter names as requested, and found out how to get rid of the ugly .memory hack. Turned out the AgentExecutor subclass wasn't passing arguments along to the BaseChain superclass:
https://github.com/hwchase17/langchainjs/pull/374/files#diff-59a2c3ddfc3c7e38be163dd2b5f2a48f76da2b51370b98755cf6a24a2d290c70R42

@GautierT I also updated the initializeAgentExecutorWithOptions method to allow you to pass in a custom memory arg to override the default settings - does that fit your use case?

https://github.com/hwchase17/langchainjs/pull/374/files#diff-7d801aff4f2d252eed87091afa8bb799fa2fde06e1cf8315078c8d3ce86a5e49R52

@jacoblee93
Collaborator Author

jacoblee93 commented Mar 19, 2023

Ok, merged in main as well. @hwchase17 PTAL when you get some time!

I'm also very open to suggestions if someone has a better name than initializeAgentExecutorWithOptions

@jacoblee93
Collaborator Author

The formatting check is passing on my end - is it possible to rerun it? Might be a flake or have run on a previous commit.

@jacobrosenthal
Contributor

If you look closely, it says you need to run prettier on

docs:format:check: cache miss, executing 01977ba0ac428b4d
langchain:format:check: src/agents/chat_convo/index.ts

@jacoblee93
Collaborator Author

If you look closely, it says you need to run prettier on

docs:format:check: cache miss, executing 01977ba0ac428b4d
langchain:format:check: src/agents/chat_convo/index.ts

I was trying it with yarn format:check as specified in CONTRIBUTING.md but didn't see any warnings or alerts - could definitely be something wrong with my setup though:

[Screenshot of local `yarn format:check` output, 2023-03-19]

@jacobrosenthal
Contributor

If I run yarn format:check in your branch, which is what that test runs, it shows the error, and running yarn format does change the file

Collaborator

@agola11 agola11 left a comment


Thanks @jacoblee93! Overall lgtm, just a small comment and need to fix formatting

@jacoblee93
Collaborator Author

Thanks @jacoblee93! Overall lgtm, just a small comment and need to fix formatting

Ah, yes! Fixed. Also fixed a typo in the Zapier docs page I noticed.

If I run yarn format:check in your branch, which is what that test runs, it shows the error, and running yarn format does change the file

Strangely, pushing this unrelated docs fix triggered the CI again and it passed this time without issue 🤷.

@agola11
Collaborator

agola11 commented Mar 20, 2023

Thanks for adding this in @jacoblee93. I spoke with @nfcampos and @hwchase17 and we think it makes the most sense to hold off on merging this in for a few days -- we're actively making some changes to the callback manager which will affect this PR. Also, it would be good to deprecate the existing method to avoid having multiple ways to construct an agent. We can take over from here. Appreciate it!

@jacoblee93
Collaborator Author

Ok, sounds good!

tools: Tool[],
llm: BaseLanguageModel,
options: {
agentType?: string;
Collaborator


Add returnIntermediateSteps

prefix: options.prompt,
}),
tools,
returnIntermediateSteps: true,
Collaborator


Make the default false

Jacob Lee added 2 commits March 27, 2023 17:59
…ions options, change prompt option to prefix, remove memory default

systemMessage?: string;
/** String to put before the list of tools. */
/** DEPRECATED: String to put before the list of tools. */
humanMessage?: string;
Collaborator Author


I opted to deprecate these properties to be in line with the other agents - all prompt args should probably inherit from the same type anyway, no?

Collaborator


So these have different arg names because they are designed to work with chat models instead of string LLMs like the other agents. Thoughts?

Collaborator Author

@jacoblee93 jacoblee93 Mar 28, 2023


Ah ok, understood - thanks for the context. What would you think of subclassing this new type of agent into a ChatAgent? We could also make memoryKey configurable to remove the hidden chat_history dependency?

I'll revert it to the way it was, we could also scope that/some other solution to a future PR.

Collaborator Author


Added a bit more discussion here: #374 (comment)

returnIntermediateSteps,
verbose,
callbackManager,
memory: options.memory,
Collaborator Author

@jacoblee93 jacoblee93 Mar 28, 2023


@nfcampos I removed the memory default here to avoid a special case, but don't love the fact that we need to initialize memory with specific options like memoryKey: "chat_history". Would it make sense to make a new wrapper memory class for use with this agent with defaults that work with it?

Or even just change the default memory key in BufferMemory to chat_history?
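To illustrate why the memoryKey matters here: the conversational agent's prompt contains a chat_history placeholder, so the attached memory must save its messages under that exact key. This is a simplified stand-in (formatPrompt below is hypothetical, not LangChain's actual template engine):

```typescript
// Hypothetical mini template engine: fills {name} slots from a variables
// map and throws if a referenced variable is missing, which is roughly
// what happens when memory saves under the wrong key.
function formatPrompt(
  template: string,
  variables: Record<string, string>
): string {
  return template.replace(/\{(\w+)\}/g, (_match: string, name: string) => {
    if (!(name in variables)) {
      throw new Error(`Missing prompt variable: ${name}`);
    }
    return variables[name];
  });
}

const template =
  "Previous conversation:\n{chat_history}\nQuestion: {input}";

// Memory configured with memoryKey "chat_history" fills the slot...
const filled = formatPrompt(template, {
  chat_history: "Human: Hi!\nAI: Ahoy!",
  input: "What is 2 + 2?",
});

// ...while BufferMemory's default key ("history") would leave the
// chat_history slot unfilled, and the template lookup fails.
let missingKeyThrew = false;
try {
  formatPrompt(template, {
    history: "Human: Hi!\nAI: Ahoy!",
    input: "What is 2 + 2?",
  });
} catch {
  missingKeyThrew = true;
}
console.log(missingKeyThrew); // true
```

This is the "hidden requirement" under discussion: nothing in the memory's own API hints that the agent's prompt expects the key chat_history.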

Collaborator


This agent is "special" in that it actually is designed to work with memory; the other ones aren't. I agree the hidden chat_history requirement is bad, but we do need to default to creating a memory here

Collaborator Author


Gotcha. I'll add it back in - I'll throw an error if a user attempts to pass memory into the initializeAgentExecutorWithOptions method for other agent types in that case.
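The validation described above could look roughly like this (a hypothetical helper sketched for illustration, not the actual langchainjs source):

```typescript
// Hypothetical sketch: reject a memory option for agent types that
// aren't designed to use it, so misconfiguration fails fast instead of
// being silently ignored.
function validateMemoryOption(options: {
  agentType?: string;
  memory?: unknown;
}): string {
  const agentType = options.agentType ?? "zero-shot-react-description";
  if (
    options.memory !== undefined &&
    agentType !== "chat-conversational-react-description"
  ) {
    throw new Error(
      `Unable to add memory to an agent of type "${agentType}"`
    );
  }
  return agentType;
}

// Memory with the conversational agent is fine...
validateMemoryOption({
  agentType: "chat-conversational-react-description",
  memory: {},
});

// ...but memory with any other agent type fails fast.
let memoryRejected = false;
try {
  validateMemoryOption({ agentType: "zero-shot-react-description", memory: {} });
} catch {
  memoryRejected = true;
}
console.log(memoryRejected); // true
```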

Collaborator Author


I also changed the options input syntax for initializeAgentExecutorWithOptions to take a promptArgs option that will just get passed through directly to fromLLMAndTools, so we don't need to worry about conflating systemMessage and prefix - the user can look at the specific agent they're using and figure out which one is appropriate.

Longer term, subclassing at least the prompt args could make sense to me so we could do something like:

promptArgs?: AgentCreatePromptArgs | ChatAgentCreatePromptArgs;

ChatAgentCreatePromptArgs could also take a memory key too and decouple the hidden reliance on a memoryKey set to chat_history.
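A rough sketch of how a discriminated union on agentType could constrain promptArgs at compile time (type names here are illustrative, not the actual exported types):

```typescript
// Hypothetical discriminated union: the agentType literal selects which
// promptArgs fields the compiler will accept, so e.g. systemMessage is
// only valid alongside the conversational agent type.
type ZeroShotOptions = {
  agentType: "zero-shot-react-description";
  promptArgs?: { prefix?: string; suffix?: string };
};

type ChatConversationalOptions = {
  agentType: "chat-conversational-react-description";
  promptArgs?: { systemMessage?: string; humanMessage?: string };
};

type AgentOptions = ZeroShotOptions | ChatConversationalOptions;

function agentKind(options: AgentOptions): string {
  // Narrowing on options.agentType would give typed access to promptArgs.
  return options.agentType;
}

// Valid: systemMessage belongs to the conversational agent's options.
const kind = agentKind({
  agentType: "chat-conversational-react-description",
  promptArgs: { systemMessage: "You are a pirate." },
});
console.log(kind); // "chat-conversational-react-description"

// Mixing them, e.g. a zero-shot agentType with a systemMessage promptArg,
// would be a compile-time error under this scheme.
```

This matches the spirit of the later commit "Do some tsc wizardry to make the initialise function only accept args accepted by the agent specfied".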

@nfcampos
Collaborator

@jacoblee93 I've updated this to reflect latest changes in main, this will be merged now. Thanks a lot for this, this makes things a lot better

@jacoblee93
Collaborator Author

Sweet let's go! Thanks for the review + fixups @nfcampos

@nfcampos nfcampos merged commit 14f7719 into langchain-ai:main Apr 17, 2023
RohitMidha23 pushed a commit to RohitMidha23/langchainjs that referenced this pull request Apr 18, 2023
…nt (langchain-ai#374)

* Use OpenAIChat class to initialize gpt-4 models

* Formatting

* Adds support for passing a prompt to a ChatConversationalAgent

* Formatting

* Properly set memory when passed into AgentExecutor constructor, rename arguments to createPrompt method in ChatConversationalAgent

* Formatting

* Update docs and tests to use new intializeAgentExecutorWithOptions method

* Formatting

* Update docs

* Add returnIntermediateSteps, suffix to initializeAgentExecutorWithOptions options, change prompt option to prefix, remove memory default

* Remove test timeout

* Revert renames, throw error if memory is passed in for unsupported agent types

* Fix import

* Do some tsc wizardry to make the initialise function only accept args accepted by the agent specfied

* Lint

* Update examples added since

* Fix int tests

---------

Co-authored-by: Jacob Lee <jacob@autocode.com>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
Successfully merging this pull request may close these issues.

Change initializeAgentExecutor to by default not set returnIntermediateSteps to true, add kwargs
6 participants