From ed318600719f07721240f76f5024c6d45f837045 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Jo=C3=A3o=20Moura?= Date: Mon, 1 Apr 2024 11:14:06 -0300 Subject: [PATCH] update docs --- docs/core-concepts/Tasks.md | 2 +- docs/how-to/Human-Input-on-Execution.md | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/core-concepts/Tasks.md b/docs/core-concepts/Tasks.md index 5ade10878a..656f76dce0 100644 --- a/docs/core-concepts/Tasks.md +++ b/docs/core-concepts/Tasks.md @@ -23,7 +23,7 @@ Tasks in CrewAI can be designed to require collaboration between agents. For exa | **Output Pydantic** *(optional)* | Takes a pydantic model and returns the output as a pydantic object. **Agent LLM needs to be using an OpenAI client, could be Ollama for example but using the OpenAI wrapper** | | **Output File** *(optional)* | Takes a file path and saves the output of the task on it. | | **Callback** *(optional)* | A function to be executed after the task is completed. | -| **Human Input** *(optional)* | Indicates whether the agent should ask for feedback at the end of the task | +| **Human Input** *(optional) - Release Candidate* | Indicates whether the agent should ask for feedback at the end of the task | ## Creating a Task diff --git a/docs/how-to/Human-Input-on-Execution.md b/docs/how-to/Human-Input-on-Execution.md index abb700d310..e2c1d42d27 100644 --- a/docs/how-to/Human-Input-on-Execution.md +++ b/docs/how-to/Human-Input-on-Execution.md @@ -1,5 +1,5 @@ --- -title: Human Input on Execution +title: Human Input on Execution [Release Candidate] description: Comprehensive guide on integrating CrewAI with human input during execution in complex decision-making processes or when needed help during complex tasks. --- @@ -9,7 +9,7 @@ Human input plays a pivotal role in several agent execution scenarios, enabling ## Using Human Input with CrewAI -Incorporating human input with CrewAI is straightforward, enhancing the agent's ability to make informed decisions. 
While the documentation previously mentioned using a "LangChain Tool" and a specific "DuckDuckGoSearchRun" tool from `langchain_community.tools`, it's important to clarify that the integration of such tools should align with the actual capabilities and configurations defined within your `Agent` class setup. Now it is a simple flag in the task itself that needs to be turned on. +The easiest way to integrate human input into agent execution is by setting the `human_input` flag in the task definition. When this flag is enabled, the agent will prompt the user for input before giving its final answer. This input can be used to provide additional context, clarify ambiguities, or validate the agent's output. ### Example:
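Editor's note: a minimal sketch of the `human_input` flag described in the patch above. The agent/task field values are illustrative placeholders, and running it assumes `crewai` is installed and an LLM (e.g. an OpenAI API key) is configured:

```python
from crewai import Agent, Task, Crew

# Illustrative agent; role/goal/backstory values are placeholders.
analyst = Agent(
    role="Market Analyst",
    goal="Summarize recent AI industry trends",
    backstory="An experienced analyst who writes concise briefings.",
)

# Enabling human_input makes the agent pause and ask the user for
# feedback before committing to its final answer.
briefing = Task(
    description="Write a short briefing on notable AI developments.",
    expected_output="A one-paragraph briefing.",
    agent=analyst,
    human_input=True,
)

crew = Crew(agents=[analyst], tasks=[briefing])
result = crew.kickoff()  # prompts on stdin before the task's final answer
print(result)
```

When the prompt appears, the user can type clarifications or corrections, and the agent incorporates that feedback before producing its final output.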