Implement Chat* composable components #5333
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##             main    #5333      +/-   ##
==========================================
+ Coverage   83.50%   83.93%   +0.43%
==========================================
  Files         275      278       +3
  Lines       39549    41128    +1579
==========================================
+ Hits        33024    34522    +1498
- Misses       6525     6606      +81
```
... and 4 files with indirect coverage changes.
Force-pushed from c6942e2 to f8a2847.
I think I have most of the functionality down:

````python
import re
from typing import Any

from panel.io.mime_render import exec_with_return
import pandas as pd
import panel as pn
import openai

DATAFRAME_PROMPT = """
Here are the columns in your DataFrame: {columns}.
Create a plot with hvplot that highlights an interesting
relationship between the columns with hvplot groupby kwarg.
"""

CODE_REGEX = re.compile(r"```python(.*?)```", re.DOTALL)


async def respond_with_openai(contents: Any):
    # extract the DataFrame
    if isinstance(contents, pd.DataFrame):
        # stash the DataFrame globally so that generated code executed via
        # exec_with_return(..., global_context=globals()) can reference `df`
        global df
        df = contents
        columns = contents.columns
        message = DATAFRAME_PROMPT.format(columns=columns)
    else:
        message = contents
    # ask OpenAI to plot
    response = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": message}],
        temperature=0,
        max_tokens=500,
        stream=True,
    )
    message = ""
    async for chunk in response:
        message += chunk["choices"][0]["delta"].get("content", "")
        yield {"user": "OpenAI", "value": message}


async def respond_with_executor(code: str):
    return {
        "user": "Executor",
        "value": exec_with_return(code=code, global_context=globals()),
    }


async def response_callback(
    contents: Any,
    name: str,
    chat_interface: pn.widgets.ChatInterface,
):
    if name == "You":
        async for chunk in respond_with_openai(contents):
            yield chunk
    elif CODE_REGEX.search(contents):
        yield await respond_with_executor(CODE_REGEX.search(contents).group(1))


chat_card = pn.widgets.ChatCard(callback=response_callback)
chat_interface = pn.widgets.ChatInterface(
    value=chat_card, widgets=[pn.widgets.TextInput(), pn.widgets.FileInput()]
)
chat_interface.servable()
````

Screen.Recording.2023-08-02.at.5.38.54.PM.mov

However, one issue I have no idea how to solve is the flickering; I thought it was because I was using ReactiveHTML, but apparently not. I also tried using …
It seems like it also happens in the old ChatBox implementation too #5317

Screen.Recording.2023-08-02.at.7.48.24.PM.mov

However, if I downgrade to panel 1.2.0, the flickering disappears.

Screen.Recording.2023-08-02.at.7.51.07.PM.mov
Added icons + repeat, undo, clear buttons.

Screen.Recording.2023-08-02.at.9.46.18.PM.mov

(The script is the same as in the previous comment, except `response_callback` now begins with a `print(contents, "PASSED")` debug line.)
Force-pushed from 8a8439a to 9672f82.
Some sample langchain code with chat interface:

```python
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
import panel as pn

pn.extension()


async def langchain_callback(contents, user, chat_interface):
    yield await chat_llm_chain.apredict(human_input=contents)


prompt = ChatPromptTemplate.from_messages([
    SystemMessage(content="You are a chatbot having a conversation with a human."),
    MessagesPlaceholder(variable_name="chat_history"),
    HumanMessagePromptTemplate.from_template("{human_input}"),
])
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI()
chat_llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=memory,
)

chat_feed = pn.widgets.ChatFeed(callback=langchain_callback)
chat_interface = pn.widgets.ChatInterface(value=chat_feed)
chat_interface.servable()
```

Screen.Recording.2023-08-03.at.9.38.59.PM.mov

Need to figure out how streaming + agent works with this.
Starting to think that ChatInterface should simply inherit from ChatFeed rather than compose it as a value. It feels a tad tedious typing this all out every time I need a callback:

```python
chat_feed = pn.widgets.ChatFeed(callback=callback)
chat_interface = pn.widgets.ChatInterface(value=chat_feed)
```

The other solution is for ChatInterface to re-implement the same methods as ChatFeed. (A small sketch of the inheritance option follows.)
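To make the comparison concrete, here is a minimal sketch of the inheritance option; the echo callback and the `EchoChatInterface` name are hypothetical, and it assumes `ChatFeed` accepts a `callback` parameter as shown elsewhere in this thread:

```python
import panel as pn

pn.extension()


def callback(contents, user, instance):
    # trivial echo bot, just to exercise the API
    yield f"Echo: {contents}"


# If ChatInterface inherited from ChatFeed, the callback could be passed
# straight to the constructor, with no intermediate ChatFeed to wire up.
class EchoChatInterface(pn.widgets.ChatFeed):
    """Stands in for an inheritance-based ChatInterface (sketch only)."""


chat_interface = EchoChatInterface(callback=callback)
chat_interface.servable()
```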
Force-pushed from e0daca0 to 6f3e1e9.
LangChain is now supported too!

```python
from langchain.agents import initialize_agent, AgentType, load_tools
from langchain.llms import OpenAI
import panel as pn


def callback(contents, name, chat_interface):
    # uses the PanelCallbackHandler defined below
    agent.run(contents, callbacks=[PanelCallbackHandler(chat_interface=chat_interface)])
    yield system_entry.clone(value="That was fun, ask me more!")


system_entry = pn.widgets.ChatEntry(user="System", avatar="⚙️")
chat_interface = pn.widgets.ChatInterface(
    value=[system_entry.clone(value="Let's do math!")],
    callback=callback,
)
llm = OpenAI(streaming=True)
tools = load_tools(["pal-math"], llm=llm)
agent = initialize_agent(tools, llm)
pn.template.FastListTemplate(
    main=[chat_interface],
    title="MathGPT"
).servable()
```

Screen.Recording.2023-08-08.at.5.19.59.PM.mov

...with the following callback handler (PR to the LangChain repo):

```python
from typing import Any, Dict, Union

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction, AgentFinish, LLMResult
import panel as pn


class PanelCallbackHandler(BaseCallbackHandler):
    def __init__(
        self,
        chat_interface: pn.widgets.ChatInterface,
        user: str = "LangChain",
        avatar: str = "🦜️",
    ):
        self.chat_interface = chat_interface
        self._entry = None
        self._active_user = user
        self._active_avatar = avatar
        self._disabled_state = self.chat_interface.disabled
        self._input_user = user
        self._input_avatar = avatar

    def on_llm_start(self, serialized: Dict[str, Any], *args, **kwargs):
        model = kwargs.get("invocation_params", {}).get("model_name", "")
        if self._active_user and model not in self._active_user:
            self._active_user = f"{self._active_user} ({model})"
        return super().on_llm_start(serialized, *args, **kwargs)

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self._entry = self.chat_interface.stream(
            token.replace("\n", "<br>"),
            user=self._active_user,
            avatar=self._active_avatar,
            entry=self._entry,
        )
        return super().on_llm_new_token(token, **kwargs)

    def on_llm_end(self, response: LLMResult, *args, **kwargs):
        return super().on_llm_end(response, *args, **kwargs)

    def on_llm_error(self, error: Union[Exception, KeyboardInterrupt], *args, **kwargs):
        return super().on_llm_error(error, *args, **kwargs)

    def on_agent_action(self, action: AgentAction, *args, **kwargs: Any) -> Any:
        return super().on_agent_action(action, *args, **kwargs)

    def on_agent_finish(self, finish: AgentFinish, *args, **kwargs: Any) -> Any:
        return super().on_agent_finish(finish, *args, **kwargs)

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, *args, **kwargs
    ):
        self._active_avatar = "🛠️"
        self._active_user = f"{self._active_user} - {serialized['name']}"
        return super().on_tool_start(serialized, input_str, *args, **kwargs)

    def on_tool_end(self, output, *args, **kwargs):
        self._active_user = self._input_user
        self._active_avatar = self._input_avatar
        return super().on_tool_end(output, *args, **kwargs)

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], *args, **kwargs
    ):
        return super().on_tool_error(error, *args, **kwargs)

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], *args, **kwargs
    ):
        self.chat_interface.disabled = True
        return super().on_chain_start(serialized, inputs, *args, **kwargs)

    def on_chain_end(self, outputs: Dict[str, Any], *args, **kwargs):
        self._entry = None
        self.chat_interface.disabled = self._disabled_state
        return super().on_chain_end(outputs, *args, **kwargs)
```
Here are the latest ways to use it with OpenAI:

**no async, no stream**

```python
import openai
import panel as pn

pn.extension(sizing_mode="stretch_width")


def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": contents}],
    )
    yield response.choices[0]["message"]["content"]


ci = pn.widgets.ChatInterface(callback=callback)
ci.servable()
```

**no async, explicit stream (no response)**

```python
import openai
import panel as pn

pn.extension(sizing_mode="stretch_width")


def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": contents}],
        stream=True,
    )
    entry = None
    for chunk in response:
        value = chunk["choices"][0]["delta"].get("content", "")
        entry = instance.stream(value=value, user="GPT3.5", avatar="🤖", entry=entry)


ci = pn.widgets.ChatInterface(callback=callback)
ci.servable()
```

**(half) async generator stream**

```python
import openai
import panel as pn

pn.extension(sizing_mode="stretch_width")


async def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": contents}],
        stream=True,
    )
    value = ""
    for chunk in response:
        value += chunk["choices"][0]["delta"].get("content", "")
        yield {"value": value, "user": "GPT3.5", "avatar": "🤖"}


ci = pn.widgets.ChatInterface(callback=callback)
ci.servable()
```

**async, no stream**

```python
import openai
import panel as pn

pn.extension(sizing_mode="stretch_width")


async def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    response = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": contents}],
    )
    yield response.choices[0]["message"]["content"]


ci = pn.widgets.ChatInterface(callback=callback)
ci.servable()
```

**async generator stream**

```python
import openai
import panel as pn

pn.extension(sizing_mode="stretch_width")


async def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    response = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": contents}],
        stream=True,
    )
    value = ""
    async for chunk in response:
        value += chunk["choices"][0]["delta"].get("content", "")
        yield {"value": value, "user": "GPT3.5", "avatar": "🤖"}


ci = pn.widgets.ChatInterface(callback=callback)
ci.servable()
```
How do I …? It looks like below for me:
I think you may need to run … Once it's released, this will not be necessary.
I have run …
Do you see any console errors?
Maybe try running in incognito mode or doing a hard refresh, rerunning your panel serve, and clearing your cache.
Have you tried incognito / clearing your cache?
Co-authored-by: Philipp Rudiger <prudiger@anaconda.com>

Force-pushed from e935f8b to f9a2198.
```diff
@@ -0,0 +1,1646 @@
+"""The chat module provides components for building and using chat interfaces
```
When I run the automated tests to generate videos of chat_memory.py, I can see that the ChatFeed does not scroll to the end, i.e. the user ends up not being able to see what is written. (1600x900px, zoom=1.5)

6facf4c1fa752e60f478ebf366720e4d.webm
"""
Demonstrates how to use the ChatInterface widget to create a chatbot using
OpenAI's GPT-3 API with LangChain.
"""
import panel as pn
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
pn.extension(design="material")
async def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
await chain.apredict(input=contents)
chat_interface = pn.widgets.ChatInterface(callback=callback, callback_user="ChatGPT")
chat_interface.send(
"Send a message to get a reply from ChatGPT!", user="System", respond=False
)
callback_handler = pn.widgets.langchain.PanelCallbackHandler(
chat_interface=chat_interface
)
llm = ChatOpenAI(streaming=True, callbacks=[callback_handler])
memory = ConversationBufferMemory()
chain = ConversationChain(llm=llm, memory=memory)
chat_interface.servable()
Maybe try tweaking `auto_scroll_limit`?
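For instance, a minimal sketch of that tweak (the echo callback is a placeholder, and this assumes `auto_scroll_limit` is settable on the ChatInterface constructor, as it is on ChatFeed):

```python
import panel as pn

pn.extension()


def callback(contents, user, instance):
    yield f"Echo: {contents}"


# auto_scroll_limit is the max pixel distance from the bottom of the feed at
# which a new entry still triggers auto-scrolling; raising it keeps the feed
# pinned to the latest message while responses stream in
chat_interface = pn.widgets.ChatInterface(callback=callback, auto_scroll_limit=200)
chat_interface.servable()
```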
Really excellent work @ahuang11! This PR has grown huge and we've iterated a bunch. Overall I think it's in a good state and we can resolve remaining issues in subsequent PRs.
https://github.com/ahuang11/panel-chat-examples
Diagram:

Overview:

- `ChatMessage` is essentially a dataclass, and holds the content of the user's data and metadata.
- `ChatEntry` is the rendering of the `ChatMessage` (pane?).
- `ChatFeed` is the container that holds all the `ChatEntry`(s) (layout?), with the ability to attach a callback (e.g. an AI response).
- `ChatInterface` is the highest-level interface, composing the `ChatFeed` and `Tabs` of `Widget`(s). (A rough sketch of this composition follows the todo heading below.)

Todo:
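To make the layering concrete, here is a minimal plain-Python sketch of how the four pieces relate; all field names and the callback signature are illustrative assumptions, not the PR's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Optional


@dataclass
class ChatMessage:
    # pure data: who sent what, plus arbitrary metadata
    user: str
    value: Any
    metadata: Dict[str, Any] = field(default_factory=dict)


class ChatEntry:
    # renders a single ChatMessage (avatar, user name, formatted value)
    def __init__(self, message: ChatMessage):
        self.message = message


class ChatFeed:
    # container of ChatEntry objects; optionally invokes a callback
    # (e.g. an AI response) whenever a new message is sent
    def __init__(self, callback: Optional[Callable] = None):
        self.entries: List[ChatEntry] = []
        self.callback = callback

    def send(self, message: ChatMessage) -> None:
        self.entries.append(ChatEntry(message))
        if self.callback is not None:
            self.callback(message.value, message.user, self)


class ChatInterface:
    # highest level: a ChatFeed plus the input widgets used to compose messages
    def __init__(self, feed: ChatFeed, widgets: Optional[List[Any]] = None):
        self.feed = feed
        self.widgets = widgets or []


# usage: a feed whose callback just prints what arrived
feed = ChatFeed(callback=lambda value, user, feed: print(f"{user}: {value}"))
feed.send(ChatMessage(user="You", value="hello"))
```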