
Implement Chat* composable components #5333

Merged: philippjfr merged 120 commits into main from chat_components on Oct 11, 2023
Conversation

@ahuang11 (Contributor) commented Jul 27, 2023

https://github.com/ahuang11/panel-chat-examples

Diagram: (image attached)

Overview:

  • ChatMessage is essentially a dataclass that holds the content of the user's data and metadata
  • ChatEntry renders a ChatMessage (a pane?)
  • ChatFeed is the container that holds all the ChatEntry(s) (a layout?), with the ability to attach a callback (e.g. an AI response)
  • ChatInterface is the highest-level component, composing the ChatFeed with Tabs of Widget(s); see the sketch below

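For illustration, a minimal sketch of how these pieces were proposed to compose (hypothetical usage based on the diagram and the later examples in this thread; the echo callback is just a stand-in):

import panel as pn

pn.extension()

# Hypothetical echo callback; any function that returns or yields a
# response can be attached to the feed.
def callback(contents, user, instance):
    return f"Echoing {user}: {contents}"

chat_feed = pn.widgets.ChatFeed(callback=callback)          # holds the ChatEntry(s)
chat_interface = pn.widgets.ChatInterface(value=chat_feed)  # feed + input widgets
chat_interface.servable()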
Todo:

codecov bot commented Jul 27, 2023

Codecov Report

Merging #5333 (b088ed4) into main (497ad69) will increase coverage by 0.43%.
The diff coverage is 94.18%.

@@            Coverage Diff             @@
##             main    #5333      +/-   ##
==========================================
+ Coverage   83.50%   83.93%   +0.43%     
==========================================
  Files         275      278       +3     
  Lines       39549    41128    +1579     
==========================================
+ Hits        33024    34522    +1498     
- Misses       6525     6606      +81     
| Flag | Coverage Δ |
| --- | --- |
| ui-tests | 40.16% <23.12%> (-0.65%) ⬇️ |
| unitexamples-tests | 74.26% <94.18%> (+0.82%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Files | Coverage Δ |
| --- | --- |
| panel/layout/grid.py | 74.53% <100.00%> (ø) |
| panel/pane/base.py | 85.06% <100.00%> (+0.57%) ⬆️ |
| panel/pane/holoviews.py | 79.80% <100.00%> (+0.03%) ⬆️ |
| panel/param.py | 85.67% <100.00%> (+0.12%) ⬆️ |
| panel/pipeline.py | 88.80% <100.00%> (+0.05%) ⬆️ |
| panel/reactive.py | 80.09% <100.00%> (+0.01%) ⬆️ |
| panel/tests/widgets/test_chat.py | 100.00% <100.00%> (ø) |
| panel/widgets/__init__.py | 100.00% <100.00%> (ø) |
| panel/widgets/button.py | 87.06% <ø> (ø) |
| panel/widgets/indicators.py | 74.21% <100.00%> (+0.44%) ⬆️ |

... and 2 more

... and 4 files with indirect coverage changes


@ahuang11 (Contributor, Author) commented Aug 2, 2023

I think I have most of the functionality down:

import re
from typing import Any

from panel.io.mime_render import exec_with_return
import pandas as pd
import panel as pn
import openai

DATAFRAME_PROMPT = """
    Here are the columns in your DataFrame: {columns}.
    Create a plot with hvplot that highlights an interesting
    relationship between the columns, using the hvplot groupby kwarg.
"""

CODE_REGEX = re.compile(r"```python(.*?)```", re.DOTALL)


async def respond_with_openai(contents: Any):
    # extract the DataFrame
    if isinstance(contents, pd.DataFrame):
        global df
        df = contents
        columns = contents.columns
        message = DATAFRAME_PROMPT.format(columns=columns)
    else:
        message = contents

    # ask OpenAI to plot
    response = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": message}],
        temperature=0,
        max_tokens=500,
        stream=True,
    )
    message = ""
    async for chunk in response:
        message += chunk["choices"][0]["delta"].get("content", "")
        yield {"user": "OpenAI", "value": message}


async def respond_with_executor(code: str):
    return {
        "user": "Executor",
        "value": exec_with_return(code=code, global_context=globals()),
    }


async def response_callback(
    contents: Any,
    name: str,
    chat_interface: pn.widgets.ChatInterface,
):
    if name == "You":
        async for chunk in respond_with_openai(contents):
            yield chunk
    elif CODE_REGEX.search(contents):
        yield await respond_with_executor(CODE_REGEX.search(contents).group(1))


chat_card = pn.widgets.ChatCard(callback=response_callback)
chat_interface = pn.widgets.ChatInterface(
    value=chat_card, widgets=[pn.widgets.TextInput(), pn.widgets.FileInput()]
)
chat_interface.servable()
(screen recording attached)

However, one issue I have no idea how to solve is the flickering; I thought it was caused by using ReactiveHTML, but apparently not.

I also tried using param.update, but then it didn't stream.

@ahuang11 (Contributor, Author) commented Aug 2, 2023

It seems like this also happens in the old ChatBox implementation (#5317).

(screen recording attached)

However, if I downgrade to panel 1.2.0, the flickering disappears.

(screen recording attached)

@ahuang11 (Contributor, Author) commented Aug 3, 2023

Added icons + repeat, undo, clear buttons.

(screen recording attached; the code is identical to the example in the previous comment, apart from an added debug print)

@ahuang11 (Contributor, Author) commented Aug 4, 2023

Some sample LangChain code with the chat interface:

from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
import panel as pn

pn.extension()


async def langchain_callback(contents, user, chat_interface):
    yield await chat_llm_chain.apredict(human_input=contents)

prompt = ChatPromptTemplate.from_messages([
    SystemMessage(content="You are a chatbot having a conversation with a human."),
    MessagesPlaceholder(variable_name="chat_history"),
    HumanMessagePromptTemplate.from_template("{human_input}"),
])
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI()
chat_llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=memory,
)
chat_feed = pn.widgets.ChatFeed(callback=langchain_callback)
chat_interface = pn.widgets.ChatInterface(value=chat_feed)
chat_interface.servable()
(screen recording attached)

Need to figure out how streaming + agents work with this.

@ahuang11 (Contributor, Author) commented Aug 4, 2023

The latest diagram: (image attached)

@ahuang11 (Contributor, Author) commented Aug 4, 2023

Starting to think that ChatInterface should simply inherit from ChatFeed rather than compose it as a value. It feels a tad tedious typing this out every time I need a callback:

chat_feed = pn.widgets.ChatFeed(callback=callback)
chat_interface = pn.widgets.ChatInterface(value=chat_feed)

The other solution is for ChatInterface to re-implement the same methods as ChatFeed.
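For illustration, a minimal sketch of the inheritance option (hypothetical, not the final API; the echo callback is just a stand-in):

import panel as pn

def callback(contents, user, instance):
    return f"Echoing {user}: {contents}"

# Hypothetical: if ChatInterface subclassed ChatFeed, the send/stream/callback
# machinery would be inherited, so no separate ChatFeed has to be constructed
# and passed in as `value`.
class ChatInterface(pn.widgets.ChatFeed):
    """A ChatFeed plus input widgets (sketch only)."""

chat_interface = ChatInterface(callback=callback)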

@ahuang11 (Contributor, Author) commented Aug 8, 2023

LangChain is now supported too!

from langchain.agents import initialize_agent, AgentType, load_tools
from langchain.llms import OpenAI


def callback(contents, name, chat_interface):
    agent.run(contents, callbacks=[PanelCallbackHandler(chat_interface=chat_interface)])
    yield system_entry.clone(value="That was fun, ask me more!")


system_entry = pn.widgets.ChatEntry(user="System", avatar="⚙️")
chat_interface = pn.widgets.ChatInterface(
    value=[system_entry.clone(value="Let's do math!")],
    callback=callback,
)
llm = OpenAI(streaming=True)
tools = load_tools(["pal-math"], llm=llm)
agent = initialize_agent(tools, llm)
pn.template.FastListTemplate(
    main=[chat_interface],
    title="MathGPT"
).servable()
(screen recording attached)

This works with the following callback handler (to be PR'd to the LangChain repo):

from typing import Any, Dict, Union

import panel as pn
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction, AgentFinish, LLMResult


class PanelCallbackHandler(BaseCallbackHandler):
    def __init__(
        self,
        chat_interface: pn.widgets.ChatInterface,
        user: str = "LangChain",
        avatar: str = "🦜️",
    ):
        self.chat_interface = chat_interface
        self._entry = None
        self._active_user = user
        self._active_avatar = avatar
        self._disabled_state = self.chat_interface.disabled

        self._input_user = user
        self._input_avatar = avatar

    def on_llm_start(self, serialized: Dict[str, Any], *args, **kwargs):
        model = kwargs.get("invocation_params", {}).get("model_name", "")
        if self._active_user and model not in self._active_user:
            self._active_user = f"{self._active_user} ({model})"
        return super().on_llm_start(serialized, *args, **kwargs)

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self._entry = self.chat_interface.stream(
            token.replace("\n", "<br>"),
            user=self._active_user,
            avatar=self._active_avatar,
            entry=self._entry,
        )
        return super().on_llm_new_token(token, **kwargs)

    def on_llm_end(self, response: LLMResult, *args, **kwargs):
        return super().on_llm_end(response, *args, **kwargs)

    def on_llm_error(self, error: Union[Exception, KeyboardInterrupt], *args, **kwargs):
        return super().on_llm_error(error, *args, **kwargs)

    def on_agent_action(self, action: AgentAction, *args, **kwargs: Any) -> Any:
        return super().on_agent_action(action, *args, **kwargs)

    def on_agent_finish(self, finish: AgentFinish, *args, **kwargs: Any) -> Any:
        return super().on_agent_finish(finish, *args, **kwargs)

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, *args, **kwargs
    ):
        self._active_avatar = "🛠️"
        self._active_user = f"{self._active_user} - {serialized['name']}"
        return super().on_tool_start(serialized, input_str, *args, **kwargs)

    def on_tool_end(self, output, *args, **kwargs):
        self._active_user = self._input_user
        self._active_avatar = self._input_avatar
        return super().on_tool_end(output, *args, **kwargs)

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], *args, **kwargs
    ):
        return super().on_tool_error(error, *args, **kwargs)

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], *args, **kwargs
    ):
        self.chat_interface.disabled = True
        return super().on_chain_start(serialized, inputs, *args, **kwargs)

    def on_chain_end(self, outputs: Dict[str, Any], *args, **kwargs):
        self._entry = None
        self.chat_interface.disabled = self._disabled_state
        return super().on_chain_end(outputs, *args, **kwargs)

@ahuang11 (Contributor, Author) commented Aug 9, 2023

Here are the latest ways to use it with OpenAI.

no async no stream

import openai
import panel as pn
pn.extension(sizing_mode="stretch_width")


def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": contents}],
    )
    yield response.choices[0]["message"]["content"]

ci = pn.widgets.ChatInterface(callback=callback)
ci.servable()

no async explicit stream (no response)

import openai
import panel as pn
pn.extension(sizing_mode="stretch_width")


def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": contents}],
        stream=True
    )
    entry = None
    for chunk in response:
        value = chunk["choices"][0]["delta"].get("content", "")
        entry = instance.stream(value=value, user="GPT3.5", avatar="🤖", entry=entry)

ci = pn.widgets.ChatInterface(callback=callback)
ci.servable()

(half) async generator stream

import openai
import panel as pn
pn.extension(sizing_mode="stretch_width")


async def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": contents}],
        stream=True
    )
    value = ""
    for chunk in response:
        value += chunk["choices"][0]["delta"].get("content", "")
        yield {"value": value, "user": "GPT3.5", "avatar": "🤖"}

ci = pn.widgets.ChatInterface(callback=callback)
ci.servable()

async no stream

import openai
import panel as pn
pn.extension(sizing_mode="stretch_width")


async def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    response = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": contents}],
    )
    yield response.choices[0]["message"]["content"]

ci = pn.widgets.ChatInterface(callback=callback)
ci.servable()

async generator stream

import openai
import panel as pn
pn.extension(sizing_mode="stretch_width")


async def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    response = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": contents}],
        stream=True
    )
    value = ""
    async for chunk in response:
        value += chunk["choices"][0]["delta"].get("content", "")
        yield {"value": value, "user": "GPT3.5", "avatar": "🤖"}

ci = pn.widgets.ChatInterface(callback=callback)
ci.servable()

@MarcSkovMadsen (Collaborator) commented Aug 9, 2023

How do I `panel serve` these examples such that the CSS is used, @ahuang11?

It looks like this for me: (screenshot attached)

@ahuang11 (Contributor, Author) commented Aug 9, 2023

I think you may need to run `panel build panel` inside the panel repo dir once.

Once it's released, this will not be necessary.

@MarcSkovMadsen (Collaborator) commented

I have run `panel build panel` and it does not work. Why/how does it work for you, @ahuang11?

@ahuang11 (Contributor, Author) commented Aug 9, 2023

Do you see any console errors?

@ahuang11 (Contributor, Author) commented Aug 9, 2023

Maybe try running in incognito mode, doing a hard refresh, rerunning your `panel serve`, or clearing your cache.

@MarcSkovMadsen (Collaborator) commented

Yes. It's trying to find the unreleased CSS via the CDN: (screenshot attached)

I've tried setting the environment variable BOKEH_RESOURCES=inline, but it did not change anything. Is there some other configuration or environment variable I need to set?

@ahuang11 (Contributor, Author) commented Aug 9, 2023

Have you tried incognito / clearing your cache?

@@ -0,0 +1,1646 @@
"""The chat module provides components for building and using chat interfaces
@MarcSkovMadsen (Collaborator) commented Oct 10, 2023

When I run the automated tests to generate videos of chat_memory.py, I can see that the ChatFeed does not scroll to the end, i.e. the user ends up unable to see what is being written (1600x900 px, zoom=1.5).

(screen recording and test screenshot attached)

"""
Demonstrates how to use the ChatInterface widget to create a chatbot using
OpenAI's API with LangChain.
"""

import panel as pn
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

pn.extension(design="material")


async def callback(contents: str, user: str, instance: pn.widgets.ChatInterface):
    await chain.apredict(input=contents)


chat_interface = pn.widgets.ChatInterface(callback=callback, callback_user="ChatGPT")
chat_interface.send(
    "Send a message to get a reply from ChatGPT!", user="System", respond=False
)

callback_handler = pn.widgets.langchain.PanelCallbackHandler(
    chat_interface=chat_interface
)
llm = ChatOpenAI(streaming=True, callbacks=[callback_handler])
memory = ConversationBufferMemory()
chain = ConversationChain(llm=llm, memory=memory)
chat_interface.servable()

@ahuang11 (Contributor, Author) replied:

Maybe try tweaking auto_scroll_limit?
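For example, a hedged sketch (hypothetical value; this assumes auto_scroll_limit is the maximum pixel distance from the latest entry within which auto-scrolling stays active):

import panel as pn

# Hypothetical tweak: a larger auto_scroll_limit keeps the feed pinned to
# the newest entry even after long responses have been rendered.
chat_interface = pn.widgets.ChatInterface(auto_scroll_limit=300)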

@philippjfr (Member) left a comment

Really excellent work @ahuang11! This PR has grown huge and we've iterated a bunch. Overall I think it's in a good state and we can resolve remaining issues in subsequent PRs.

@philippjfr philippjfr merged commit af6262b into main Oct 11, 2023
@philippjfr philippjfr deleted the chat_components branch October 11, 2023 18:20