When talking with DeepSeek R1 models, we get back a thinking block before the actual response from the assistant. It usually looks like this:
<think>
I've been asked what the ... is.
</think>
The capital of France is Paris.
Currently, since this is part of the response text from the assistant, the entire block of text is displayed as-is, including the tags.
We want to implement something similar to DeepSeek's UI:
Screen.Recording.2025-02-17.at.11.29.58.AM.mov
In the future we want to make this step replaceable or extendable, as search agents will display a list of the websites they're navigating, etc.
For now, let's give the thinking blocks their own UI.
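A minimal sketch of the parsing step this would need, in TypeScript: split the raw assistant text into the `<think>` block and the remaining answer, so the UI can render each part separately. The names (`ParsedResponse`, `splitThinking`) are illustrative, not from the actual codebase.

```typescript
// Hypothetical helper for splitting a model response into its thinking
// block and the final answer. Assumes at most one <think>…</think> pair.
interface ParsedResponse {
  thinking: string | null; // contents of the <think>…</think> block, if any
  answer: string;          // remaining assistant text
}

function splitThinking(raw: string): ParsedResponse {
  // Non-greedy match so a stray later "</think>" in the answer is not swallowed.
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) {
    return { thinking: null, answer: raw.trim() };
  }
  return {
    thinking: match[1].trim(),
    answer: raw.replace(match[0], "").trim(),
  };
}
```

For streamed responses, the same idea applies incrementally: while the buffer has an open `<think>` without a closing tag, route tokens to the thinking UI; once `</think>` arrives, switch to the answer. That incremental handling is where the replaceable/extendable step for future agents (e.g. a list of visited websites) would plug in.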