Claude with Copilot Chat is broken #148280
Replies: 5 comments
-
💬 Your Product Feedback Has Been Submitted 🎉

Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩

Where to look to see what's shipping 👀
What you can do in the meantime 💻

As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
-
Completely agree. I am having the exact same problems with Claude through Copilot Chat, and several people I know are having them as well.
-
I am experiencing identical issues with Copilot + Claude 3.5. The responses are truncated after a few lines, even when sending tiny contexts. I am sure that I am not exceeding any acceptable usage limits, as I have not used Copilot since December, and I am sending just a few 400-token requests. This same request works without any issue when sent to the Zed backend (Anthropic API).
-
I am also experiencing issues similar to those described above with Copilot + Claude 3.5.
-
Select Topic Area
Bug
Body
The Claude 3.5 Sonnet model used through GitHub Copilot Chat is badly broken in both VSCode and the Zed editor. It almost always cuts its messages off partway through while streaming, which makes it very unreliable compared to GPT-4o. GPT-4o also streams with more frequent "ticks": it emits output almost token by token, whereas Claude streams in bursts of roughly 15-20 tokens with long pauses in between. The problem is that I almost always prefer Claude over GPT-4o for its helpfulness, since it produces actual work instead of just giving examples, so I don't want to switch to GPT-4o and I need this issue to be fixed.
In my opinion, the source of the problem is not client-side but server-side. There seems to be a bug or misimplementation between the Claude 3.5 Sonnet API and the Copilot servers, because when I use the Anthropic API directly from the Zed editor it works perfectly and streams with frequent ticks, similar to GPT-4o.
The issue arises more often as the context gets larger (e.g. 20k tokens).
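For comparison, here is a minimal sketch (my own illustration, not part of Copilot) of how the direct Anthropic streaming path can be exercised outside Copilot Chat, using the official Anthropic Python SDK. The model ID and prompt are placeholders for whatever you are testing with:

```python
# Minimal sketch: stream a completion directly from the Anthropic API so the
# chunk cadence can be compared with what Copilot Chat delivers.
# Assumes ANTHROPIC_API_KEY is set and the `anthropic` package is installed.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with client.messages.stream(
    model="claude-3-5-sonnet-20241022",  # placeholder model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a small Python function that parses a CSV line."}],
) as stream:
    for chunk in stream.text_stream:
        # Print each chunk on its own line so the streaming boundaries are visible.
        print(repr(chunk))
```

Run directly like this, the chunks arrive frequently and the message completes, which matches the behaviour described above when using the Anthropic API from Zed rather than going through Copilot.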