Microsoft just released DeepSeek with NPU support: https://blogs.windows.com/windowsdeveloper/2025/01/29/running-distilled-deepseek-r1-models-locally-on-copilot-pcs-powered-by-windows-copilot-runtime/ and https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B

This seems like a perfect candidate for support in the Dev Gallery (which is quite excellent, by the way; great job), since it really makes the NPU on a Copilot+ PC shine.

If you search for DeepSeek-R1-Distill-Qwen-1.5B in the Dev Gallery you get results, but none of them is the NPU-optimized version.

It would be great if you could filter the search to show only models that can run on your system's GPU or NPU. This is likely the only feature I miss from LM Studio.

Thanks @aclinick. We are on track to enable this in the next few weeks, when the onnxruntimegenai.qnn version that supports these models is available more broadly.
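For context, once that QNN-enabled runtime ships, running one of these models should follow the usual onnxruntime-genai flow. Here is a minimal sketch using the onnxruntime-genai Python bindings; the model folder path and prompt are placeholders (not a real Dev Gallery location), and the exact generation-loop calls have shifted slightly between onnxruntime-genai releases:

```python
# Minimal sketch: greedy token generation with onnxruntime-genai.
# Assumes a QNN-enabled build (e.g. the onnxruntimegenai.qnn package) and an
# NPU-optimized model folder; the path below is hypothetical.
import onnxruntime_genai as og

# The model folder holds the ONNX weights plus genai_config.json,
# which is where the execution provider (QNN for the NPU) is configured.
model = og.Model("models/DeepSeek-R1-Distill-Qwen-1.5B-qnn")
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("Why do NPUs help with local LLMs?"))

# Decode until the model emits an end-of-sequence token or hits max_length.
while not generator.is_done():
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```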