b3542 #283

Merged
7 commits merged into Nexesenex:spacestream on Aug 7, 2024

Conversation

Nexesenex (Owner)

No description provided.

Nexesenex and others added 7 commits on August 7, 2024 at 01:41
This commit updates the usage comment in quantize.cpp to reflect the
new name of the executable, which is llama-quantize.
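For context, this commit only touches documentation: the usage comment now names the renamed binary. A minimal sketch of the kind of edit involved (the exact wording in quantize.cpp may differ):

```cpp
// old comment referenced the previous binary name:
//   usage: ./quantize model-f32.gguf [model-quant.gguf] type [nthreads]
// updated comment reflects the renamed executable:
//   usage: ./llama-quantize model-f32.gguf [model-quant.gguf] type [nthreads]
```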
* Add support for getting cpu info on Windows for llama_bench

* refactor

---------

Co-authored-by: slaren <slarengh@gmail.com>
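As a rough illustration of how CPU info can be read on Windows, the sketch below queries the processor name from the registry with RegGetValueA. The approach and the helper name are assumptions for illustration; the actual llama-bench change may obtain the information differently.

```cpp
// Sketch: read the CPU brand string on Windows via the registry.
// Illustrative only; the real implementation may differ. Link against Advapi32.
#if defined(_WIN32)
#include <windows.h>
#include <string>

static std::string get_cpu_info_win32() {
    char  name[256] = {0};
    DWORD size      = sizeof(name);
    // ProcessorNameString holds a human-readable CPU name,
    // e.g. "Intel(R) Core(TM) i7-..." or "AMD Ryzen ..."
    LSTATUS status = RegGetValueA(
        HKEY_LOCAL_MACHINE,
        "HARDWARE\\DESCRIPTION\\System\\CentralProcessor\\0",
        "ProcessorNameString",
        RRF_RT_REG_SZ, nullptr, name, &size);
    if (status != ERROR_SUCCESS) {
        return "unknown";
    }
    return name;
}
#endif
```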
* Updated device filter to depend on default_selector (fixes non-intel device issues)
* Small related update to example/sycl Readme
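The gist of depending on the default selector, sketched below with the SYCL 2020 API: let the runtime pick the device instead of filtering by vendor, so non-Intel devices are no longer excluded. This is illustrative only, not the actual ggml SYCL code.

```cpp
// Sketch: build a SYCL queue from the runtime's default device selector
// instead of a vendor-specific filter. Illustrative only.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // default_selector_v picks the best available device (GPU if present,
    // otherwise CPU/host), regardless of vendor.
    sycl::queue q{sycl::default_selector_v};

    std::cout << "running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";
    return 0;
}
```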
* ggml-backend : fix async copy from CPU

* cuda : more reliable async copy, fix stream used when the devices are the same
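A hedged sketch of the stream-ordering idea behind this CUDA commit: record an event on the source stream and make the destination stream wait on it before issuing the async copy, so the copy is ordered correctly even when source and destination are the same device. The function and parameter names below are hypothetical, not ggml-backend's actual code, and error checking is omitted.

```cpp
// Sketch of "more reliable async copy": order the copy against prior work on
// the source stream, then issue it on the destination stream.
#include <cuda_runtime.h>

void async_copy(void * dst, const void * src, size_t size,
                cudaStream_t src_stream, cudaStream_t dst_stream) {
    cudaEvent_t ev;
    cudaEventCreateWithFlags(&ev, cudaEventDisableTiming);

    // mark the point where all prior work producing `src` has been issued
    cudaEventRecord(ev, src_stream);
    // the copy (and everything after it) on dst_stream waits for that work
    cudaStreamWaitEvent(dst_stream, ev, 0);

    cudaMemcpyAsync(dst, src, size, cudaMemcpyDeviceToDevice, dst_stream);

    // safe: CUDA defers release until the recorded event has completed
    cudaEventDestroy(ev);
}
```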
* make : use C compiler to build metal embed object

* use rm + rmdir to avoid -r flag in rm
Nexesenex merged commit f56c4b5 into Nexesenex:spacestream on Aug 7, 2024
30 of 37 checks passed
6 participants