cont : fix mmap flag print (ggml-org#11699)
ggerganov committed Feb 8, 2025
1 parent 4d3465c commit bdcf8b6
Showing 2 changed files with 1 addition and 2 deletions.
2 changes: 1 addition & 1 deletion in src/llama-model.cpp

@@ -1275,7 +1275,7 @@ bool llama_model::load_tensors(llama_model_loader & ml) {
 
     const bool use_mmap_buffer = true;
 
-    LLAMA_LOG_INFO("%s: loading model tensors, this can take a while... (mmap = %s)\n", __func__, use_mmap_buffer ? "true" : "false");
+    LLAMA_LOG_INFO("%s: loading model tensors, this can take a while... (mmap = %s)\n", __func__, ml.use_mmap ? "true" : "false");
 
     // build a list of buffer types for the CPU and GPU devices
     pimpl->cpu_buft_list = make_cpu_buft_list(devices);
1 change: 0 additions & 1 deletion in src/llama.cpp

@@ -9430,7 +9430,6 @@ static struct llama_model * llama_model_load_from_file_impl(
         struct llama_model_params params) {
     ggml_time_init();
 
-
     unsigned cur_percentage = 0;
     if (params.progress_callback == NULL) {
         params.progress_callback_user_data = &cur_percentage;
