Added numa options to allow finer grained control as well as plumbing for a new mirror mode that will require numa.h #5377

Merged
merged 46 commits on Feb 16, 2024
Changes from 45 commits
Commits (46)
d919c6d
Added numa options to allow finer grained control as well as plumbing…
Feb 6, 2024
65792fa
Reverted Makefile
Feb 6, 2024
592e451
Fixed include
Feb 6, 2024
a69d6e2
Removed sched.h from ggml.h, moved ggml_get_numa_affinity into ggml.c…
Feb 6, 2024
60b80b0
removed trailing whitespace
Feb 6, 2024
7aa974d
Added numa options to allow finer grained control as well as plumbing…
Feb 6, 2024
12789eb
Reverting Makefile
Feb 6, 2024
c43808c
Fixed a number of issues with the move from BOOL to ggml_numa_strateg…
Feb 7, 2024
3eccea1
Syncing to pr
Feb 7, 2024
61c37ba
Removing MIRROR_MODE code for this PR
Feb 7, 2024
d47f232
Removing last bit of MIRROR_MODE code for this PR
Feb 7, 2024
783b7ca
Removing unneeded branch in server.cpp example and moving get_numa_af…
Feb 7, 2024
f156112
Merge branch 'ggerganov:master' into master
bmtwl Feb 8, 2024
12c23b6
Fixed lingering init_llama_backend() bool calls in tests and examples
Feb 8, 2024
18fb9a5
Merge branch 'ggerganov:master' into master
bmtwl Feb 8, 2024
90668fb
Merge branch 'ggerganov:master' into master
bmtwl Feb 8, 2024
b65c863
Remote enum llama_numa_strategies
Feb 8, 2024
7bbe511
Revert bad merge with dynatemp flags
Feb 8, 2024
314174d
add missing enum ggml_numa_strategies declaration and revert sync pro…
Feb 8, 2024
c2c3166
add missing enum ggml_numa_strategies declaration
Feb 8, 2024
fecd66a
Merge branch 'ggerganov:master' into master
bmtwl Feb 8, 2024
e107c4c
fixed ggml_init_numa variable
Feb 8, 2024
16b91d1
Merge branch 'master' of https://github.com/bmtwl/llama.cpp
Feb 8, 2024
99a203d
Update ggml.h
bmtwl Feb 8, 2024
6d34ad7
Merge branch 'master' of https://github.com/bmtwl/llama.cpp
Feb 8, 2024
87f8d9e
Merge branch 'ggerganov:master' into master
bmtwl Feb 13, 2024
5a94209
Merge branch 'master' of https://github.com/bmtwl/llama.cpp
Feb 13, 2024
9d42825
Update READMEs with info about numa flags, change INTERLEAVE strategy…
Feb 13, 2024
e37b8f0
Merge branch 'ggerganov:master' into master
bmtwl Feb 13, 2024
0e05042
Merge branch 'ggerganov:master' into master
bmtwl Feb 14, 2024
0fb40ae
split numa init out from llama_backend_init and created llama_numa_in…
Feb 14, 2024
c590bce
Merge branch 'ggerganov:master' into master
bmtwl Feb 14, 2024
a47bb69
Merge branch 'ggerganov:master' into master
bmtwl Feb 14, 2024
7fb5427
Fix up some boolean vs enum comparisons
Feb 14, 2024
e237527
Added #ifdefs for non-Linux OS that don't have cpu_set_t datatype
Feb 14, 2024
dc828c4
Update ggml.h
bmtwl Feb 15, 2024
4ffe18e
Update ggml.c
bmtwl Feb 15, 2024
1585fec
Update ggml.c
bmtwl Feb 15, 2024
c847828
Update examples/server/server.cpp
bmtwl Feb 15, 2024
377b58f
Update common/common.cpp
bmtwl Feb 15, 2024
5de34f5
Merge branch 'ggerganov:master' into master
bmtwl Feb 15, 2024
da65211
unified ggml_numa_strategy enum and fixed text alignment in server.cp…
Feb 15, 2024
7d1f026
Update ggml.c
bmtwl Feb 15, 2024
a5c9a5d
Merge branch 'ggerganov:master' into master
bmtwl Feb 15, 2024
a3cf7bf
removed redundant else from cli argument processing of --numa
Feb 15, 2024
26ea983
whitespace
cebtenzzre Feb 15, 2024
20 changes: 15 additions & 5 deletions common/common.cpp
@@ -671,7 +671,15 @@ bool gpt_params_parse_ex(int argc, char ** argv, gpt_params & params) {
} else if (arg == "--no-mmap") {
params.use_mmap = false;
} else if (arg == "--numa") {
params.numa = true;
if (++i >= argc) {
invalid_param = true;
break;
}
std::string value(argv[i]);
/**/ if (value == "distribute" || value == "" ) { params.numa = GGML_NUMA_STRATEGY_DISTRIBUTE; }
else if (value == "isolate") { params.numa = GGML_NUMA_STRATEGY_ISOLATE; }
else if (value == "numactl") { params.numa = GGML_NUMA_STRATEGY_NUMACTL; }
else { invalid_param = true; break; }
} else if (arg == "--verbose-prompt") {
params.verbose_prompt = true;
} else if (arg == "--no-display-prompt") {
@@ -935,7 +943,7 @@ void gpt_print_usage(int /*argc*/, char ** argv, const gpt_params & params) {
printf(" -tb N, --threads-batch N\n");
printf(" number of threads to use during batch and prompt processing (default: same as --threads)\n");
printf(" -td N, --threads-draft N");
printf(" number of threads to use during generation (default: same as --threads)");
printf(" number of threads to use during generation (default: same as --threads)\n");
printf(" -tbd N, --threads-batch-draft N\n");
printf(" number of threads to use during batch and prompt processing (default: same as --threads-draft)\n");
printf(" -p PROMPT, --prompt PROMPT\n");
@@ -1005,7 +1013,7 @@ void gpt_print_usage(int /*argc*/, char ** argv, const gpt_params & params) {
printf(" --winogrande-tasks N number of tasks to use when computing the Winogrande score (default: %zu)\n", params.winogrande_tasks);
printf(" --multiple-choice compute multiple choice score over random tasks from datafile supplied with -f\n");
printf(" --multiple-choice-tasks N number of tasks to use when computing the multiple choice score (default: %zu)\n", params.winogrande_tasks);
printf(" --kl-divergence computes KL-divergence to logits provided via --kl-divergence-base");
printf(" --kl-divergence computes KL-divergence to logits provided via --kl-divergence-base\n");
printf(" --keep N number of tokens to keep from the initial prompt (default: %d, -1 = all)\n", params.n_keep);
printf(" --draft N number of tokens to draft for speculative decoding (default: %d)\n", params.n_draft);
printf(" --chunks N max number of chunks to process (default: %d, -1 = all)\n", params.n_chunks);
@@ -1022,7 +1030,10 @@ void gpt_print_usage(int /*argc*/, char ** argv, const gpt_params & params) {
if (llama_supports_mmap()) {
printf(" --no-mmap do not memory-map model (slower load but may reduce pageouts if not using mlock)\n");
}
printf(" --numa attempt optimizations that help on some NUMA systems\n");
printf(" --numa TYPE attempt optimizations that help on some NUMA systems\n");
printf(" - distribute: spread execution evenly over all nodes\n");
printf(" - isolate: only spawn threads on CPUs on the node that execution started on\n");
printf(" - numactl: use the CPU map provided by numactl\n");
printf(" if run without this previously, it is recommended to drop the system page cache before using this\n");
printf(" see https://github.com/ggerganov/llama.cpp/issues/1437\n");
if (llama_supports_gpu_offload()) {
@@ -1689,7 +1700,6 @@ void dump_non_result_info_yaml(FILE * stream, const gpt_params & params, const l
fprintf(stream, "no_mmap: %s # default: false\n", !params.use_mmap ? "true" : "false");
fprintf(stream, "no_mul_mat_q: %s # default: false\n", !params.mul_mat_q ? "true" : "false");
fprintf(stream, "no_penalize_nl: %s # default: false\n", !sparams.penalize_nl ? "true" : "false");
fprintf(stream, "numa: %s # default: false\n", params.numa ? "true" : "false");
fprintf(stream, "ppl_output_type: %d # default: 0\n", params.ppl_output_type);
fprintf(stream, "ppl_stride: %d # default: 0\n", params.ppl_stride);
fprintf(stream, "presence_penalty: %f # default: 0.0\n", sparams.penalty_present);
2 changes: 1 addition & 1 deletion common/common.h
@@ -76,6 +76,7 @@ struct gpt_params {
float yarn_beta_slow = 1.0f; // YaRN high correction dim
int32_t yarn_orig_ctx = 0; // YaRN original context length
int32_t rope_scaling_type = LLAMA_ROPE_SCALING_UNSPECIFIED;
ggml_numa_strategy numa = GGML_NUMA_STRATEGY_DISABLED;

// // sampling parameters
struct llama_sampling_params sparams;
@@ -134,7 +135,6 @@ struct gpt_params {
bool logits_all = false; // return logits for all tokens in the batch
bool use_mmap = true; // use mmap for faster loads
bool use_mlock = false; // use mlock to keep model in memory
bool numa = false; // attempt optimizations that help on some NUMA systems
bool verbose_prompt = false; // print prompt tokens before generation
bool display_prompt = true; // print prompt before generation
bool infill = false; // use infill mode
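The hunk above swaps the old `bool numa` field in `gpt_params` for a `ggml_numa_strategy` value defaulting to `GGML_NUMA_STRATEGY_DISABLED`. Based on the strategy names used throughout this diff, the enum looks roughly like the sketch below; the authoritative declaration lives in ggml.h and may carry extra members (for example the mirror mode mentioned in the PR title), so treat this as an illustration only.

```c
// Illustrative reconstruction of the strategy enum from the names used in this PR.
// The real declaration is in ggml.h; ordering, values, and any additional entries
// (e.g. a future mirror mode) are assumptions here.
enum ggml_numa_strategy {
    GGML_NUMA_STRATEGY_DISABLED,   // default: no NUMA-specific behaviour
    GGML_NUMA_STRATEGY_DISTRIBUTE, // spread threads evenly over all nodes
    GGML_NUMA_STRATEGY_ISOLATE,    // stay on the node the process started on
    GGML_NUMA_STRATEGY_NUMACTL,    // honour the CPU map inherited from numactl
};
```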
3 changes: 2 additions & 1 deletion examples/batched-bench/batched-bench.cpp
@@ -82,7 +82,8 @@ int main(int argc, char ** argv) {

// init LLM

llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

// initialize the model

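The batched-bench hunk above is the first of many below that replace the single `llama_backend_init(params.numa)` call with the split `llama_backend_init()` / `llama_numa_init()` pair. A minimal sketch of what a caller now looks like, assuming the llama.h declarations introduced by this PR:

```c
// Minimal sketch of the new initialization flow; assumes llama.h now exposes
// llama_numa_init(enum ggml_numa_strategy) alongside the argument-less
// llama_backend_init(), as the diffs in this PR indicate.
#include "llama.h"

int main(void) {
    llama_backend_init();                            // was: llama_backend_init(params.numa)
    llama_numa_init(GGML_NUMA_STRATEGY_DISTRIBUTE);  // pick a strategy, or DISABLED to opt out

    // ... load the model, create a context, run inference ...

    llama_backend_free();
    return 0;
}
```

Callers that never enabled NUMA (for example quantize and tokenize below) simply drop the old `false` argument and skip `llama_numa_init` entirely.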
2 changes: 1 addition & 1 deletion examples/batched.swift/Sources/main.swift
@@ -17,7 +17,7 @@ let n_parallel: Int = arguments.count > 3 && Int(arguments[3]) != nil ? Int(argu
let n_len: Int = 32

// init LLM
llama_backend_init(false)
llama_backend_init()
defer {
llama_backend_free()
}
3 changes: 2 additions & 1 deletion examples/batched/batched.cpp
@@ -50,7 +50,8 @@ int main(int argc, char ** argv) {

// init LLM

llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

// initialize the model

3 changes: 2 additions & 1 deletion examples/beam-search/beam-search.cpp
@@ -119,7 +119,8 @@ int main(int argc, char ** argv)
// Init LLM :
//---------------------------------

llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

llama_model * model;
llama_context * ctx;
3 changes: 2 additions & 1 deletion examples/embedding/embedding.cpp
@@ -74,7 +74,8 @@ int main(int argc, char ** argv) {
params.prompt = gpt_random_prompt(rng);
}

llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

llama_model * model;
llama_context * ctx;
3 changes: 2 additions & 1 deletion examples/imatrix/imatrix.cpp
@@ -568,7 +568,8 @@ int main(int argc, char ** argv) {
params.prompt = gpt_random_prompt(rng);
}

llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

llama_model_params mparams = llama_model_params_from_gpt_params(params);

3 changes: 2 additions & 1 deletion examples/infill/infill.cpp
@@ -202,7 +202,8 @@ int main(int argc, char ** argv) {
std::mt19937 rng(params.seed);

LOG("%s: llama backend init\n", __func__);
llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

llama_model * model;
llama_context * ctx;
3 changes: 1 addition & 2 deletions examples/llama-bench/llama-bench.cpp
@@ -1151,8 +1151,7 @@ int main(int argc, char ** argv) {
if (!params.verbose) {
llama_log_set(llama_null_log_callback, NULL);
}
bool numa = false;
llama_backend_init(numa);
llama_backend_init();

// initialize printer
std::unique_ptr<printer> p;
4 changes: 2 additions & 2 deletions examples/llama.android/app/src/main/cpp/llama-android.cpp
@@ -274,8 +274,8 @@ Java_com_example_llama_Llm_new_1batch(JNIEnv *, jobject, jint n_tokens, jint emb

extern "C"
JNIEXPORT void JNICALL
Java_com_example_llama_Llm_backend_1init(JNIEnv *, jobject, jboolean numa) {
llama_backend_init(numa);
Java_com_example_llama_Llm_backend_1init(JNIEnv *, jobject) {
llama_backend_init();
}

extern "C"
2 changes: 1 addition & 1 deletion examples/llama.swiftui/llama.cpp.swift/LibLlama.swift
@@ -51,7 +51,7 @@ actor LlamaContext {
}

static func create_context(path: String) throws -> LlamaContext {
llama_backend_init(false)
llama_backend_init()
var model_params = llama_model_default_params()

#if targetEnvironment(simulator)
3 changes: 2 additions & 1 deletion examples/llava/llava-cli.cpp
@@ -218,7 +218,8 @@ static struct llava_context * llava_init(gpt_params * params) {

auto ctx_clip = clip_model_load(clip_path, /*verbosity=*/ 1);

llama_backend_init(params->numa);
llama_backend_init();
llama_numa_init(params->numa);

llama_model_params model_params = llama_model_params_from_gpt_params(*params);

3 changes: 2 additions & 1 deletion examples/lookahead/lookahead.cpp
@@ -54,7 +54,8 @@ int main(int argc, char ** argv) {
#endif // LOG_DISABLE_LOGS

// init llama.cpp
llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

llama_model * model = NULL;
llama_context * ctx = NULL;
3 changes: 2 additions & 1 deletion examples/lookup/lookup.cpp
@@ -31,7 +31,8 @@ int main(int argc, char ** argv){
#endif // LOG_DISABLE_LOGS

// init llama.cpp
llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

llama_model * model = NULL;
llama_context * ctx = NULL;
6 changes: 5 additions & 1 deletion examples/main/README.md
@@ -283,7 +283,11 @@ These options help improve the performance and memory usage of the LLaMA models.

### NUMA support

- `--numa`: Attempt optimizations that help on some systems with non-uniform memory access. This currently consists of pinning an equal proportion of the threads to the cores on each NUMA node, and disabling prefetch and readahead for mmap. The latter causes mapped pages to be faulted in on first access instead of all at once, and in combination with pinning threads to NUMA nodes, more of the pages end up on the NUMA node where they are used. Note that if the model is already in the system page cache, for example because of a previous run without this option, this will have little effect unless you drop the page cache first. This can be done by rebooting the system or on Linux by writing '3' to '/proc/sys/vm/drop_caches' as root.
- `--numa distribute`: Pin an equal proportion of the threads to the cores on each NUMA node. This will spread the load amongst all cores on the system, utilizing all memory channels at the expense of potentially requiring memory to travel over the slow links between nodes.
- `--numa isolate`: Pin all threads to the NUMA node that the program starts on. This limits the number of cores and amount of memory that can be used, but guarantees all memory access remains local to the NUMA node.
- `--numa numactl`: Pin threads to the CPUMAP that is passed to the program by starting it with the numactl utility. This is the most flexible mode, and allows arbitrary core usage patterns, for example a map that uses all the cores on one NUMA node, and just enough cores on a second node to saturate the inter-node memory bus.

These flags attempt optimizations that help on some systems with non-uniform memory access. This currently consists of applying one of the above strategies and disabling prefetch and readahead for mmap. The latter causes mapped pages to be faulted in on first access instead of all at once, and in combination with pinning threads to NUMA nodes, more of the pages end up on the NUMA node where they are used. Note that if the model is already in the system page cache, for example because of a previous run without this option, this will have little effect unless you drop the page cache first. This can be done by rebooting the system or on Linux by writing '3' to '/proc/sys/vm/drop_caches' as root.

### Memory Float 32

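The `--numa numactl` mode described in the README hunk above works by honouring the CPU map the process inherits when it is launched under numactl. A hedged illustration of how that map can be read on Linux — this mirrors the `ggml_get_numa_affinity` helper mentioned in the commit list, but the code here is a standalone sketch, not the PR's implementation:

```c
// Standalone sketch: read the CPU affinity mask inherited from e.g.
// `numactl --physcpubind=0-7 ./main --numa numactl ...`. Linux-only,
// which is why the PR guards cpu_set_t usage with #ifdefs.
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_getaffinity");
        return 1;
    }
    for (int cpu = 0; cpu < CPU_SETSIZE; ++cpu) {
        if (CPU_ISSET(cpu, &mask)) {
            printf("worker threads may be pinned to cpu %d\n", cpu);
        }
    }
    return 0;
}
```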
3 changes: 2 additions & 1 deletion examples/main/main.cpp
@@ -185,7 +185,8 @@ int main(int argc, char ** argv) {
}

LOG("%s: llama backend init\n", __func__);
llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

llama_model * model;
llama_context * ctx;
3 changes: 2 additions & 1 deletion examples/parallel/parallel.cpp
@@ -122,7 +122,8 @@ int main(int argc, char ** argv) {
#endif // LOG_DISABLE_LOGS

// init llama.cpp
llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

llama_model * model = NULL;
llama_context * ctx = NULL;
3 changes: 2 additions & 1 deletion examples/passkey/passkey.cpp
@@ -71,7 +71,8 @@ int main(int argc, char ** argv) {

// init LLM

llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

// initialize the model

3 changes: 2 additions & 1 deletion examples/perplexity/perplexity.cpp
@@ -1809,7 +1809,8 @@ int main(int argc, char ** argv) {
params.prompt = gpt_random_prompt(rng);
}

llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

llama_model * model;
llama_context * ctx;
2 changes: 1 addition & 1 deletion examples/quantize/quantize.cpp
@@ -237,7 +237,7 @@ int main(int argc, char ** argv) {
params.imatrix = &imatrix_data;
}

llama_backend_init(false);
llama_backend_init();

// parse command line arguments
const std::string fname_inp = argv[arg_idx];
7 changes: 7 additions & 0 deletions examples/server/README.md
@@ -16,6 +16,13 @@ Command line options:
- `--memory-f32`: Use 32-bit floats instead of 16-bit floats for memory key+value. Not recommended.
- `--mlock`: Lock the model in memory, preventing it from being swapped out when memory-mapped.
- `--no-mmap`: Do not memory-map the model. By default, models are mapped into memory, which allows the system to load only the necessary parts of the model as needed.
- `--numa STRATEGY`: Attempt one of the below optimization strategies that help on some NUMA systems
- `--numa distribute`: Spread execution evenly over all nodes
- `--numa isolate`: Only spawn threads on CPUs on the node that execution started on
- `--numa numactl`: Use the CPU map provided by numactl
if run without this previously, it is recommended to drop the system page cache before using this
see https://github.com/ggerganov/llama.cpp/issues/1437

- `--numa`: Attempt optimizations that help on some NUMA systems.
- `--lora FNAME`: Apply a LoRA (Low-Rank Adaptation) adapter to the model (implies --no-mmap). This allows you to adapt the pretrained model to specific tasks or domains.
- `--lora-base FNAME`: Optional model to use as a base for the layers modified by the LoRA adapter. This flag is used in conjunction with the `--lora` flag, and specifies the base model for the adaptation.
22 changes: 17 additions & 5 deletions examples/server/server.cpp
@@ -1855,7 +1855,10 @@ static void server_print_usage(const char *argv0, const gpt_params &params,
{
printf(" --no-mmap do not memory-map model (slower load but may reduce pageouts if not using mlock)\n");
}
printf(" --numa attempt optimizations that help on some NUMA systems\n");
printf(" --numa TYPE attempt optimizations that help on some NUMA systems\n");
printf(" - distribute: spread execution evenly over all nodes\n");
printf(" - isolate: only spawn threads on CPUs on the node that execution started on\n");
printf(" - numactl: use the CPU map provided by numactl\n");
if (llama_supports_gpu_offload()) {
printf(" -ngl N, --n-gpu-layers N\n");
printf(" number of layers to store in VRAM\n");
@@ -2264,9 +2267,17 @@ static void server_params_parse(int argc, char **argv, server_params &sparams,
{
params.use_mmap = false;
}
else if (arg == "--numa")
{
params.numa = true;
else if (arg == "--numa") {
if (++i >= argc) {
invalid_param = true;
break;
} else {
std::string value(argv[i]);
/**/ if (value == "distribute" || value == "" ) { params.numa = GGML_NUMA_STRATEGY_DISTRIBUTE; }
else if (value == "isolate") { params.numa = GGML_NUMA_STRATEGY_ISOLATE; }
else if (value == "numactl") { params.numa = GGML_NUMA_STRATEGY_NUMACTL; }
else { invalid_param = true; break; }
}
}
else if (arg == "--embedding")
{
@@ -2497,7 +2508,8 @@ int main(int argc, char **argv)
params.model_alias = params.model;
}

llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

LOG_INFO("build info", {{"build", LLAMA_BUILD_NUMBER},
{"commit", LLAMA_COMMIT}});
3 changes: 2 additions & 1 deletion examples/simple/simple.cpp
@@ -31,7 +31,8 @@ int main(int argc, char ** argv) {

// init LLM

llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

// initialize the model

3 changes: 2 additions & 1 deletion examples/speculative/speculative.cpp
@@ -50,7 +50,8 @@ int main(int argc, char ** argv) {
#endif // LOG_DISABLE_LOGS

// init llama.cpp
llama_backend_init(params.numa);
llama_backend_init();
llama_numa_init(params.numa);

llama_model * model_tgt = NULL;
llama_model * model_dft = NULL;
2 changes: 1 addition & 1 deletion examples/tokenize/tokenize.cpp
@@ -17,7 +17,7 @@ int main(int argc, char ** argv) {

const bool printing_ids = argc > 3 && std::string(argv[3]) == "--ids";

llama_backend_init(false);
llama_backend_init();

llama_model_params model_params = llama_model_default_params();
model_params.vocab_only = true;