
build: use std::make_tuple() for compatibility with older GCC versions #3488

Merged (2 commits) on Oct 5, 2023

Conversation

kenvix (Contributor) commented on Oct 5, 2023

Older versions of GCC, such as GCC 5.x, do not accept the brace-initializer syntax below for constructing tuples (before C++17, the tuple converting constructor was explicit, so copy-list-initialization cannot call it), and the build fails with the following error:

params.lora_adapter.push_back({std::string(argv[i]), 1.0f});
common/common.cpp:364:58: error: converting to ‘std::vector<std::tuple<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, float> >::value_type {aka std::tuple<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, float>}’ from initializer list would use explicit constructor ‘constexpr std::tuple<_T1, _T2>::tuple(_U1&&, _U2&&) [with _U1 = char*&; _U2 = float; <template-parameter-2-3> = void; _T1 = std::__cxx11::basic_string; _T2 = float]’
  params.lora_adapter.push_back({argv[i], 1.0f});

This commit constructs the tuples with std::make_tuple() instead, which keeps the code compatible with these older GCC versions.

@ggerganov ggerganov merged commit 45eba93 into ggerganov:master Oct 5, 2023
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 6, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp:
  kv cache slot search improvements (ggerganov#3493)
  prompts : fix editorconfig checks after ggerganov#3416
  parallel : add option to load external prompt file (ggerganov#3416)
  server : reuse llama_sample_token common util (ggerganov#3494)
  llama : correct hparams comparison (ggerganov#3446)
  ci : fix xcodebuild destinations (ggerganov#3491)
  convert : update Falcon script for new HF config (ggerganov#3448)
  build : use std::make_tuple() for compatibility with older GCC versions (ggerganov#3488)
  common : process escape sequences in reverse prompts (ggerganov#3461)
  CLBlast: Fix handling of on-device tensor data
  server : fix incorrect num_tokens_predicted (ggerganov#3480)
  swift : disable ACCELERATE_NEW_LAPACK (ggerganov#3481)
  ci : add swift build via xcodebuild (ggerganov#3482)
yusiwen pushed a commit to yusiwen/llama.cpp that referenced this pull request Oct 7, 2023