Deprecate baichuan & internlm in favor of llama.cpp (#278)
li-plus authored Mar 12, 2024
1 parent 4fca4d9 commit 080aa02
Showing 2 changed files with 8 additions and 0 deletions.
2 changes: 2 additions & 0 deletions README.md
@@ -23,6 +23,8 @@ Support Matrix:
 * Platforms: Linux, MacOS, Windows
 * Models: [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B), [ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B), [ChatGLM3-6B](https://github.com/THUDM/ChatGLM3), [CodeGeeX2](https://github.com/THUDM/CodeGeeX2), [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B), [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B), [Baichuan2](https://github.com/baichuan-inc/Baichuan2), [InternLM](https://github.com/InternLM/InternLM)
+
+**NOTE**: Baichuan & InternLM model series are deprecated in favor of [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
 ## Getting Started
 
 **Preparation**
6 changes: 6 additions & 0 deletions chatglm.cpp
@@ -1764,6 +1764,8 @@ Pipeline::Pipeline(const std::string &path, int max_length) {
         // load model
         model->load(loader);
     } else if (model_type == ModelType::BAICHUAN7B) {
+        std::cerr << "[WARN] Baichuan models are deprecated in favor of llama.cpp, and will be removed in next major "
+                     "version of chatglm.cpp\n";
         CHATGLM_CHECK(version == 1) << "only support version 1 for now but got " << version;
 
         // load config
@@ -1781,6 +1783,8 @@ Pipeline::Pipeline(const std::string &path, int max_length) {
         model = std::make_unique<Baichuan7BForCausalLM>(config);
         model->load(loader);
     } else if (model_type == ModelType::BAICHUAN13B) {
+        std::cerr << "[WARN] Baichuan models are deprecated in favor of llama.cpp, and will be removed in next major "
+                     "version of chatglm.cpp\n";
         CHATGLM_CHECK(version == 1) << "only support version 1 for now but got " << version;
 
         // load config
@@ -1798,6 +1802,8 @@ Pipeline::Pipeline(const std::string &path, int max_length) {
         model = std::make_unique<Baichuan13BForCausalLM>(config);
         model->load(loader);
     } else if (model_type == ModelType::INTERNLM) {
+        std::cerr << "[WARN] InternLM models are deprecated in favor of llama.cpp, and will be removed in next major "
+                     "version of chatglm.cpp\n";
         CHATGLM_CHECK(version == 1) << "only support version 1 for now but got " << version;
 
         // load config
