Actions: ngxson/llama.cpp

flake8 Lint

1,189 workflow runs

Filter by: Event · Status · Branch · Actor
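The Event, Status, Branch, and Actor filters shown here correspond to query parameters on the GitHub REST API endpoint for listing a workflow's runs. As a rough illustration only (not taken from this page), the Python sketch below fetches the same kind of run list programmatically; the workflow file name python-lint.yml, the third-party requests package, and the GITHUB_TOKEN environment variable are all assumptions.

```python
# Illustrative sketch: list runs of the "flake8 Lint" workflow via the GitHub REST API.
# Assumptions: the workflow file is python-lint.yml (not confirmed by this page),
# `requests` is installed, and GITHUB_TOKEN contains a valid token.
import os
import requests

OWNER, REPO = "ngxson", "llama.cpp"
WORKFLOW = "python-lint.yml"  # hypothetical file name for the "flake8 Lint" workflow

url = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW}/runs"
params = {
    "branch": "master",   # the same filters the page exposes:
    "event": "push",      # Event, Status, Branch, Actor
    "status": "success",
    "actor": "ngxson",
    "per_page": 25,
}
headers = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
}

resp = requests.get(url, params=params, headers=headers, timeout=30)
resp.raise_for_status()

for run in resp.json()["workflow_runs"]:
    # head_commit.message is the full commit message; keep only its first line
    title = run["head_commit"]["message"].splitlines()[0]
    print(f'#{run["run_number"]}: {run["head_sha"][:7]} on {run["head_branch"]} - {title}')
```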

server : add support for "encoding_format": "base64" to the */embeddi…
flake8 Lint #1175: Commit 9ba399d pushed by ngxson
master · December 24, 2024 20:37 · 20s

ggml : more perfo with llamafile tinyblas on x86_64 (#10714)
flake8 Lint #1174: Commit 2cd43f4 pushed by ngxson
master · December 24, 2024 17:56 · 21s

server: allow filtering llama server response fields (#10940)
flake8 Lint #1173: Commit 09fe2e7 pushed by ngxson
master · December 24, 2024 16:40 · 25s

server : add system_fingerprint to chat/completion (#10917)
flake8 Lint #1172: Commit 485dc01 pushed by ngxson
master · December 23, 2024 11:03 · 23s

llama : support InfiniAI Megrez 3b (#10893)
flake8 Lint #1171: Commit b92a14a pushed by ngxson
master · December 23, 2024 01:35 · 21s

llama : support for Llama-3_1-Nemotron-51B (#10669)
flake8 Lint #1170: Commit 6f0c9e0 pushed by ngxson
master · December 23, 2024 00:35 · 21s

devops : add docker-multi-stage builds (#10832)
flake8 Lint #1169: Commit 7c0e285 pushed by ngxson
master · December 22, 2024 22:35 · 4m 57s

convert : add BertForMaskedLM (#10919)
flake8 Lint #1168: Commit 5cd85b5 pushed by ngxson
master · December 21, 2024 08:32 · 24s

convert : fix RWKV v6 model conversion (#10913)
flake8 Lint #1167: Commit 0a11f8b pushed by ngxson
master · December 20, 2024 10:25 · 22s

ggml: fix arm build with gcc (#10895)
flake8 Lint #1166: Commit a3c33b1 pushed by ngxson
master · December 19, 2024 13:25 · 24s

convert : Add support for Microsoft Phi-4 model (#10817)
flake8 Lint #1165: Commit 7585edb pushed by ngxson
master · December 19, 2024 10:25 · 3m 31s

server : add "tokens" output (#10853)
flake8 Lint #1164: Commit 0e70ba6 pushed by ngxson
master · December 18, 2024 09:24 · 25s

Revert "llama : add Falcon3 support (#10864)" (#10876)
flake8 Lint #1163: Commit 4da69d1 pushed by ngxson
master · December 18, 2024 01:24 · 22s

server : fill usage info in embeddings and rerank responses (#10852)
flake8 Lint #1162: Commit 05c3a44 pushed by ngxson
master · December 17, 2024 16:23 · 27s

llava : Allow locally downloaded models for QwenVL (#10833)
flake8 Lint #1161: Commit 4ddd199 pushed by ngxson
master · December 15, 2024 21:22 · 20s

llama : add Deepseek MoE v1 & GigaChat models (#10827)
flake8 Lint #1160: Commit a097415 pushed by ngxson
master · December 15, 2024 17:22 · 26s

server: Fix has_next_line in JSON response (#10818)
flake8 Lint #1159: Commit 89d604f pushed by ngxson
master · December 14, 2024 23:21 · 22s

llama : add Qwen2VL support + multimodal RoPE (#10361)
flake8 Lint #1158: Commit ba1cb19 pushed by ngxson
master · December 14, 2024 13:21 · 27s

Introducing experimental OpenCL backend with support for Qualcomm Adr…
flake8 Lint #1157: Commit a76c56f pushed by ngxson
master · December 13, 2024 21:20 · 24s

gguf-py : numpy 2 newbyteorder fix (#9772)
flake8 Lint #1156: Commit 4601a8b pushed by ngxson
master · December 13, 2024 15:20 · 27s

imatrix : Add imatrix to --no-context-shift (#10766)
flake8 Lint #1155: Commit ae4b922 pushed by ngxson
master · December 10, 2024 18:18 · 7m 38s

server : fix format_infill (#10724)
flake8 Lint #1154: Commit ce8784b pushed by ngxson
master · December 8, 2024 22:16 · 19s

server : bring back info of final chunk in stream mode (#10722)
flake8 Lint #1153: Commit e52522b pushed by ngxson
master · December 8, 2024 20:16 · 23s

llama : add 128k yarn context for Qwen (#10698)
flake8 Lint #1152: Commit 62e84d9 pushed by ngxson
master · December 7, 2024 21:16 · 1m 9s

server : (refactor) no more json in server_task input (#10691)
flake8 Lint #1151: Commit 3573fa8 pushed by ngxson
master · December 7, 2024 20:15 · 21s