[USER] It seems that Metal is not working #3423

Closed
4 tasks done
SignalRT opened this issue Oct 1, 2023 · 8 comments
Labels
macos Issues specific to macOS

Comments


SignalRT commented Oct 1, 2023

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

Load GGUF models with llama.cpp.

Current Behavior

It fails while initializing Metal.

./main -m /Users/jlsantiago/Documents/Models/llama2/llama-2-13b-chat.Q4_0.gguf --repeat_penalty 1.0 --color -i -r "User:" -f ../../prompts/chat-with-bob.txt
Log start
..................................................
llama_new_context_with_model: kv self size = 400,00 MB
llama_new_context_with_model: ggml_metal_init() failed
llama_init_from_gpt_params: error: failed to create context with model '/Users/jlsantiago/Documents/Models/llama2/llama-2-13b-chat.Q4_0.gguf'
main: error: unable to load model

Environment and Context

MacBook Pro M2 Max

It works if I add -ngl 0 to disable Metal.
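
For reference, the same invocation with Metal disabled; the only change from the failing command above is the added -ngl 0 flag:

./main -m /Users/jlsantiago/Documents/Models/llama2/llama-2-13b-chat.Q4_0.gguf --repeat_penalty 1.0 --color -i -r "User:" -f ../../prompts/chat-with-bob.txt -ngl 0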

Failure Information (for bugs)

./main -m /Users/jlsantiago/Documents/Models/llama2/llama-2-13b-chat.Q4_0.gguf --repeat_penalty 1.0 --color -i -r "User:" -f ../../prompts/chat-with-bob.txt
Log start
main: build = 1298 (40e07a6)
main: built with Apple clang version 15.0.0 (clang-1500.0.40.1) for arm64-apple-darwin23.1.0
main: seed = 1696146954
llama_model_loader: loaded meta data with 19 key-value pairs and 363 tensors from /Users/jlsantiago/Documents/Models/llama2/llama-2-13b-chat.Q4_0.gguf (version GGUF V2 (latest))
llama_model_loader: - tensor 0: token_embd.weight q4_0 [ 5120, 32000, 1, 1 ]
llama_model_loader: - tensor 1: blk.0.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 2: blk.0.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 3: blk.0.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 4: blk.0.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 5: blk.0.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 6: blk.0.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 7: blk.0.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 8: blk.0.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 9: blk.0.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 10: blk.1.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 11: blk.1.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 12: blk.1.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 13: blk.1.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 14: blk.1.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 15: blk.1.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 16: blk.1.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 17: blk.1.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 18: blk.1.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 19: blk.10.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 20: blk.10.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 21: blk.10.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 22: blk.10.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 23: blk.10.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 24: blk.10.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 25: blk.10.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 26: blk.10.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 27: blk.10.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 28: blk.11.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 29: blk.11.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 30: blk.11.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 31: blk.11.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 32: blk.11.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 33: blk.11.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 34: blk.11.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 35: blk.11.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 36: blk.11.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 37: blk.12.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 38: blk.12.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 39: blk.12.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 40: blk.12.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 41: blk.12.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 42: blk.12.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 43: blk.12.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 44: blk.12.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 45: blk.12.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 46: blk.13.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 47: blk.13.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 48: blk.13.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 49: blk.13.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 50: blk.13.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 51: blk.13.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 52: blk.13.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 53: blk.13.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 54: blk.13.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 55: blk.14.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 56: blk.14.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 57: blk.14.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 58: blk.14.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 59: blk.14.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 60: blk.14.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 61: blk.14.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 62: blk.14.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 63: blk.14.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 64: blk.15.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 65: blk.15.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 66: blk.2.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 67: blk.2.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 68: blk.2.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 69: blk.2.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 70: blk.2.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 71: blk.2.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 72: blk.2.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 73: blk.2.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 74: blk.2.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 75: blk.3.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 76: blk.3.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 77: blk.3.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 78: blk.3.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 79: blk.3.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 80: blk.3.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 81: blk.3.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 82: blk.3.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 83: blk.3.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 84: blk.4.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 85: blk.4.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 86: blk.4.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 87: blk.4.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 88: blk.4.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 89: blk.4.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 90: blk.4.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 91: blk.4.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 92: blk.4.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 93: blk.5.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 94: blk.5.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 95: blk.5.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 96: blk.5.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 97: blk.5.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 98: blk.5.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 99: blk.5.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 100: blk.5.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 101: blk.5.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 102: blk.6.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 103: blk.6.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 104: blk.6.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 105: blk.6.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 106: blk.6.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 107: blk.6.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 108: blk.6.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 109: blk.6.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 110: blk.6.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 111: blk.7.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 112: blk.7.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 113: blk.7.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 114: blk.7.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 115: blk.7.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 116: blk.7.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 117: blk.7.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 118: blk.7.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 119: blk.7.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 120: blk.8.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 121: blk.8.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 122: blk.8.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 123: blk.8.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 124: blk.8.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 125: blk.8.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 126: blk.8.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 127: blk.8.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 128: blk.8.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 129: blk.9.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 130: blk.9.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 131: blk.9.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 132: blk.9.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 133: blk.9.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 134: blk.9.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 135: blk.9.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 136: blk.9.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 137: blk.9.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 138: blk.15.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 139: blk.15.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 140: blk.15.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 141: blk.15.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 142: blk.15.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 143: blk.15.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 144: blk.15.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 145: blk.16.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 146: blk.16.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 147: blk.16.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 148: blk.16.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 149: blk.16.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 150: blk.16.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 151: blk.16.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 152: blk.16.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 153: blk.16.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 154: blk.17.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 155: blk.17.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 156: blk.17.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 157: blk.17.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 158: blk.17.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 159: blk.17.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 160: blk.17.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 161: blk.17.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 162: blk.17.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 163: blk.18.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 164: blk.18.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 165: blk.18.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 166: blk.18.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 167: blk.18.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 168: blk.18.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 169: blk.18.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 170: blk.18.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 171: blk.18.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 172: blk.19.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 173: blk.19.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 174: blk.19.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 175: blk.19.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 176: blk.19.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 177: blk.19.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 178: blk.19.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 179: blk.19.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 180: blk.19.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 181: blk.20.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 182: blk.20.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 183: blk.20.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 184: blk.20.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 185: blk.20.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 186: blk.20.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 187: blk.20.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 188: blk.20.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 189: blk.20.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 190: blk.21.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 191: blk.21.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 192: blk.21.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 193: blk.21.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 194: blk.21.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 195: blk.21.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 196: blk.21.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 197: blk.21.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 198: blk.21.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 199: blk.22.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 200: blk.22.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 201: blk.22.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 202: blk.22.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 203: blk.22.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 204: blk.22.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 205: blk.22.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 206: blk.22.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 207: blk.22.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 208: blk.23.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 209: blk.23.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 210: blk.23.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 211: blk.23.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 212: blk.23.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 213: blk.23.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 214: blk.23.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 215: blk.23.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 216: blk.23.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 217: blk.24.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 218: blk.24.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 219: blk.24.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 220: blk.24.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 221: blk.24.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 222: blk.24.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 223: blk.24.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 224: blk.24.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 225: blk.24.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 226: blk.25.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 227: blk.25.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 228: blk.25.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 229: blk.25.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 230: blk.25.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 231: blk.25.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 232: blk.25.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 233: blk.25.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 234: blk.25.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 235: blk.26.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 236: blk.26.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 237: blk.26.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 238: blk.26.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 239: blk.26.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 240: blk.26.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 241: blk.26.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 242: blk.26.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 243: blk.26.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 244: blk.27.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 245: blk.27.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 246: blk.27.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 247: blk.27.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 248: blk.27.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 249: blk.27.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 250: blk.27.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 251: blk.27.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 252: blk.27.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 253: blk.28.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 254: blk.28.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 255: blk.28.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 256: blk.28.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 257: blk.28.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 258: blk.28.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 259: blk.28.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 260: blk.28.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 261: blk.28.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 262: blk.29.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 263: blk.29.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 264: blk.29.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 265: blk.29.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 266: blk.29.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 267: blk.29.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 268: blk.29.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 269: blk.29.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 270: blk.29.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 271: blk.30.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 272: blk.30.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 273: blk.30.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 274: blk.30.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 275: blk.30.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 276: blk.30.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 277: output.weight q6_K [ 5120, 32000, 1, 1 ]
llama_model_loader: - tensor 278: blk.30.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 279: blk.30.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 280: blk.30.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 281: blk.31.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 282: blk.31.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 283: blk.31.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 284: blk.31.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 285: blk.31.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 286: blk.31.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 287: blk.31.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 288: blk.31.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 289: blk.31.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 290: blk.32.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 291: blk.32.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 292: blk.32.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 293: blk.32.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 294: blk.32.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 295: blk.32.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 296: blk.32.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 297: blk.32.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 298: blk.32.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 299: blk.33.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 300: blk.33.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 301: blk.33.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 302: blk.33.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 303: blk.33.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 304: blk.33.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 305: blk.33.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 306: blk.33.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 307: blk.33.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 308: blk.34.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 309: blk.34.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 310: blk.34.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 311: blk.34.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 312: blk.34.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 313: blk.34.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 314: blk.34.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 315: blk.34.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 316: blk.34.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 317: blk.35.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 318: blk.35.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 319: blk.35.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 320: blk.35.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 321: blk.35.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 322: blk.35.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 323: blk.35.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 324: blk.35.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 325: blk.35.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 326: blk.36.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 327: blk.36.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 328: blk.36.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 329: blk.36.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 330: blk.36.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 331: blk.36.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 332: blk.36.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 333: blk.36.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 334: blk.36.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 335: blk.37.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 336: blk.37.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 337: blk.37.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 338: blk.37.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 339: blk.37.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 340: blk.37.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 341: blk.37.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 342: blk.37.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 343: blk.37.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 344: blk.38.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 345: blk.38.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 346: blk.38.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 347: blk.38.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 348: blk.38.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 349: blk.38.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 350: blk.38.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 351: blk.38.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 352: blk.38.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 353: blk.39.attn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 354: blk.39.ffn_down.weight q4_0 [ 13824, 5120, 1, 1 ]
llama_model_loader: - tensor 355: blk.39.ffn_gate.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 356: blk.39.ffn_up.weight q4_0 [ 5120, 13824, 1, 1 ]
llama_model_loader: - tensor 357: blk.39.ffn_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - tensor 358: blk.39.attn_k.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 359: blk.39.attn_output.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 360: blk.39.attn_q.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 361: blk.39.attn_v.weight q4_0 [ 5120, 5120, 1, 1 ]
llama_model_loader: - tensor 362: output_norm.weight f32 [ 5120, 1, 1, 1 ]
llama_model_loader: - kv 0: general.architecture str
llama_model_loader: - kv 1: general.name str
llama_model_loader: - kv 2: llama.context_length u32
llama_model_loader: - kv 3: llama.embedding_length u32
llama_model_loader: - kv 4: llama.block_count u32
llama_model_loader: - kv 5: llama.feed_forward_length u32
llama_model_loader: - kv 6: llama.rope.dimension_count u32
llama_model_loader: - kv 7: llama.attention.head_count u32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32
llama_model_loader: - kv 10: general.file_type u32
llama_model_loader: - kv 11: tokenizer.ggml.model str
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr
llama_model_loader: - kv 13: tokenizer.ggml.scores arr
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32
llama_model_loader: - kv 18: general.quantization_version u32
llama_model_loader: - type f32: 81 tensors
llama_model_loader: - type q4_0: 281 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_print_meta: format = GGUF V2 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 40
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: f_norm_eps = 0,0e+00
llm_load_print_meta: f_norm_rms_eps = 1,0e-05
llm_load_print_meta: n_ff = 13824
llm_load_print_meta: freq_base_train = 10000,0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: model type = 13B
llm_load_print_meta: model ftype = mostly Q4_0
llm_load_print_meta: model params = 13,02 B
llm_load_print_meta: model size = 6,86 GiB (4,53 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0,12 MB
llm_load_tensors: mem required = 7024,01 MB
...................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000,0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 400,00 MB
llama_new_context_with_model: ggml_metal_init() failed
llama_init_from_gpt_params: error: failed to create context with model '/Users/jlsantiago/Documents/Models/llama2/llama-2-13b-chat.Q4_0.gguf'
main: error: unable to load model

Steps to Reproduce

Execute any GGUF model:
./main -m /Users/jlsantiago/Documents/Models/llama2/llama-2-13b-chat.Q4_0.gguf --repeat_penalty 1.0 --color -i -r "User:" -f ../../prompts/chat-with-bob.txt

Failure Logs

Same as above

SignalRT changed the title from "[BUG] It seems that Metal is not working" to "[USER] It seems that Metal is not working" on Oct 1, 2023
staviq added the macos label (Issues specific to macOS) on Oct 1, 2023
ggerganov (Owner) commented:

@SignalRT

Can you update to latest master (specifically after merging #3427) and retry?
Post the new log.
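
A typical way to do that, assuming a git checkout and the default Makefile build (a sketch; these exact commands are not from the thread):

git checkout master && git pull
make clean && make
./main -m <model>.gguf -p "Test"   # then capture and post the resulting log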


SignalRT commented Oct 2, 2023

@ggerganov

I think it's my fault.....

I usually compile with BUILD_SHARED_LIBS=ON. In this case it doesn't find ggml-metal.metal.

llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000,0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 400,00 MB
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Max
ggml_metal_init: picking default device: Apple M2 Max
ggml_metal_init: loading '(null)'
ggml_metal_init: error: Error Domain=NSCocoaErrorDomain Code=258 "The file name is invalid."
llama_new_context_with_model: ggml_metal_init() failed
llama_init_from_gpt_params: error: failed to create context with model '/Users/jlsantiago/Documents/Models/llama2/llama-2-13b-chat.Q4_0.gguf'
main: error: unable to load model

If I compile this without BUILD_SHARED_LIBS=ON it works.
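
For context, a minimal sketch of the two CMake configurations being compared. BUILD_SHARED_LIBS is the flag named above; the LLAMA_METAL switch and the build directory are illustrative assumptions, not taken from the thread:

# Shared build: ggml_metal_init ends up loading '(null)' because the path
# to ggml-metal.metal is not resolved relative to the shared library.
cmake -B build -DBUILD_SHARED_LIBS=ON -DLLAMA_METAL=ON
cmake --build build --config Release

# Static build: the shader source is found and Metal initializes.
cmake -B build -DBUILD_SHARED_LIBS=OFF -DLLAMA_METAL=ON
cmake --build build --config Release

A commonly suggested workaround for this class of failure is to copy ggml-metal.metal next to the executable (or run from the repository root) so the file can be located at runtime.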

SignalRT closed this as completed on Oct 2, 2023

petterik commented Oct 3, 2023

@ggerganov FWIW, I still get ggml_metal_init() failed with my MacBook Pro 15" 2017.

Here's the full output from master ff5a3f:

$ ./main -m models/llama-2-7b.Q4_K_M.gguf -n -1 -p "Test:"                                      [12:53:00]
Log start
main: build = 1311 (ff5a3f0)
main: built with Apple clang version 15.0.0 (clang-1500.0.40.1) for x86_64-apple-darwin22.6.0
main: seed  = 1696330382
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from models/llama-2-7b.Q4_K_M.gguf (version GGUF V2 (latest))
llama_model_loader: - tensor    0:                token_embd.weight q4_K     [  4096, 32000,     1,     1 ]
llama_model_loader: - tensor    1:           blk.0.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor    2:            blk.0.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor    3:            blk.0.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor    4:              blk.0.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor    5:            blk.0.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor    6:              blk.0.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    7:         blk.0.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    8:              blk.0.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor    9:              blk.0.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   10:           blk.1.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   11:            blk.1.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   12:            blk.1.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   13:              blk.1.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   14:            blk.1.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   15:              blk.1.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   16:         blk.1.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   17:              blk.1.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   18:              blk.1.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   19:          blk.10.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   20:           blk.10.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   21:           blk.10.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   22:             blk.10.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   23:           blk.10.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   24:             blk.10.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   25:        blk.10.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   26:             blk.10.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   27:             blk.10.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   28:          blk.11.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   29:           blk.11.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   30:           blk.11.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   31:             blk.11.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   32:           blk.11.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   33:             blk.11.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   34:        blk.11.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   35:             blk.11.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   36:             blk.11.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   37:          blk.12.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   38:           blk.12.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   39:           blk.12.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   40:             blk.12.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   41:           blk.12.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   42:             blk.12.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   43:        blk.12.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   44:             blk.12.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   45:             blk.12.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   46:          blk.13.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   47:           blk.13.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   48:           blk.13.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   49:             blk.13.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   50:           blk.13.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   51:             blk.13.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   52:        blk.13.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   53:             blk.13.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   54:             blk.13.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   55:          blk.14.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   56:           blk.14.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   57:           blk.14.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   58:             blk.14.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   59:           blk.14.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   60:             blk.14.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   61:        blk.14.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   62:             blk.14.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   63:             blk.14.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   64:          blk.15.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   65:           blk.15.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   66:           blk.15.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   67:             blk.15.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   68:           blk.15.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   69:             blk.15.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   70:        blk.15.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   71:             blk.15.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   72:             blk.15.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   73:          blk.16.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   74:           blk.16.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   75:           blk.16.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   76:             blk.16.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   77:           blk.16.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   78:             blk.16.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   79:        blk.16.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   80:             blk.16.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   81:             blk.16.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   82:          blk.17.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   83:           blk.17.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   84:           blk.17.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   85:             blk.17.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   86:           blk.17.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   87:             blk.17.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   88:        blk.17.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   89:             blk.17.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   90:             blk.17.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   91:          blk.18.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   92:           blk.18.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor   93:           blk.18.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   94:             blk.18.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor   95:           blk.18.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor   96:             blk.18.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   97:        blk.18.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   98:             blk.18.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor   99:             blk.18.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  100:          blk.19.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  101:           blk.19.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  102:           blk.19.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  103:             blk.19.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  104:           blk.19.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  105:             blk.19.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  106:        blk.19.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  107:             blk.19.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  108:             blk.19.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  109:           blk.2.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  110:            blk.2.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  111:            blk.2.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  112:              blk.2.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  113:            blk.2.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  114:              blk.2.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  115:         blk.2.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  116:              blk.2.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  117:              blk.2.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  118:          blk.20.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  119:           blk.20.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  120:           blk.20.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  121:             blk.20.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  122:           blk.20.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  123:             blk.20.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  124:        blk.20.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  125:             blk.20.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  126:             blk.20.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  127:          blk.21.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  128:           blk.21.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  129:           blk.21.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  130:             blk.21.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  131:           blk.21.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  132:             blk.21.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  133:        blk.21.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  134:             blk.21.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  135:             blk.21.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  136:          blk.22.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  137:           blk.22.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  138:           blk.22.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  139:             blk.22.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  140:           blk.22.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  141:             blk.22.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  142:        blk.22.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  143:             blk.22.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  144:             blk.22.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  145:          blk.23.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  146:           blk.23.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  147:           blk.23.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  148:             blk.23.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  149:           blk.23.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  150:             blk.23.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  151:        blk.23.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  152:             blk.23.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  153:             blk.23.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  154:           blk.3.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  155:            blk.3.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  156:            blk.3.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  157:              blk.3.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  158:            blk.3.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  159:              blk.3.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  160:         blk.3.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  161:              blk.3.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  162:              blk.3.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  163:           blk.4.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  164:            blk.4.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  165:            blk.4.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  166:              blk.4.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  167:            blk.4.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  168:              blk.4.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  169:         blk.4.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  170:              blk.4.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  171:              blk.4.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  172:           blk.5.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  173:            blk.5.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  174:            blk.5.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  175:              blk.5.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  176:            blk.5.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  177:              blk.5.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  178:         blk.5.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  179:              blk.5.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  180:              blk.5.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  181:           blk.6.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  182:            blk.6.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  183:            blk.6.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  184:              blk.6.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  185:            blk.6.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  186:              blk.6.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  187:         blk.6.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  188:              blk.6.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  189:              blk.6.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  190:           blk.7.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  191:            blk.7.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  192:            blk.7.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  193:              blk.7.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  194:            blk.7.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  195:              blk.7.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  196:         blk.7.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  197:              blk.7.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  198:              blk.7.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  199:           blk.8.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  200:            blk.8.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  201:            blk.8.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  202:              blk.8.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  203:            blk.8.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  204:              blk.8.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  205:         blk.8.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  206:              blk.8.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  207:              blk.8.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  208:           blk.9.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  209:            blk.9.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  210:            blk.9.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  211:              blk.9.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  212:            blk.9.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  213:              blk.9.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  214:         blk.9.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  215:              blk.9.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  216:              blk.9.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  217:                    output.weight q6_K     [  4096, 32000,     1,     1 ]
llama_model_loader: - tensor  218:          blk.24.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  219:           blk.24.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  220:           blk.24.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  221:             blk.24.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  222:           blk.24.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  223:             blk.24.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  224:        blk.24.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  225:             blk.24.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  226:             blk.24.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  227:          blk.25.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  228:           blk.25.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  229:           blk.25.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  230:             blk.25.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  231:           blk.25.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  232:             blk.25.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  233:        blk.25.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  234:             blk.25.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  235:             blk.25.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  236:          blk.26.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  237:           blk.26.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  238:           blk.26.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  239:             blk.26.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  240:           blk.26.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  241:             blk.26.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  242:        blk.26.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  243:             blk.26.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  244:             blk.26.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  245:          blk.27.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  246:           blk.27.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  247:           blk.27.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  248:             blk.27.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  249:           blk.27.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  250:             blk.27.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  251:        blk.27.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  252:             blk.27.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  253:             blk.27.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  254:          blk.28.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  255:           blk.28.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  256:           blk.28.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  257:             blk.28.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  258:           blk.28.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  259:             blk.28.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  260:        blk.28.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  261:             blk.28.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  262:             blk.28.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  263:          blk.29.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  264:           blk.29.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  265:           blk.29.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  266:             blk.29.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  267:           blk.29.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  268:             blk.29.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  269:        blk.29.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  270:             blk.29.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  271:             blk.29.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  272:          blk.30.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  273:           blk.30.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  274:           blk.30.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  275:             blk.30.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  276:           blk.30.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  277:             blk.30.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  278:        blk.30.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  279:             blk.30.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  280:             blk.30.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  281:          blk.31.attn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  282:           blk.31.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
llama_model_loader: - tensor  283:           blk.31.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  284:             blk.31.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
llama_model_loader: - tensor  285:           blk.31.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - tensor  286:             blk.31.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  287:        blk.31.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  288:             blk.31.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  289:             blk.31.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
llama_model_loader: - tensor  290:               output_norm.weight f32      [  4096,     1,     1,     1 ]
llama_model_loader: - kv   0:                       general.architecture str
llama_model_loader: - kv   1:                               general.name str
llama_model_loader: - kv   2:                       llama.context_length u32
llama_model_loader: - kv   3:                     llama.embedding_length u32
llama_model_loader: - kv   4:                          llama.block_count u32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32
llama_model_loader: - kv   7:                 llama.attention.head_count u32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32
llama_model_loader: - kv  10:                          general.file_type u32
llama_model_loader: - kv  11:                       tokenizer.ggml.model str
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32
llama_model_loader: - kv  18:               general.quantization_version u32
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_print_meta: format           = GGUF V2 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = mostly Q4_K - Medium
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.80 GiB (4.84 BPW)
llm_load_print_meta: general.name   = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token  = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.09 MB
llm_load_tensors: mem required  = 3891.34 MB
..................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size  =  256.00 MB
ggml_metal_init: allocating
ggml_metal_init: found device: AMD Radeon Pro 560
ggml_metal_init: found device: Intel(R) HD Graphics 630
ggml_metal_init: picking default device: AMD Radeon Pro 560
ggml_metal_init: loading '/Users/petter/Github/ggerganov/llama.cpp/ggml-metal.metal'
ggml_metal_init: loaded kernel_add                         0x7fa145806fe0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_add_row                     0x7fa1458077e0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_mul                         0x7fa145807fe0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_mul_row                     0x7fa1458087e0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_scale                       0x7fa145808fe0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_silu                        0x7fa1458097e0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_relu                        0x7fa145809fe0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_gelu                        0x7fa14580a7e0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_soft_max                    0x7fa145b044f0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_soft_max_4                  0x7fa145b04e10 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_diag_mask_inf               0x7fa145b05610 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_diag_mask_inf_8             0x7fa136005430 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_get_rows_f32                0x7fa136005dd0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_get_rows_f16                0x7fa1360065d0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_get_rows_q4_0               0x7fa14580afe0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_get_rows_q4_1               0x7fa14580b7e0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_get_rows_q8_0               0x7fa14580bfe0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_get_rows_q2_K               0x7fa14580c7e0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_get_rows_q3_K               0x7fa14580d160 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_get_rows_q4_K               0x7fa14580dbf0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_get_rows_q5_K               0x7fa14580e3f0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_get_rows_q6_K               0x7fa14580ebf0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_rms_norm                    0x7fa14580f3f0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_norm                        0x7fa14580fbf0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_f32_f32             0x7fa1458103f0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_f16_f32             0x7fa145810bf0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_f16_f32_1row        0x7fa1458113f0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_f16_f32_l4          0x7fa145811d70 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q4_0_f32            0x7fa145812570 | th_max =  768 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q4_1_f32            0x7fa145812d70 | th_max =  768 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q8_0_f32            0x7fa145813570 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q2_K_f32            0x7fa145813d70 | th_max =  768 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q3_K_f32            0x7fa1458143d0 | th_max =  512 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q4_K_f32            0x7fa145814bd0 | th_max =  768 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q5_K_f32            0x7fa145904ed0 | th_max =  768 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q6_K_f32            0x7fa1459059e0 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_mul_mm_f32_f32                         0x0 | th_max =    0 | th_width =    0
(null): error: load pipeline error:
llama_new_context_with_model: ggml_metal_init() failed
llama_init_from_gpt_params: error: failed to create context with model 'models/llama-2-7b.Q4_K_M.gguf'
main: error: unable to load model

I noticed that there was no error message, so I investigated, and it seems we should be calling [error localizedDescription] instead of [error description] in ggml-metal.m. Diff:

diff --git a/ggml-metal.m b/ggml-metal.m
index b3c463f..9fed60d 100644
--- a/ggml-metal.m
+++ b/ggml-metal.m
@@ -244,7 +244,7 @@ static void ggml_metal_log(enum ggml_log_level level, const char* format, ...){
                 (int) ctx->pipeline_##name.maxTotalThreadsPerThreadgroup, \
                 (int) ctx->pipeline_##name.threadExecutionWidth); \
         if (error) { \
-          GGML_METAL_LOG_ERROR("%s: error: load pipeline error: %s\n", __func__, [[error description] UTF8String]); \
+          GGML_METAL_LOG_ERROR("%s: error: load pipeline error(%ld, %s): %s\n", __func__, (long) [error code], [[error domain] UTF8String], [[error localizedDescription] UTF8String]); \
             return NULL; \
         }

With that change I got this output:

ggml_metal_init: loaded kernel_mul_mm_f32_f32                         0x0 | th_max =    0 | th_width =    0
ggml_metal_init: error: load pipeline error(2, CompilerError): SC compilation failure
There is a call to an undefined label
llama_new_context_with_model: ggml_metal_init() failed
llama_init_from_gpt_params: error: failed to create context with model 'models/llama-2-7b.Q4_K_M.gguf'
main: error: unable to load model

I'm not familiar with this, so I don't know what SC compilation failure means.
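
For anyone following along: the macro touched by this diff is what compiles each Metal kernel into a pipeline state, and failures surface through an NSError out-parameter whose human-readable message lives in -localizedDescription. Below is a minimal, self-contained sketch of that pattern, assuming a device and library are already in hand; the function name and messages are illustrative, not llama.cpp's:

// make_pipeline.m -- illustrative sketch only, not llama.cpp code
#import <Metal/Metal.h>
#import <stdio.h>

static id<MTLComputePipelineState> make_pipeline(id<MTLDevice> device,
                                                 id<MTLLibrary> library,
                                                 NSString *name) {
    NSError *error = nil;
    id<MTLFunction> fn = [library newFunctionWithName:name];
    if (fn == nil) {
        fprintf(stderr, "kernel '%s' not found in library\n", [name UTF8String]);
        return nil;
    }
    // On failure this returns nil and fills in 'error'; the useful
    // message is in -localizedDescription, which is why the diff above
    // switches away from -description.
    id<MTLComputePipelineState> pipeline =
        [device newComputePipelineStateWithFunction:fn error:&error];
    if (pipeline == nil) {
        fprintf(stderr, "load pipeline error(%ld, %s): %s\n",
                (long) [error code],
                [[error domain] UTF8String],
                [[error localizedDescription] UTF8String]);
        return nil;
    }
    return pipeline;
}

As the empty message after "(null): error: load pipeline error:" in the log above shows, -description produced nothing useful in this case, while -localizedDescription carries the compiler's actual diagnostic.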

@ggerganov
Owner

Was it running before? If yes, can you trace which commit started to produce the error?

@petterik

petterik commented Oct 3, 2023

Metal has not been working for me; I started using llama.cpp as of last week.

I tried a git bisect between master and the commit where gguf support landed, 6381d4e, testing each commit with this command:

make clean; make -j 32; ./main -m models/llama-2-7b.Q4_K_M.gguf -n 128 -ngl 30 -p "Test:"

The first bad commit was the one that enabled Metal by default, so presumably Metal was simply disabled in all the builds where my command worked:

commit e36ecdccc8754783f93ad3ac8a09e540101f2ca0
Author: Georgi Gerganov <ggerganov@gmail.com>
Date:   Mon Sep 4 22:26:24 2023 +0300

    build : on Mac OS enable Metal by default (#2901)
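
For reference, this kind of bisect can be automated: git bisect run treats a non-zero exit status from the test command as "bad", so the same build-and-load command can drive the whole search. A sketch, reusing the command and the commit hash quoted in this thread:

git bisect start
git bisect bad master        # Metal init fails at the tip
git bisect good 6381d4e      # known-good point: the commit that added gguf support
git bisect run sh -c 'make clean && make -j 32 && ./main -m models/llama-2-7b.Q4_K_M.gguf -n 128 -ngl 30 -p "Test:"'
git bisect reset             # restore the original checkout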

@ggerganov
Owner

I'm not sure if Metal ever worked on Mac Intel. If it has, we should see if we can fix it to work again.
Otherwise, you have to run with either -ngl 0 or build with LLAMA_NO_METAL (both spelled out below).
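
A sketch of both workarounds; the model path is a placeholder:

# run fully on the CPU by offloading zero layers
./main -m models/llama-2-7b.Q4_K_M.gguf -ngl 0 -p "Test:"

# or rebuild without Metal support at all
make clean
make LLAMA_NO_METAL=1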

@estibi

estibi commented Oct 7, 2023

@ggerganov Metal doesn't work on Mac Intel. It always fails with:

ggml_metal_init: loaded kernel_mul_mat_q3_K_f32            0x7fedb6916910 | th_max =  512 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q4_K_f32            0x7fedb6917110 | th_max =  768 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q5_K_f32            0x7fedb6917910 | th_max =  768 | th_width =   64
ggml_metal_init: loaded kernel_mul_mat_q6_K_f32            0x7fedb6918290 | th_max = 1024 | th_width =   64
ggml_metal_init: loaded kernel_mul_mm_f32_f32                         0x0 | th_max =    0 | th_width =    0
(null): error: load pipeline error: 
llama_new_context_with_model: ggml_metal_init() failed
llama_init_from_gpt_params: error: failed to create context with model '/Users/estibi/models/mistral-7b-v0.1.Q8_0.gguf'
main: error: unable to load model

@HenkPoley
Contributor

HenkPoley commented Oct 18, 2023

For people with Apple Silicon getting the "failed to create context with model" error who fell back on make LLAMA_NO_METAL=1 just to get something working:

If you run conda, the Python package manager (e.g. your command prompt starts with "(base)"), something in the anaconda/miniconda3 environment may be interfering.

Run:

conda deactivate
make clean
make
