
vulkan: matmul dequantization improvements #12015

Open
netrunnereve wants to merge 2 commits into master
Conversation

netrunnereve
Collaborator

This makes the mul_mm shaders load and dequantize 4 or 8 values at a time, the same way it's done in mul_mat_vec (old quants only).
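For illustration, here's a minimal sketch of the idea for q8_0 (the function and variable names are mine, not the actual shader code): one 32-bit load is unpacked into four dequantized values, instead of fetching and scaling one byte per iteration.

```glsl
// Minimal sketch (illustrative, not the real mul_mm code): unpack one
// 32-bit word of q8_0 data into 4 dequantized floats in a single step.
// `word` holds four packed int8 quants, `d` is the block scale.
vec4 dequant_q8_0_x4(uint word, float d) {
    // sign-extend each byte by shifting it to the top of an int and
    // arithmetic-shifting it back down
    ivec4 q = ivec4(int(word << 24) >> 24,
                    int(word << 16) >> 24,
                    int(word <<  8) >> 24,
                    int(word)       >> 24);
    return d * vec4(q); // q8_0: value = scale * int8 quant
}
```

The same trick applies to the 4-bit formats, where a single word holds eight nibbles (modulo q4_0's low/high-nibble interleaving within a block).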

Results on my RX 470:

PR:

| model         |     size |  params | backend | ngl | threads | main_gpu | sm   |  test |           t/s |
| ------------- | -------: | ------: | ------- | --: | ------: | -------: | ---- | ----: | ------------: |
| llama 8B Q4_0 | 4.33 GiB |  8.03 B | Vulkan  | 100 |       8 |        1 | none | pp512 | 158.37 ± 0.80 |
| llama 8B Q8_0 | 7.95 GiB |  8.03 B | Vulkan  | 100 |       8 |        1 | none | pp512 | 153.76 ± 0.52 |
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   38 runs - 26996.37 us/run -  60.13 GFLOP/run -   2.23 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   38 runs - 26764.32 us/run -  60.13 GFLOP/run -   2.25 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   34 runs - 30210.91 us/run -  60.13 GFLOP/run -   1.99 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   36 runs - 29015.64 us/run -  60.13 GFLOP/run -   2.07 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   36 runs - 27984.17 us/run -  60.13 GFLOP/run -   2.15 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 36 runs - 28179.08 us/run -  60.13 GFLOP/run -   2.13 TFLOPS

Master:

| model         |     size |  params | backend | ngl | threads | main_gpu | sm   |  test |           t/s |
| ------------- | -------: | ------: | ------- | --: | ------: | -------: | ---- | ----: | ------------: |
| llama 8B Q4_0 | 4.33 GiB |  8.03 B | Vulkan  | 100 |       8 |        1 | none | pp512 | 151.66 ± 0.86 |
| llama 8B Q8_0 | 7.95 GiB |  8.03 B | Vulkan  | 100 |       8 |        1 | none | pp512 | 149.71 ± 0.14 |
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   36 runs - 28187.53 us/run -  60.13 GFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   36 runs - 28343.00 us/run -  60.13 GFLOP/run -   2.12 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   32 runs - 31629.72 us/run -  60.13 GFLOP/run -   1.90 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   34 runs - 30898.97 us/run -  60.13 GFLOP/run -   1.95 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   36 runs - 28930.81 us/run -  60.13 GFLOP/run -   2.08 TFLOPS
  MUL_MAT(type_a=iq4_nl,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                 36 runs - 28959.25 us/run -  60.13 GFLOP/run -   2.08 TFLOPS

I'm only seeing a small improvement as most of the GPU time is spent doing the actual multiplication, and I think we'll see better results on something that supports coopmat.
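Rough intuition for why the ceiling is low (the tile width here is an assumption; the real value depends on the shader variant and hardware): each dequantized element of the A tile is reused for one FMA per column of the B tile, so with a tile width around 64,

```math
\frac{\text{dequantization work}}{\text{multiplication work}} \approx \frac{1}{B_N} \approx \frac{1}{64}
```

meaning even a large speedup in the dequantization step can only shave a few percent off the total runtime.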

@github-actions github-actions bot added the Vulkan (Issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) labels on Feb 21, 2025
@jeffbolznv
Collaborator

I did a quick run on RTX 4070 using the KHR_coopmat path (GGML_VK_DISABLE_COOPMAT2=1). Perf is about neutral on average, maybe down a tiny bit?

before
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   332 runs -  3023.55 us/run -  60.13 GFLOP/run -  19.89 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   322 runs -  3114.34 us/run -  60.13 GFLOP/run -  19.31 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  776 runs -  1289.13 us/run -  60.13 GFLOP/run -  46.64 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  748 runs -  1338.91 us/run -  60.13 GFLOP/run -  44.91 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  674 runs -  1485.07 us/run -  60.13 GFLOP/run -  40.49 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  670 runs -  1493.24 us/run -  60.13 GFLOP/run -  40.27 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  632 runs -  1585.79 us/run -  60.13 GFLOP/run -  37.92 TFLOPS
  
after
  MUL_MAT(type_a=f32,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   322 runs -  3118.21 us/run -  60.13 GFLOP/run -  19.28 TFLOPS
  MUL_MAT(type_a=f16,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   320 runs -  3138.63 us/run -  60.13 GFLOP/run -  19.16 TFLOPS
  MUL_MAT(type_a=q4_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  734 runs -  1365.62 us/run -  60.13 GFLOP/run -  44.03 TFLOPS
  MUL_MAT(type_a=q4_1,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  660 runs -  1515.89 us/run -  60.13 GFLOP/run -  39.67 TFLOPS
  MUL_MAT(type_a=q5_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  710 runs -  1409.35 us/run -  60.13 GFLOP/run -  42.66 TFLOPS
  MUL_MAT(type_a=q5_1,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  708 runs -  1414.56 us/run -  60.13 GFLOP/run -  42.51 TFLOPS
  MUL_MAT(type_a=q8_0,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  650 runs -  1542.48 us/run -  60.13 GFLOP/run -  38.98 TFLOPS

The backend tests all passed.

@netrunnereve
Collaborator Author

> Perf is about neutral on average, maybe down a tiny bit?

Interesting. Let's wait for some more results.
