Requirement already satisfied: numpy~=1.24.4 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 1)) (1.24.4)
Requirement already satisfied: sentencepiece~=0.2.0 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 2)) (0.2.0)
Requirement already satisfied: transformers<5.0.0,>=4.40.1 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (4.40.1)
Requirement already satisfied: gguf>=0.1.0 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 4)) (0.9.0)
Requirement already satisfied: protobuf<5.0.0,>=4.21.0 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 5)) (4.25.3)
Requirement already satisfied: torch~=2.1.1 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from -r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2.1.2)
Requirement already satisfied: filelock in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (3.13.1)
Requirement already satisfied: regex!=2019.12.17 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (2023.12.25)
Requirement already satisfied: tokenizers<0.20,>=0.19 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (0.19.1)
Requirement already satisfied: huggingface-hub<1.0,>=0.19.3 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (0.20.3)
Requirement already satisfied: requests in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (2.31.0)
Requirement already satisfied: safetensors>=0.4.1 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (0.4.2)
Requirement already satisfied: pyyaml>=5.1 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (6.0.1)
Requirement already satisfied: tqdm>=4.27 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (4.66.2)
Requirement already satisfied: packaging>=20.0 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (23.2)
Requirement already satisfied: fsspec in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2024.2.0)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (11.0.2.54)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (8.9.2.26)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (11.4.5.107)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: sympy in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (1.12)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.0.106)
Requirement already satisfied: jinja2 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (3.1.3)
Requirement already satisfied: networkx in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (3.2.1)
Requirement already satisfied: typing-extensions in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (4.9.0)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (10.3.2.106)
Requirement already satisfied: nvidia-nccl-cu12==2.18.1 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2.18.1)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.105)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.1.3.1)
Requirement already satisfied: triton==2.1.0 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2.1.0)
Requirement already satisfied: nvidia-nvjitlink-cu12 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107->torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (12.3.101)
Requirement already satisfied: MarkupSafe>=2.0 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from jinja2->torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (2.1.5)
Requirement already satisfied: certifi>=2017.4.17 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from requests->transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (2024.2.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from requests->transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from requests->transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from requests->transformers<5.0.0,>=4.40.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert.txt (line 3)) (2.2.1)
Requirement already satisfied: mpmath>=0.19 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from sympy->torch~=2.1.1->-r /home/ggml/work/llama.cpp/./requirements/requirements-convert-hf-to-gguf.txt (line 2)) (1.3.0)
Obtaining file:///home/ggml/work/llama.cpp/gguf-py
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Checking if build backend supports build_editable: started
Checking if build backend supports build_editable: finished with status 'done'
Getting requirements to build editable: started
Getting requirements to build editable: finished with status 'done'
Preparing editable metadata (pyproject.toml): started
Preparing editable metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: tqdm>=4.27 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from gguf==0.9.0) (4.66.2)
Requirement already satisfied: numpy>=1.17 in /mnt/llama.cpp/venv/lib/python3.10/site-packages (from gguf==0.9.0) (1.24.4)
Building wheels for collected packages: gguf
Building editable for gguf (pyproject.toml): started
Building editable for gguf (pyproject.toml): finished with status 'done'
Created wheel for gguf: filename=gguf-0.9.0-py3-none-any.whl size=3289 sha256=701cc7c488d402c0e365bf43e84e57e5e97e1457a1f9d18440c8a81f13890b31
Stored in directory: /tmp/pip-ephem-wheel-cache-70xldpr3/wheels/a3/4c/52/c5934ad001d1a70ca5434f11ddc622cad9c0a484e9bf6feda3
Successfully built gguf
Installing collected packages: gguf
Attempting uninstall: gguf
Found existing installation: gguf 0.9.0
Uninstalling gguf-0.9.0:
Successfully uninstalled gguf-0.9.0
Successfully installed gguf-0.9.0
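
Everything up to this point is environment setup rather than the interesting part of the run: the conversion-script requirements (numpy, sentencepiece, transformers, torch, gguf, protobuf) are already satisfied in the CI virtualenv, and the in-tree gguf-py package is rebuilt as an editable wheel and reinstalled over the existing gguf 0.9.0 (presumably via something like "pip install -e ./gguf-py"; the exact command falls outside this log excerpt). Nothing here contributes to the failure below.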
+ gg_run_ctest_debug
+ cd /home/ggml/work/llama.cpp
+ rm -rf build-ci-debug
+ tee /home/ggml/results/llama.cpp/d8/ee90222791afff2ab666ded4cb6195fd94cced/ggml-4-x86-cuda-v100/ctest_debug.log
+ mkdir build-ci-debug
+ cd build-ci-debug
+ set -e
+ tee -a /home/ggml/results/llama.cpp/d8/ee90222791afff2ab666ded4cb6195fd94cced/ggml-4-x86-cuda-v100/ctest_debug-cmake.log
+ cmake -DCMAKE_BUILD_TYPE=Debug -DLLAMA_FATAL_WARNINGS=ON -DLLAMA_CUDA=1 ..
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.34.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found CUDAToolkit: /usr/local/cuda-12.2/include (found version "12.2.140")
-- CUDA found
-- The CUDA compiler identification is NVIDIA 12.2.140
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda-12.2/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Using CUDA architectures: 52;61;70
-- CUDA host compiler is GNU 11.4.0
-- ccache found, compilation results will be cached. Disable with LLAMA_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done (3.2s)
-- Generating done (0.2s)
-- Build files have been written to: /home/ggml/work/llama.cpp/build-ci-debug
real 0m3.414s
user 0m2.578s
sys 0m0.832s
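
The Debug configure itself succeeds in about 3.4 seconds: GCC 11.4.0 and CUDA 12.2.140 are detected, CUDA architectures 52;61;70 are targeted, and ccache is enabled. The flag worth noting is -DLLAMA_FATAL_WARNINGS=ON, which promotes compiler warnings to errors; it is what turns the diagnostic in the build step below into a hard failure.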
+ tee -a /home/ggml/results/llama.cpp/d8/ee90222791afff2ab666ded4cb6195fd94cced/ggml-4-x86-cuda-v100/ctest_debug-make.log
+ make -j
[ 1%] Generating build details from Git
[ 2%] Building C object CMakeFiles/ggml.dir/ggml.c.o
[ 2%] Building C object CMakeFiles/ggml.dir/ggml-alloc.c.o
[ 3%] Building C object CMakeFiles/ggml.dir/ggml-backend.c.o
[ 3%] Building C object CMakeFiles/ggml.dir/ggml-quants.c.o
-- Found Git: /usr/bin/git (found version "2.34.1")
[ 4%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/arange.cu.o
[ 4%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/clamp.cu.o
[ 5%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/concat.cu.o
[ 6%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/acc.cu.o
[ 6%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/argsort.cu.o
[ 6%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/convert.cu.o
[ 7%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/binbcast.cu.o
[ 8%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/cpy.cu.o
[ 8%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/dmmv.cu.o
[ 9%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/diagmask.cu.o
[ 10%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/fattn-tile-f16.cu.o
[ 11%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/fattn.cu.o
[ 12%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/fattn-vec-f16.cu.o
[ 12%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/fattn-tile-f32.cu.o
[ 12%] Building CXX object common/CMakeFiles/build_info.dir/build-info.cpp.o
[ 13%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/im2col.cu.o
[ 13%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/fattn-vec-f32.cu.o
[ 13%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/getrows.cu.o
[ 14%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/mmq.cu.o
[ 14%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/mmvq.cu.o
[ 14%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/pad.cu.o
[ 15%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/pool2d.cu.o
[ 16%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/norm.cu.o
[ 17%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/rope.cu.o
[ 17%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/quantize.cu.o
[ 17%] Built target build_info
[ 18%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/softmax.cu.o
[ 18%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/scale.cu.o
[ 19%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/sumrows.cu.o
[ 20%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/unary.cu.o
[ 20%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/tsembd.cu.o
[ 20%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda/upscale.cu.o
[ 21%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda.cu.o
[ 21%] Building CXX object CMakeFiles/ggml.dir/sgemm.cpp.o
/home/ggml/work/llama.cpp/ggml-cuda/mmq.cu(1185): error #177-D: function "get_arch_config_device" was declared but never referenced
mmq_arch_config_t get_arch_config_device(mmq_config_t mmq_config) {
^
Remark: The warnings can be suppressed with "-diag-suppress <warning-number>"
1 error detected in the compilation of "/home/ggml/work/llama.cpp/ggml-cuda/mmq.cu".
make[2]: *** [CMakeFiles/ggml.dir/build.make:388: CMakeFiles/ggml.dir/ggml-cuda/mmq.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:820: CMakeFiles/ggml.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
real 0m2.485s
user 0m2.489s
sys 0m0.630s
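
The build aborts in ggml-cuda/mmq.cu: nvcc's front end flags get_arch_config_device as declared but never referenced (diagnostic #177-D). That is normally a remark, but with LLAMA_FATAL_WARNINGS=ON warnings are promoted to errors (presumably via something like "-Werror all-warnings" being passed to nvcc), so the whole make fails. Below is a minimal sketch of the same failure class; the file name, helper name, and build flags are assumptions for illustration, not code from llama.cpp.

// repro_177d.cu -- hypothetical minimal repro, not llama.cpp code.
// A function with internal linkage that nothing in the translation unit
// references typically draws the EDG front-end diagnostic #177-D
// ("declared but never referenced") from nvcc. Promoting warnings to
// errors (assumed flags below) makes that diagnostic fatal, which is
// the failure class seen in mmq.cu above.
//
// Assumed build command:  nvcc -Werror all-warnings -o repro repro_177d.cu

#include <cstdio>
#include <cuda_runtime.h>

// static => internal linkage, and nothing in this translation unit
// calls it, so the compiler can prove it is dead code and warns.
static int unused_helper(int x) {
    return x * 2;
}

__global__ void noop_kernel() {}

int main() {
    noop_kernel<<<1, 1>>>();   // launch a trivial kernel
    cudaDeviceSynchronize();   // wait for it to finish
    std::printf("done\n");
    return 0;
}

Typical remedies are to reference the helper, delete it, or annotate it (for example with [[maybe_unused]]); the -diag-suppress 177 route suggested by the remark in the log silences the whole diagnostic class for the file rather than this one instance.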
+ cur=2
+ echo 2
+ set +x
cat: /home/ggml/results/llama.cpp/d8/ee90222791afff2ab666ded4cb6195fd94cced/ggml-4-x86-cuda-v100/ctest_debug-ctest.log: No such file or directory
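
The cat failure on the last line is a downstream symptom rather than a second error: make exited with status 2 (captured as cur=2 above, as far as this excerpt shows), so the script never reached the ctest step and ctest_debug-ctest.log was never written.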