Replies: 9 comments 18 replies
-
So KoboldCPP-v1.76.yr0 works for you, and KoboldCPP-v1.76.yr1 does not?
-
Owner of a 6750XT, can confirm this is exactly the case too. Same error.
-
I'll look into it more this evening, but it doesn't really make sense to me why there's an issue, because the only change I made from 1.76 to 1.76.yr1 was using the official ROCm GPU files 🥲🕵️🖥️ and they provide gfx1031 files... I guess the ones I compiled had a magic touch in them 😜
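As an aside for anyone hitting this on gfx1031 cards (6700XT/6750XT): when a prebuilt ROCm binary only ships gfx1030 kernels, a commonly used workaround is the `HSA_OVERRIDE_GFX_VERSION` environment variable, which makes the ROCm runtime treat the GPU as gfx1030. A minimal sketch, assuming a Linux shell and a build that includes gfx1030 code objects (the launch command is illustrative, not this repo's exact invocation):

```shell
# Report the gfx1031 GPU to the runtime as gfx1030, whose kernels
# are the ones most prebuilt ROCm binaries actually ship.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Then launch as usual, e.g.:
# ./koboldcpp-rocm --usecublas

# Confirm the override is visible to child processes:
echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
```

This only papers over missing gfx1031 binaries; builds that genuinely include gfx1031 objects (like the ones discussed above) shouldn't need it.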
-
let me know if this version works for you guys https://github.com/YellowRoseCx/koboldcpp-rocm/releases/tag/v1.79.1.yr0-ROCm |
-
TY - but still getting the error [6700XT]: ROCm error: CUBLAS_STATUS_INTERNAL_ERROR
-
Same problem with a 6600XT since the 1.74 version. Downloaded 1.79.1.yr1 and it's working now. Thank you, awesome work!!
-
I hate to necro this, but I think we might be back to the same issue with v1.80.3.yr0. It closes almost immediately after the first message is sent.
-
Issues here too with a 6750XT. Up until 1.79.1 it worked fine, but from 1.80 on it gives different errors.
Using Python 3.12.3 with a Ryzen 5600X and an AMD 6750XT.
-
My 6700XT only works up to KoboldCPP-v1.76.yr0-ROCm; anything beyond that and I get the following error:
ROCm error: CUBLAS_STATUS_INTERNAL_ERROR
current device: 0, in function ggml_cuda_mul_mat_batched_cublas at D:/a/koboldcpp-rocm/koboldcpp-rocm/ggml/src/ggml-cuda.cu:1881
hipblasGemmBatchedEx(ctx.cublas_handle(), HIPBLAS_OP_T, HIPBLAS_OP_N, ne01, ne11, ne10, alpha, (const void **) (ptrs_src.get() + 0*ne23), HIPBLAS_R_16F, nb01/nb00, (const void **) (ptrs_src.get() + 1*ne23), HIPBLAS_R_16F, nb11/nb10, beta, (void **) (ptrs_dst.get() + 0*ne23), cu_data_type, ne01, ne23, cu_compute_type, HIPBLAS_GEMM_DEFAULT)
D:/a/koboldcpp-rocm/koboldcpp-rocm/ggml/src/ggml-cuda.cu:72: ROCm error