KoboldCpp - Version 1.76.yr1-ROCm stopped working using HipBLAS #76
-
Hi there. I've been using this for a while, when it suddenly stopped working (AMD R7 5800XT + RX 7800 XT). Not sure what happened. It runs just fine if I select Vulkan, but HipBLAS stopped working today. The error I'm getting is: `Loading model: E:\Mod Organizer 2\Skyrim SE\Tools\KoboldCPP\Chimera-Apex-7B.Q6_K.gguf The reported GGUF Arch is: llama Identified as GGUF model: (ver 6) Using automatic RoPE scaling for GGUF. If the model has custom RoPE settings, they'll be used directly instead!`
-
OK, it seems my problem is the same as this one: #58. Using [KoboldCPP-v1.67.yr0-ROCm] fixes the issue and is also lightning fast. I'm using the allv3 exe for my RX 7800 XT. It seems that versions after this one have an issue with Llama-based models.