I always get an error when I use `glli/ART_GuideModel` as `llava_lora_path` in `build_llava`.
Then I found `model_name = get_model_name_from_path(llava_lora_path)`, which actually means `model_name = "ART_GuideModel"`.
However, in `load_pretrained_model`, if I want to load a LLaVA model fine-tuned with LoRA, `model_name` should contain both "llava" and "lora".
Should I change `model_name`?
Yes, you are right. When we uploaded our LoRA model to Hugging Face, we forgot that the LLaVA builder does strict name matching. Please simply rename the folder from ART_GuideModel to something like llava-lora-mistral, which is what we use on our local machine.
We will add a note to the README. Thanks for pointing it out.
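For anyone hitting the same thing, here is a minimal sketch of why the folder name matters. This is not the actual LLaVA source; `pick_loading_branch` is a hypothetical helper that mimics the substring checks the real `load_pretrained_model` performs on `model_name`:

```python
def get_model_name_from_path(model_path):
    # LLaVA derives the model name from the last path component
    # (or the last two, when the folder is a checkpoint dir).
    model_path = model_path.strip("/")
    parts = model_path.split("/")
    if parts[-1].startswith("checkpoint-"):
        return parts[-2] + "_" + parts[-1]
    return parts[-1]


def pick_loading_branch(model_name):
    # Hypothetical stand-in for the branching inside load_pretrained_model:
    # the LoRA-merging path is only taken when the derived name contains
    # BOTH "llava" and "lora" (case-insensitive).
    name = model_name.lower()
    if "llava" in name and "lora" in name:
        return "llava-lora"
    elif "llava" in name:
        return "llava"
    return "base"


# "glli/ART_GuideModel" yields model_name "ART_GuideModel", which matches
# neither substring, so the LoRA branch is never reached.
print(pick_loading_branch(get_model_name_from_path("glli/ART_GuideModel")))

# After renaming the folder to llava-lora-mistral, the name matches both
# substrings and the LoRA loading path is taken.
print(pick_loading_branch(get_model_name_from_path("glli/llava-lora-mistral")))
```

So no code change is needed; renaming the downloaded folder is enough to steer the builder into the correct branch.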