
zjunlp/CaMA-13B-LoRA #70

Closed
Hubotcoder opened this issue Sep 26, 2023 · 4 comments

@Hubotcoder

Does the full-parameter model 'knowlm-13b-zhixi' need to load LoRA weights?
Also, I can't find 'zjunlp/CaMA-13B-LoRA' on huggingface.co.

```
(zhixi) root@189e68eaf90d:/app/KnowLM# CUDA_VISIBLE_DEVICES=0 python examples/generate_lora_web.py --base_model ./zjunlp_knowlm-13b-zhixi

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

/opt/conda/envs/zhixi/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /opt/conda/envs/zhixi did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
/opt/conda/envs/zhixi/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/cuda/lib64')}
  warn(msg)
ERROR: python: undefined symbol: cudaRuntimeGetVersion
CUDA SETUP: libcudart.so path is None
CUDA SETUP: It seems that your cuda installation is not in your path. See bitsandbytes-foundation/bitsandbytes#85 for more information.
CUDA SETUP: CUDA version lower than 11 are currently not supported for LLM.int8(). You will be only to use 8-bit optimizers and quantization routines!!
/opt/conda/envs/zhixi/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
  warn(msg)
CUDA SETUP: Highest compute capability among GPUs detected: 8.9
CUDA SETUP: Detected CUDA version 00
CUDA SETUP: Loading binary /opt/conda/envs/zhixi/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
/opt/conda/envs/zhixi/lib/python3.9/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [02:01<00:00, 40.42s/it]
'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /zjunlp/CaMA-13B-LoRA/resolve/main/adapter_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f482d22e340>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 83829c70-c0f-439d-882b-1a79f4131ead)')' thrown while requesting HEAD https://huggingface.co/zjunlp/CaMA-13B-LoRA/resolve/main/adapter_config.json
Traceback (most recent call last):
  File "/opt/conda/envs/zhixi/lib/python3.9/site-packages/peft/utils/config.py", line 99, in from_pretrained
    config_file = hf_hub_download(pretrained_model_name_or_path, CONFIG_NAME)
  File "/opt/conda/envs/zhixi/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/opt/conda/envs/zhixi/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1291, in hf_hub_download
    raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Connection error, and we cannot find the requested files in the disk cache. Please try again or make sure your Internet connection is on.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/KnowLM/examples/generate_lora_web.py", line 209, in <module>
    fire.Fire(main)
  File "/opt/conda/envs/zhixi/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/opt/conda/envs/zhixi/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/opt/conda/envs/zhixi/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/app/KnowLM/examples/generate_lora_web.py", line 48, in main
    model = PeftModel.from_pretrained(
  File "/opt/conda/envs/zhixi/lib/python3.9/site-packages/peft/peft_model.py", line 135, in from_pretrained
    config = PEFT_TYPE_TO_CONFIG_MAPPING[PeftConfig.from_pretrained(model_id).peft_type].from_pretrained(model_id)
  File "/opt/conda/envs/zhixi/lib/python3.9/site-packages/peft/utils/config.py", line 101, in from_pretrained
    raise ValueError(f"Can't find config.json at '{pretrained_model_name_or_path}'")
ValueError: Can't find config.json at 'zjunlp/CaMA-13B-LoRA'
```

@MikeDean2367
Collaborator

Hi, the model knowlm-13b-zhixi does not need LoRA weights loaded, since the LoRA weights have already been merged into it. Judging from the information you've provided, you are running the generate_lora_web.py script, and within that script the variable lora_weights does not appear to be used. Have you made any changes to this file?

Please let me know if you have any other questions :)
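The distinction above can be sketched in code. This is a hypothetical illustration, not the script's actual API: the helper names and the "zhixi means merged" selection rule are assumptions; only the checkpoint path and the `zjunlp/CaMA-13B-LoRA` adapter id come from the thread.

```python
# Hypothetical sketch: the merged knowlm-13b-zhixi checkpoint already contains
# the LoRA weights, so wrapping it in PeftModel (the step that tries to fetch
# zjunlp/CaMA-13B-LoRA from the Hub) can be skipped entirely.

def should_wrap_with_peft(base_model: str) -> bool:
    """Only an un-merged base checkpoint would need a separate LoRA adapter;
    the merged 'zhixi' checkpoint does not (rule assumed for this sketch)."""
    return "zhixi" not in base_model

def load_model(base_model: str, lora_weights: str = "zjunlp/CaMA-13B-LoRA"):
    # Imports kept local so the sketch itself stays lightweight.
    from transformers import LlamaForCausalLM
    model = LlamaForCausalLM.from_pretrained(base_model)
    if should_wrap_with_peft(base_model):
        # Only here does the script need network/cache access to the adapter.
        from peft import PeftModel
        model = PeftModel.from_pretrained(model, lora_weights)
    return model
```

Under this rule, `load_model("./zjunlp_knowlm-13b-zhixi")` never touches the Hub for an adapter, which is why the connection error would disappear.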

@Hubotcoder
Author

Hi, thank you for your response. I did not make any changes to this file. I just pulled the Docker image and ran the script in the container.

@MikeDean2367
Collaborator

Upon reviewing the error messages, there appear to be two issues. The first is a network problem; the second points to line 48 of generate_lora_web.py. However, upon examining the code, line 48 appears to be commented out. Please verify whether line 48 in your copy of the file is, in fact, commented out.
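For reference, the fix amounts to keeping the PeftModel wrapping disabled. A hedged sketch of what the commented-out span around line 48 of generate_lora_web.py might look like (the exact wording and arguments are assumptions; the traceback only confirms a `PeftModel.from_pretrained(` call at that line):

```python
# Hypothetical fragment of generate_lora_web.py around line 48: with the
# merged knowlm-13b-zhixi checkpoint this block stays commented out, so
# `model` remains the plain transformers model loaded above it.
#
# model = PeftModel.from_pretrained(
#     model,
#     lora_weights,   # would otherwise default to "zjunlp/CaMA-13B-LoRA"
# )
```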

@Hubotcoder
Author

Thank you! I commented out lines 48-52, and it works now!
