Re-org XPU finetune images #10971
Conversation
### 1. Prepare Docker Image

You can download directly from Dockerhub like:

```bash
docker pull intelanalytics/ipex-llm-finetune-qlora-xpu:2.1.0-SNAPSHOT
docker pull intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT
```
why not use ipex-llm-finetune-qlora-xpu as image name here?
This image supports QLoRA, LoRA, QA-LoRA, etc., so "qlora" alone no longer represents the full set of examples.
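For context, pulling and checking the consolidated image might look like the following sketch. The image name and tag are taken from the diff above; the `grep` check is purely illustrative:

```bash
# Pull the consolidated fine-tuning image (name/tag from the diff above)
docker pull intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT

# List local images and confirm the pull succeeded
docker images | grep ipex-llm-finetune-xpu
```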
docker/llm/finetune/xpu/README.md (outdated)
```diff
- The following shows how to fine-tune LLM with Quantization (QLoRA built on IPEX-LLM 4bit optimizations) in a docker environment, which is accelerated by Intel XPU.
+ The following shows how to finetune LLM with Quantization (QLoRA built on IPEX-LLM 4bit optimizations) in a docker environment, which is accelerated by Intel XPU.
```
Remove "(QLoRA built on IPEX-LLM 4bit optimizations)" here, or make it more generic, since this image supports various optimizations for QLoRA, LoRA, QA-LoRA, etc.
Done. Added an examples link in the README.

We still need a quick start for "Finetune with docker on (multi-)GPUs".
```bash
  -e http_proxy=${HTTP_PROXY} \
  -e https_proxy=${HTTPS_PROXY} \
  -v $BASE_MODE_PATH:/model \
  -v $DATA_PATH:/data/alpaca-cleaned \
```
is the dataset always alpaca-cleaned?
No. We just use alpaca-cleaned as an example dataset.
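Putting the fragment above into context, a complete `docker run` invocation might look like the sketch below. This is assumption-laden: the `--net`/`--device` flags, the container name, and the host paths are illustrative and not taken from this PR, and the alpaca-cleaned mount is only an example dataset per the discussion above.

```bash
# Illustrative host paths -- replace with your own
export BASE_MODE_PATH=/path/to/base-model
export DATA_PATH=/path/to/alpaca-cleaned

# Sketch of a full run command around the flags shown in the diff;
# the extra flags here are assumptions, not taken from this PR
docker run -itd \
  --net=host \
  --device=/dev/dri \
  --name=ipex-llm-finetune-xpu \
  -e http_proxy=${HTTP_PROXY} \
  -e https_proxy=${HTTPS_PROXY} \
  -v $BASE_MODE_PATH:/model \
  -v $DATA_PATH:/data/alpaca-cleaned \
  intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT
```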
Then, start QA-LoRA fine-tuning:

```bash
bash qalora_finetune_llama2_7b_arc_1_card.sh
```
Should this use the same naming style as "start-qlora-finetuning-on-xpu"?
This script comes from https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/example/GPU/LLM-Finetuning/QA-LoRA/qalora_finetune_llama2_7b_arc_1_card.sh. We can provide a copy if a customer wants a script for this example.
Then, start LoRA fine-tuning:

```bash
accelerate launch finetune.py lora.yml
```
Should we also provide a script like "start-qlora-finetuning-on-xpu"?
After merging the images into one, we have too many examples to script individually. We can provide scripts when customers require them.

Yes. We can open another PR for this.
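If such a wrapper were added in a follow-up PR, it might be a thin script in the "start-qlora-finetuning-on-xpu" naming style. Everything below is hypothetical, including the script name and the working directory:

```bash
#!/bin/bash
# Hypothetical start-lora-finetuning-on-xpu.sh -- not part of this PR
set -e

# Placeholder: directory inside the container holding finetune.py and lora.yml
cd /path/to/axolotl-example

accelerate launch finetune.py lora.yml
```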
LGTM
Description

Rename `ipex-llm-finetune-qlora-xpu` to `ipex-llm-finetune-xpu`.

1. Why the change?
2. User API changes
3. Summary of the change
4. How to test?
5. New dependencies