
Re-org XPU finetune images #10971

Merged
merged 10 commits into from
May 15, 2024

Conversation

qiyuangong
Contributor

@qiyuangong qiyuangong commented May 9, 2024

Description

  • Rename the XPU finetune image from ipex-llm-finetune-qlora-xpu to ipex-llm-finetune-xpu.
  • Add axolotl to the XPU finetune image.
  • Upgrade peft to 0.10.0 and transformers to 4.36.0.
  • Add a default accelerate config to the home directory.
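The last bullet refers to an accelerate default config shipped in the image's home directory. The exact file is not shown in this PR; a hypothetical sketch of what such a config contains for single-node XPU training (all values here are illustrative assumptions):

```yaml
# Hypothetical ~/.cache/huggingface/accelerate/default_config.yaml;
# the actual file baked into the image may use different values.
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_XPU
num_machines: 1
num_processes: 1
machine_rank: 0
mixed_precision: bf16
use_cpu: false
```

With such a file in place, `accelerate launch` can be invoked without an interactive `accelerate config` step.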

1. Why the change?

2. User API changes

3. Summary of the change

4. How to test?

  • N/A
  • Unit test
  • Application test
  • Document test
  • ...

5. New dependencies

  • New Python dependencies
    - Dependency1
    - Dependency2
    - ...
  • New Java/Scala dependencies and their license
    - Dependency1 and license1
    - Dependency2 and license2
    - ...

@qiyuangong qiyuangong marked this pull request as draft May 9, 2024 06:53
@qiyuangong qiyuangong changed the title Re-org finetune images Re-org XPU finetune images May 13, 2024
@qiyuangong qiyuangong self-assigned this May 13, 2024
@qiyuangong qiyuangong marked this pull request as ready for review May 13, 2024 01:39

### 1. Prepare Docker Image

You can pull the image directly from Docker Hub:

```diff
-docker pull intelanalytics/ipex-llm-finetune-qlora-xpu:2.1.0-SNAPSHOT
+docker pull intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT
```
Contributor

why not use ipex-llm-finetune-qlora-xpu as image name here?

Contributor Author

This image supports QLoRA, LoRA, QA-LoRA, etc., so "qlora" alone no longer describes what the image offers.


```diff
-The following shows how to fine-tune LLM with Quantization (QLoRA built on IPEX-LLM 4bit optimizations) in a docker environment, which is accelerated by Intel XPU.
+The following shows how to finetune LLM with Quantization (QLoRA built on IPEX-LLM 4bit optimizations) in a docker environment, which is accelerated by Intel XPU.
```
Contributor

Remove (QLoRA built on IPEX-LLM 4bit optimizations) here or make it more generic since this image supports various optimizations for QLoRA, LoRA, QALoRA, etc.

Contributor Author

Done. Added examples link in README.

@glorysdj
Contributor

need a quick start for "Finetune with docker on (multi-)GPUs"
https://github.com/analytics-zoo/nano/issues/1361#issuecomment-2105506189

```bash
-e http_proxy=${HTTP_PROXY} \
-e https_proxy=${HTTPS_PROXY} \
-v $BASE_MODE_PATH:/model \
-v $DATA_PATH:/data/alpaca-cleaned \
```
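For context, flags like these typically appear in a full `docker run` invocation. A hedged sketch of launching the renamed image (the container name and the `--net`/`--device` flags are assumptions based on typical Intel GPU container usage, not copied from this PR; `BASE_MODE_PATH` and `DATA_PATH` follow the variable names in the diff above):

```bash
# Sketch: launching the renamed finetune image with model/data mounts.
# --device=/dev/dri is the usual way to expose Intel GPUs to a container.
export BASE_MODE_PATH=/path/to/Llama-2-7b-hf   # hypothetical local model path
export DATA_PATH=/path/to/alpaca-cleaned       # hypothetical local dataset path
docker run -itd \
  --net=host \
  --device=/dev/dri \
  --name=ipex-llm-finetune-xpu \
  -e http_proxy=${HTTP_PROXY} \
  -e https_proxy=${HTTPS_PROXY} \
  -v $BASE_MODE_PATH:/model \
  -v $DATA_PATH:/data/alpaca-cleaned \
  intelanalytics/ipex-llm-finetune-xpu:2.1.0-SNAPSHOT
```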
Contributor

is the dataset always alpaca-cleaned?

Contributor Author

No. alpaca-cleaned is only used as an example here.

Then, start QA-LoRA fine-tuning:

```bash
bash qalora_finetune_llama2_7b_arc_1_card.sh
```
Contributor

Should this use the same naming style as "start-qlora-finetuning-on-xpu"?

Contributor Author

Then, start LoRA fine-tuning:

```bash
accelerate launch finetune.py lora.yml
```
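For readers unfamiliar with axolotl, `lora.yml` here is an axolotl-style training config. A minimal illustrative sketch of such a file (all field values are assumptions for illustration, not the file shipped in the image):

```yaml
# Illustrative axolotl LoRA config; the actual lora.yml in the image may differ.
base_model: NousResearch/Llama-2-7b-hf
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
datasets:
  - path: /data/alpaca-cleaned
    type: alpaca
micro_batch_size: 2
num_epochs: 1
output_dir: ./lora-out
```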
Contributor

also provide a script like start-qlora-finetuning-on-xpu?

Contributor Author

After merging the images into one, there are too many examples to provide a script for each. We can add scripts when customers request them.

@qiyuangong
Contributor Author

need a quick start for "Finetune with docker on (multi-)GPUs" analytics-zoo/nano#1361 (comment)

Yes. We can open another PR for this.

Contributor

@glorysdj glorysdj left a comment

LGTM

@qiyuangong qiyuangong merged commit 1e00bd7 into intel:main May 15, 2024
@qiyuangong qiyuangong deleted the finetunexpu branch May 15, 2024 01:42