From f440cb4fbaaca86d18f120061a37d91d02087c19 Mon Sep 17 00:00:00 2001
From: Jason Dai
Date: Tue, 6 Feb 2024 12:59:17 +0800
Subject: [PATCH] Update Self-Speculative Decoding Readme (#10102)

---
 README.md                                           |  1 +
 .../Inference/Self_Speculative_Decoding.md          | 23 +++++++++++++++++++
 docs/readthedocs/source/index.rst                   |  1 +
 3 files changed, 25 insertions(+)
 create mode 100644 docs/readthedocs/source/doc/LLM/Inference/Self_Speculative_Decoding.md

diff --git a/README.md b/README.md
index 86b6ebd20e6..8d802cdfa92 100644
--- a/README.md
+++ b/README.md
@@ -12,6 +12,7 @@
 > *It is built on the excellent work of [llama.cpp](https://github.com/ggerganov/llama.cpp), [bitsandbytes](https://github.com/TimDettmers/bitsandbytes), [qlora](https://github.com/artidoro/qlora), [gptq](https://github.com/IST-DASLab/gptq), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [awq](https://github.com/mit-han-lab/llm-awq), [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), [vLLM](https://github.com/vllm-project/vllm), [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), [gptq_for_llama](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [chatglm.cpp](https://github.com/li-plus/chatglm.cpp), [redpajama.cpp](https://github.com/togethercomputer/redpajama.cpp), [gptneox.cpp](https://github.com/byroneverson/gptneox.cpp), [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp/), etc.*
 
 ### Latest update 🔥
+- [2024/02] `bigdl-llm` now supports *[Self-Speculative Decoding](https://bigdl.readthedocs.io/en/main/doc/LLM/Inference/Self_Speculative_Decoding.html)*, which in practice brings **~30% speedup** for FP16 and BF16 inference latency on Intel [GPU](python/llm/example/GPU/Speculative-Decoding) and [CPU](python/llm/example/CPU/Speculative-Decoding) respectively
 - [2024/02] `bigdl-llm` now supports a comprehensive list of LLM finetuning on Intel GPU (including [LoRA](python/llm/example/GPU/LLM-Finetuning/LoRA), [QLoRA](python/llm/example/GPU/LLM-Finetuning/QLoRA), [DPO](python/llm/example/GPU/LLM-Finetuning/DPO), [QA-LoRA](python/llm/example/GPU/LLM-Finetuning/QA-LoRA) and [ReLoRA](python/llm/example/GPU/LLM-Finetuning/ReLora))
 - [2024/01] 🔔🔔🔔 ***The default `bigdl-llm` GPU Linux installation has switched from PyTorch 2.0 to PyTorch 2.1, which requires new oneAPI and GPU driver versions. (See the [GPU installation guide](https://bigdl.readthedocs.io/en/latest/doc/LLM/Overview/install_gpu.html) for more details.)***
 - [2023/12] `bigdl-llm` now supports [ReLoRA](python/llm/example/GPU/LLM-Finetuning/ReLora) (see *["ReLoRA: High-Rank Training Through Low-Rank Updates"](https://arxiv.org/abs/2307.05695)*)
diff --git a/docs/readthedocs/source/doc/LLM/Inference/Self_Speculative_Decoding.md b/docs/readthedocs/source/doc/LLM/Inference/Self_Speculative_Decoding.md
new file mode 100644
index 00000000000..403763422f8
--- /dev/null
+++ b/docs/readthedocs/source/doc/LLM/Inference/Self_Speculative_Decoding.md
@@ -0,0 +1,23 @@
+# Self-Speculative Decoding
+
+### Speculative Decoding in Practice
+In [speculative](https://arxiv.org/abs/2302.01318) [decoding](https://arxiv.org/abs/2211.17192), a small (draft) model quickly generates multiple draft tokens, which are then verified in parallel by the large (target) model. While speculative decoding can effectively speed up the target model, ***in practice it is difficult to maintain or even obtain a proper draft model***, especially when the target model is finetuned with customized data.
+
+### Self-Speculative Decoding
+Built on top of the concept of “[self-speculative decoding](https://arxiv.org/abs/2309.08168)”, BigDL-LLM can now accelerate the original FP16 or BF16 model ***without the need for a separate draft model or model finetuning***; instead, it automatically converts the original model to INT4 and uses the INT4 model as the draft model behind the scenes. In practice, this brings ***~30% speedup*** for FP16 and BF16 LLM inference latency on Intel GPU and CPU respectively.
+
+### Using BigDL-LLM Self-Speculative Decoding
+Please refer to the BigDL-LLM self-speculative decoding code snippet below, and the complete [GPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/Speculative-Decoding) and [CPU](https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/Speculative-Decoding) examples in the project repo.
+
+```python
+model = AutoModelForCausalLM.from_pretrained(model_path,
+                                             optimize_model=True,
+                                             torch_dtype=torch.float16,  # use bfloat16 on CPU
+                                             load_in_low_bit="fp16",     # use bf16 on CPU
+                                             speculative=True,           # set speculative to True
+                                             trust_remote_code=True,
+                                             use_cache=True)
+output = model.generate(input_ids,
+                        max_new_tokens=args.n_predict,
+                        do_sample=False)
+```
diff --git a/docs/readthedocs/source/index.rst b/docs/readthedocs/source/index.rst
index 5ba26500f10..e5b900b112b 100644
--- a/docs/readthedocs/source/index.rst
+++ b/docs/readthedocs/source/index.rst
@@ -24,6 +24,7 @@ BigDL-LLM: low-Bit LLM library
 ============================================
 Latest update 🔥
 ============================================
+- [2024/02] ``bigdl-llm`` now supports `Self-Speculative Decoding `_, which in practice brings **~30% speedup** for FP16 and BF16 inference latency on Intel `GPU `_ and `CPU `_ respectively
 - [2024/02] ``bigdl-llm`` now supports a comprehensive list of LLM finetuning on Intel GPU (including `LoRA `_, `QLoRA `_, `DPO `_, `QA-LoRA `_ and `ReLoRA `_)
 - [2024/01] 🔔🔔🔔 **The default** ``bigdl-llm`` **GPU Linux installation has switched from PyTorch 2.0 to PyTorch 2.1, which requires new oneAPI and GPU driver versions. (See the** `GPU installation guide `_ **for more details.)**
 - [2023/12] ``bigdl-llm`` now supports `ReLoRA `_ (see `"ReLoRA: High-Rank Training Through Low-Rank Updates" `_)
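For anyone trying the snippet in the new doc page outside the linked examples, a minimal end-to-end sketch of the CPU (BF16) path might look like the following. The model path, prompt, and `max_new_tokens` value are placeholders standing in for the `model_path`, `input_ids`, and `args.n_predict` names in the snippet, and the `bigdl.llm.transformers` import follows the usage in the linked GPU/CPU examples; this is an illustration under those assumptions, not part of the patch above.

```python
import torch
from transformers import AutoTokenizer
# BigDL-LLM's drop-in replacement for transformers' AutoModelForCausalLM
from bigdl.llm.transformers import AutoModelForCausalLM

# Placeholder model path: any supported FP16/BF16 Hugging Face model directory or ID
model_path = "meta-llama/Llama-2-7b-chat-hf"

# Load the model in BF16 on CPU with self-speculative decoding enabled
# (the INT4 draft model is created automatically when speculative=True).
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    optimize_model=True,
    torch_dtype=torch.bfloat16,   # BF16 on CPU; the GPU examples use torch.float16
    load_in_low_bit="bf16",       # "fp16" in the GPU examples
    speculative=True,             # enable self-speculative decoding
    trust_remote_code=True,
    use_cache=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Placeholder prompt and token budget (stands in for args.n_predict)
prompt = "Once upon a time, there existed a little girl who liked to have adventures."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.inference_mode():
    output = model.generate(input_ids,
                            max_new_tokens=128,
                            do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The GPU examples linked in the doc page follow the same pattern with the FP16 settings noted in the snippet's comments, additionally moving the model and inputs to the XPU device; see the repo examples for the exact environment setup.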