FlashInfer is a kernel library for LLM serving. It can be used in the SGLang runtime to accelerate attention computation.
Note: The compilation can take a very long time.
```bash
git submodule update --init --recursive
pip install 3rdparty/flashinfer/python
```
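To confirm the build succeeded, a quick import check can be run (a minimal sketch; the `flashinfer` module name is assumed from the package path above):

```bash
# Verify that the compiled extension imports without errors.
python -c "import flashinfer; print(flashinfer.__name__)"
```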
Add the `--model-mode flashinfer` argument to enable FlashInfer when launching a server.
Example:

```bash
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --model-mode flashinfer
```
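Once the server is up, it can be queried over HTTP. The sketch below assumes the SGLang server's `/generate` endpoint and its JSON request schema; adjust the fields if your version differs:

```bash
# Send a simple completion request to the locally running server on port 30000.
curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "The capital of France is", "sampling_params": {"max_new_tokens": 16, "temperature": 0}}'
```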