I saw your code referring to PD disaggregation. Please tell me how to use it.
Demo start args:
# pd_master
python -m lightllm.server.api_server --model_dir /dev/shm/llama2-7b --run_mode "pd_master" \
    --host $(hostname -i) --port 60011

# prefill
nvidia-cuda-mps-control -d
KV_TRANS_USE_P2P=1 LOADWORKER=1 python -m lightllm.server.api_server --model_dir /dev/shm/llama2-7b \
    --run_mode "prefill" --host $(hostname -i) --port 8017 --tp 4 --nccl_port 2732 \
    --max_total_token_num 400000 --tokenizer_mode fast --pd_master_ip 10.121.4.14 --pd_master_port 60011 \
    --use_dynamic_prompt_cache --max_req_total_len 16000 --running_max_req_size 128 --disable_cudagraph

# decode
nvidia-cuda-mps-control -d
CUDA_VISIBLE_DEVICES=4,5,6,7 KV_TRANS_USE_P2P=1 LOADWORKER=10 python -m lightllm.server.api_server \
    --model_dir /dev/shm/llama2-7b --run_mode "decode" --host $(hostname -i) --port 8118 --nccl_port 12322 \
    --tp 4 --max_total_token_num 400000 --graph_max_len_in_batch 2048 --graph_max_batch_size 16 \
    --tokenizer_mode fast --pd_master_ip 10.121.4.14 --pd_master_port 60011 --use_dynamic_prompt_cache
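Once all three servers are up, clients send requests only to the pd_master, which routes each request through the prefill and decode nodes. Here is a minimal client sketch, assuming lightllm's usual /generate HTTP endpoint and payload schema; the endpoint path, field names, and sampling parameters below are assumptions, so verify them against the version you are running:

import requests

# pd_master host/port from the demo args above.
# NOTE: the /generate path and the "inputs"/"parameters" payload layout are
# assumptions about lightllm's HTTP API -- check your installed version.
PD_MASTER_URL = "http://10.121.4.14:60011/generate"

payload = {
    "inputs": "What is PD disaggregation?",
    "parameters": {
        "max_new_tokens": 128,
        "temperature": 0.7,
    },
}

# The pd_master coordinates prefill and decode behind this single endpoint.
resp = requests.post(PD_MASTER_URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())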
Note that not all models and run modes support PD disaggregation.