
[Bug]: Later version have degradation based on vllm:time_to_first_token_seconds_sum metric #8819

Closed
1 task done
oandreeva-nv opened this issue Sep 25, 2024 · 4 comments
Labels
bug Something isn't working

Comments


oandreeva-nv commented Sep 25, 2024

Your current environment

The output of `python collect_env.py`
GPU NVIDIA RTX 5880

Model Input Dumps

No response

🐛 Describe the bug

I've noticed a degradation after vLLM v0.5.3.post1. For example, for a simple model (facebook/opt-125m), start the server with:

python3 -m vllm.entrypoints.openai.api_server --model facebook/opt-125m

Send a request, then query the metrics:

$ curl http://127.0.0.1:8000/metrics
INFO:     127.0.0.1:55690 - "GET /metrics HTTP/1.1" 200 OK
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 11541.0
python_gc_objects_collected_total{generation="1"} 10139.0
python_gc_objects_collected_total{generation="2"} 2032.0
# HELP python_gc_objects_uncollectable_total Uncollectable objects found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 1240.0
python_gc_collections_total{generation="1"} 111.0
python_gc_collections_total{generation="2"} 79.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="10",patchlevel="12",version="3.10.12"} 1.0
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 6.9204963328e+010
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 7.972794368e+09
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.72729061615e+09
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 30.3
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 79.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP vllm:cache_config_info information of cache_config
# TYPE vllm:cache_config_info gauge
vllm:cache_config_info{block_size="16",cache_dtype="auto",cpu_offload_gb="0",enable_prefix_caching="False",gpu_memory_utilization="0.9",num_cpu_blocks="7281",num_gpu_blocks="76416",num_gpu_blocks_override="None",sliding_window="None",swap_space_bytes="4294967296"} 1.0
# HELP vllm:num_requests_running Number of requests currently running on GPU.
# TYPE vllm:num_requests_running gauge
vllm:num_requests_running{model_name="facebook/opt-125m"} 0.0
# HELP vllm:num_requests_waiting Number of requests waiting to be processed.
# TYPE vllm:num_requests_waiting gauge
vllm:num_requests_waiting{model_name="facebook/opt-125m"} 0.0
# HELP vllm:num_requests_swapped Number of requests swapped to CPU.
# TYPE vllm:num_requests_swapped gauge
vllm:num_requests_swapped{model_name="facebook/opt-125m"} 0.0
# HELP vllm:gpu_cache_usage_perc GPU KV-cache usage. 1 means 100 percent usage.
# TYPE vllm:gpu_cache_usage_perc gauge
vllm:gpu_cache_usage_perc{model_name="facebook/opt-125m"} 0.0
# HELP vllm:cpu_cache_usage_perc CPU KV-cache usage. 1 means 100 percent usage.
# TYPE vllm:cpu_cache_usage_perc gauge
vllm:cpu_cache_usage_perc{model_name="facebook/opt-125m"} 0.0
# HELP vllm:num_preemptions_total Cumulative number of preemption from the engine.
# TYPE vllm:num_preemptions_total counter
vllm:num_preemptions_total{model_name="facebook/opt-125m"} 0.0
# HELP vllm:prompt_tokens_total Number of prefill tokens processed.
# TYPE vllm:prompt_tokens_total counter
vllm:prompt_tokens_total{model_name="facebook/opt-125m"} 5.0
# HELP vllm:generation_tokens_total Number of generation tokens processed.
# TYPE vllm:generation_tokens_total counter
vllm:generation_tokens_total{model_name="facebook/opt-125m"} 100.0
# HELP vllm:time_to_first_token_seconds Histogram of time to first token in seconds.
# TYPE vllm:time_to_first_token_seconds histogram
vllm:time_to_first_token_seconds_bucket{le="0.001",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.005",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.01",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.02",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.04",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.06",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.08",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.1",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.25",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.5",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.75",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="2.5",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="7.5",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_count{model_name="facebook/opt-125m"} 1.0
**_vllm:time_to_first_token_seconds_sum{model_name="facebook/opt-125m"} 9.322166442871094e-05_**
# HELP vllm:time_per_output_token_seconds Histogram of time per output token in seconds.
# TYPE vllm:time_per_output_token_seconds histogram
vllm:time_per_output_token_seconds_bucket{le="0.01",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.025",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.05",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.075",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.1",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.15",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.2",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.3",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.4",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.5",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.75",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="1.0",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="2.5",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="+Inf",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_count{model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_sum{model_name="facebook/opt-125m"} 0.007464408874511719
# HELP vllm:e2e_request_latency_seconds Histogram of end to end request latency in seconds.
# TYPE vllm:e2e_request_latency_seconds histogram
vllm:e2e_request_latency_seconds_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="2.5",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="15.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="30.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="40.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="50.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="60.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_count{model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_sum{model_name="facebook/opt-125m"} 0.5373260974884033
# HELP vllm:request_prompt_tokens Number of prefill tokens processed.
# TYPE vllm:request_prompt_tokens histogram
vllm:request_prompt_tokens_bucket{le="1.0",model_name="facebook/opt-125m"} 0.0
vllm:request_prompt_tokens_bucket{le="2.0",model_name="facebook/opt-125m"} 0.0
vllm:request_prompt_tokens_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="50.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="100.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="200.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="500.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="1000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="2000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_count{model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_sum{model_name="facebook/opt-125m"} 5.0
# HELP vllm:request_generation_tokens Number of generation tokens processed.
# TYPE vllm:request_generation_tokens histogram
vllm:request_generation_tokens_bucket{le="1.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="2.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="5.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="10.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="20.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="50.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="100.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="200.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="500.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="1000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="2000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_count{model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_sum{model_name="facebook/opt-125m"} 100.0
# HELP vllm:request_params_best_of Histogram of the best_of request parameter.
# TYPE vllm:request_params_best_of histogram
vllm:request_params_best_of_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="2.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_count{model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_sum{model_name="facebook/opt-125m"} 1.0
# HELP vllm:request_params_n Histogram of the n request parameter.
# TYPE vllm:request_params_n histogram
vllm:request_params_n_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="2.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_count{model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_sum{model_name="facebook/opt-125m"} 1.0
# HELP vllm:request_success_total Count of successfully processed requests.
# TYPE vllm:request_success_total counter
vllm:request_success_total{finished_reason="length",model_name="facebook/opt-125m"} 1.0
# HELP vllm:spec_decode_draft_acceptance_rate Speculative token acceptance rate.
# TYPE vllm:spec_decode_draft_acceptance_rate gauge
# HELP vllm:spec_decode_efficiency Speculative decoding system efficiency.
# TYPE vllm:spec_decode_efficiency gauge
# HELP vllm:spec_decode_num_accepted_tokens_total Number of accepted tokens.
# TYPE vllm:spec_decode_num_accepted_tokens_total counter
# HELP vllm:spec_decode_num_draft_tokens_total Number of draft tokens.
# TYPE vllm:spec_decode_num_draft_tokens_total counter
# HELP vllm:spec_decode_num_emitted_tokens_total Number of emitted tokens.
# TYPE vllm:spec_decode_num_emitted_tokens_total counter
# HELP vllm:avg_prompt_throughput_toks_per_s Average prefill throughput in tokens/s.
# TYPE vllm:avg_prompt_throughput_toks_per_s gauge
vllm:avg_prompt_throughput_toks_per_s{model_name="facebook/opt-125m"} 0.0
# HELP vllm:avg_generation_throughput_toks_per_s Average generation throughput in tokens/s.
# TYPE vllm:avg_generation_throughput_toks_per_s gauge
vllm:avg_generation_throughput_toks_per_s{model_name="facebook/opt-125m"} 19.393803125632054

Now, same process with vllm version 0.6.1.post2 gives these metrics:

INFO:     127.0.0.1:59768 - "GET /metrics HTTP/1.1" 200 OK
# HELP vllm:cache_config_info Information of the LLMEngine CacheConfig
# TYPE vllm:cache_config_info gauge
vllm:cache_config_info{block_size="16",cache_dtype="auto",cpu_offload_gb="0",enable_prefix_caching="False",gpu_memory_utilization="0.9",num_cpu_blocks="7281",num_gpu_blocks="76199",num_gpu_blocks_override="None",sliding_window="None",swap_space_bytes="4294967296"} 1.0
# HELP vllm:num_requests_running Number of requests currently running on GPU.
# TYPE vllm:num_requests_running gauge
vllm:num_requests_running{model_name="facebook/opt-125m"} 0.0
# HELP vllm:num_requests_swapped Number of requests swapped to CPU.
# TYPE vllm:num_requests_swapped gauge
vllm:num_requests_swapped{model_name="facebook/opt-125m"} 0.0
# HELP vllm:num_requests_waiting Number of requests waiting to be processed.
# TYPE vllm:num_requests_waiting gauge
vllm:num_requests_waiting{model_name="facebook/opt-125m"} 0.0
# HELP vllm:gpu_cache_usage_perc GPU KV-cache usage. 1 means 100 percent usage.
# TYPE vllm:gpu_cache_usage_perc gauge
vllm:gpu_cache_usage_perc{model_name="facebook/opt-125m"} 0.0
# HELP vllm:cpu_cache_usage_perc CPU KV-cache usage. 1 means 100 percent usage.
# TYPE vllm:cpu_cache_usage_perc gauge
vllm:cpu_cache_usage_perc{model_name="facebook/opt-125m"} 0.0
# HELP vllm:cpu_prefix_cache_hit_rate CPU prefix cache block hit rate.
# TYPE vllm:cpu_prefix_cache_hit_rate gauge
vllm:cpu_prefix_cache_hit_rate{model_name="facebook/opt-125m"} -1.0
# HELP vllm:gpu_prefix_cache_hit_rate GPU prefix cache block hit rate.
# TYPE vllm:gpu_prefix_cache_hit_rate gauge
vllm:gpu_prefix_cache_hit_rate{model_name="facebook/opt-125m"} -1.0
# HELP vllm:avg_prompt_throughput_toks_per_s Average prefill throughput in tokens/s.
# TYPE vllm:avg_prompt_throughput_toks_per_s gauge
vllm:avg_prompt_throughput_toks_per_s{model_name="facebook/opt-125m"} 0.49972806469562975
# HELP vllm:avg_generation_throughput_toks_per_s Average generation throughput in tokens/s.
# TYPE vllm:avg_generation_throughput_toks_per_s gauge
vllm:avg_generation_throughput_toks_per_s{model_name="facebook/opt-125m"} 9.994561293912595
# HELP vllm:num_preemptions_total Cumulative number of preemption from the engine.
# TYPE vllm:num_preemptions_total counter
vllm:num_preemptions_total{model_name="facebook/opt-125m"} 0.0
# HELP vllm:prompt_tokens_total Number of prefill tokens processed.
# TYPE vllm:prompt_tokens_total counter
vllm:prompt_tokens_total{model_name="facebook/opt-125m"} 5.0
# HELP vllm:generation_tokens_total Number of generation tokens processed.
# TYPE vllm:generation_tokens_total counter
vllm:generation_tokens_total{model_name="facebook/opt-125m"} 100.0
# HELP vllm:request_success_total Count of successfully processed requests.
# TYPE vllm:request_success_total counter
vllm:request_success_total{finished_reason="length",model_name="facebook/opt-125m"} 1.0
# HELP vllm:time_to_first_token_seconds Histogram of time to first token in seconds.
# TYPE vllm:time_to_first_token_seconds histogram
**_vllm:time_to_first_token_seconds_sum{model_name="facebook/opt-125m"} 0.034735918045043945_**
vllm:time_to_first_token_seconds_bucket{le="0.001",model_name="facebook/opt-125m"} 0.0
vllm:time_to_first_token_seconds_bucket{le="0.005",model_name="facebook/opt-125m"} 0.0
vllm:time_to_first_token_seconds_bucket{le="0.01",model_name="facebook/opt-125m"} 0.0
vllm:time_to_first_token_seconds_bucket{le="0.02",model_name="facebook/opt-125m"} 0.0
vllm:time_to_first_token_seconds_bucket{le="0.04",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.06",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.08",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.1",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.25",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.5",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.75",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="2.5",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="7.5",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_count{model_name="facebook/opt-125m"} 1.0
# HELP vllm:time_per_output_token_seconds Histogram of time per output token in seconds.
# TYPE vllm:time_per_output_token_seconds histogram
vllm:time_per_output_token_seconds_sum{model_name="facebook/opt-125m"} 0.2741813659667969
vllm:time_per_output_token_seconds_bucket{le="0.01",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.025",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.05",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.075",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.1",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.15",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.2",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.3",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.4",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.5",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.75",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="1.0",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="2.5",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="+Inf",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_count{model_name="facebook/opt-125m"} 99.0
# HELP vllm:e2e_request_latency_seconds Histogram of end to end request latency in seconds.
# TYPE vllm:e2e_request_latency_seconds histogram
vllm:e2e_request_latency_seconds_sum{model_name="facebook/opt-125m"} 0.3089172840118408
vllm:e2e_request_latency_seconds_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="2.5",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="15.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="30.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="40.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="50.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="60.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_count{model_name="facebook/opt-125m"} 1.0
# HELP vllm:request_prompt_tokens Number of prefill tokens processed.
# TYPE vllm:request_prompt_tokens histogram
vllm:request_prompt_tokens_sum{model_name="facebook/opt-125m"} 5.0
vllm:request_prompt_tokens_bucket{le="1.0",model_name="facebook/opt-125m"} 0.0
vllm:request_prompt_tokens_bucket{le="2.0",model_name="facebook/opt-125m"} 0.0
vllm:request_prompt_tokens_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="50.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="100.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="200.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="500.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="1000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="2000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_count{model_name="facebook/opt-125m"} 1.0
# HELP vllm:request_generation_tokens Number of generation tokens processed.
# TYPE vllm:request_generation_tokens histogram
vllm:request_generation_tokens_sum{model_name="facebook/opt-125m"} 100.0
vllm:request_generation_tokens_bucket{le="1.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="2.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="5.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="10.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="20.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="50.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="100.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="200.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="500.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="1000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="2000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_count{model_name="facebook/opt-125m"} 1.0
# HELP vllm:request_params_n Histogram of the n request parameter.
# TYPE vllm:request_params_n histogram
vllm:request_params_n_sum{model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="2.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_count{model_name="facebook/opt-125m"} 1.0
# HELP vllm:request_params_best_of Histogram of the best_of request parameter.
# TYPE vllm:request_params_best_of histogram
vllm:request_params_best_of_sum{model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="2.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_count{model_name="facebook/opt-125m"} 1.0

The slowdown looks quite significant, at least judging by time_to_first_token_seconds_sum: 9.322166442871094e-05 s vs 0.034735918045043945 s, roughly a 370x increase. Any recommendations on this?

[Edit 1] To send a request I used curl:

curl -X POST http://localhost:8000/v1/completions \
     -H "Content-Type: application/json" \
     -d '{
           "model": "facebook/opt-125m",
           "prompt": "Your input text here",
           "max_tokens": 100
         }'

and to query metrics:

curl http://127.0.0.1:8000/metrics
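For scripted comparisons across versions, it can be handy to pull a single series out of the /metrics payload programmatically. A minimal sketch in Python (the `metric_value` helper and its regex-based parsing are my own illustration, not part of vLLM; a proper Prometheus client parser would be more robust):

```python
import re

def metric_value(metrics_text: str, name: str, model: str) -> float:
    """Pull one metric value for a given model_name label out of
    Prometheus text-format output (as served at /metrics)."""
    pattern = re.compile(
        rf'^{re.escape(name)}\{{[^}}]*model_name="{re.escape(model)}"[^}}]*\}}\s+(\S+)\s*$',
        re.MULTILINE,
    )
    m = pattern.search(metrics_text)
    if m is None:
        raise KeyError(f"{name} not found for model_name={model!r}")
    return float(m.group(1))

# Works against a saved dump just as well as a live response body:
sample = 'vllm:time_to_first_token_seconds_sum{model_name="facebook/opt-125m"} 9.322166442871094e-05\n'
ttft_sum = metric_value(sample, "vllm:time_to_first_token_seconds_sum", "facebook/opt-125m")
print(ttft_sum)  # 9.322166442871094e-05
```

In practice the `sample` string would be the body of `requests.get("http://127.0.0.1:8000/metrics").text`.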

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
oandreeva-nv added the bug (Something isn't working) label on Sep 25, 2024
Contributor

Imss27 commented Sep 29, 2024

Hey @oandreeva-nv! This looks strange.
One possible reason is a change in how these metrics are computed between versions; alternatively, the previous version simply had a bug.
For your v0.5.3.post1 vLLM,

TTFT_sum + TPOT_sum = 9.322166442871094e-05 + 0.007464408874511719 ≈ 0.0075576 sec
E2E_latency_sum = 0.5373260974884033 sec > the value above

However, for your 0.6.1.post2 vLLM,

TTFT_sum + TPOT_sum = 0.034735918045043945 + 0.2741813659667969 ≈ 0.3089173 sec
E2E_latency_sum = 0.3089172840118408 sec ≈ the value above
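The consistency check above can be reproduced directly from the quoted sums (values copied from the two metric dumps; the variable names are just for illustration):

```python
# Sums reported by v0.5.3.post1
ttft_old, tpot_old, e2e_old = 9.322166442871094e-05, 0.007464408874511719, 0.5373260974884033
# Sums reported by v0.6.1.post2
ttft_new, tpot_new, e2e_new = 0.034735918045043945, 0.2741813659667969, 0.3089172840118408

# Share of the end-to-end latency accounted for by TTFT + TPOT:
old_share = (ttft_old + tpot_old) / e2e_old  # ~0.014: most of the latency is unaccounted for
new_share = (ttft_new + tpot_new) / e2e_new  # ~1.0: the sums are internally consistent

print(f"old: {old_share:.4f}, new: {new_share:.6f}")
```

In the old version TTFT + TPOT covers only about 1.4% of the end-to-end latency, while in the new version it covers essentially all of it, which supports the reading that the old TTFT value was undercounted rather than the new one being a regression.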

I would suggest trying again with vLLM v0.5.3.post1 to see whether it is reproducible. IMO the newer/latest versions report stricter, more accurate metric data; the old version probably contained a bug.

Hope it helps.😎

@oandreeva-nv
Author

Thanks @Imss27 ! Yes, it seems like it was 0.5.3.post1 issue: #6686

Contributor

elfiegg commented Sep 30, 2024

We bisected the codebase and found that version 0.5.3 had an inaccurate metric calculation; that's why, after the bug was fixed, you observed a significant increase in the reported metric time.

This is the "culprit":

40468b13faa1ebde366e7002c5752b59e1368d10 is the first bad commit
commit 40468b13faa1ebde366e7002c5752b59e1368d10
Author: Allen.Dou <allen.dou@hotmail.com>
Date:   Wed Jul 24 23:58:42 2024 +0800

    [Bugfix] Miscalculated latency lead to time_to_first_token_seconds inaccurate. (#6686)

 vllm/engine/llm_engine.py              | 3 ++-
 vllm/spec_decode/spec_decode_worker.py | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

There also seems to be a related bug; see #6337.

We also confirmed that the results from version 0.4.2 were similar to those after version 0.5.4. Could you please help verify if this behavior aligns with your expectations? @oandreeva-nv

{"id":"cmpl-af7ae7fc22d44b4c85ab9d12fdd8a5f2","object":"text_completion","created":1727724243,"model":"facebook/opt-125m","choices":[{"index":0,"text":" is intended for reference purposes only and will not be endorsed by or endorsed by Cadillac Keyboard Group Office. Cadillac Keyboard Group Office will use your personal information for making and understanding Rights Motion signals reasonably that you agree with the Privacy Policy or you will not be able to use this data for any other purpose. See the Privacy Policy and Terms of Use for more restrictions on what personal information Cadillac Keyboard Group Office may collect when using this site and provide your information about the decisions you may not otherwise have seenunderscored","logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":5,"total_tokens":105,"completion_tokens":100}}
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 10627.0
python_gc_objects_collected_total{generation="1"} 5824.0
python_gc_objects_collected_total{generation="2"} 617.0
# HELP python_gc_objects_uncollectable_total Uncollectable objects found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 2205.0
python_gc_collections_total{generation="1"} 199.0
python_gc_collections_total{generation="2"} 82.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="10",patchlevel="12",version="3.10.12"} 1.0
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.10304919552e+011
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 9.007546368e+09
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.72772412262e+09
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 31.57
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 90.0
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP vllm:cache_config_info information of cache_config
# TYPE vllm:cache_config_info gauge
vllm:cache_config_info{block_size="16",cache_dtype="auto",enable_prefix_caching="False",gpu_memory_utilization="0.9",num_cpu_blocks="7281",num_gpu_blocks="127618",num_gpu_blocks_override="None",sliding_window="None",swap_space_bytes="4294967296"} 1.0
# HELP vllm:num_requests_running Number of requests currently running on GPU.
# TYPE vllm:num_requests_running gauge
vllm:num_requests_running{model_name="facebook/opt-125m"} 0.0
# HELP vllm:num_requests_waiting Number of requests waiting to be processed.
# TYPE vllm:num_requests_waiting gauge
vllm:num_requests_waiting{model_name="facebook/opt-125m"} 0.0
# HELP vllm:num_requests_swapped Number of requests swapped to CPU.
# TYPE vllm:num_requests_swapped gauge
vllm:num_requests_swapped{model_name="facebook/opt-125m"} 0.0
# HELP vllm:gpu_cache_usage_perc GPU KV-cache usage. 1 means 100 percent usage.
# TYPE vllm:gpu_cache_usage_perc gauge
vllm:gpu_cache_usage_perc{model_name="facebook/opt-125m"} 0.0
# HELP vllm:cpu_cache_usage_perc CPU KV-cache usage. 1 means 100 percent usage.
# TYPE vllm:cpu_cache_usage_perc gauge
vllm:cpu_cache_usage_perc{model_name="facebook/opt-125m"} 0.0
# HELP vllm:num_preemptions_total Cumulative number of preemption from the engine.
# TYPE vllm:num_preemptions_total counter
vllm:num_preemptions_total{model_name="facebook/opt-125m"} 0.0
# HELP vllm:prompt_tokens_total Number of prefill tokens processed.
# TYPE vllm:prompt_tokens_total counter
vllm:prompt_tokens_total{model_name="facebook/opt-125m"} 5.0
# HELP vllm:generation_tokens_total Number of generation tokens processed.
# TYPE vllm:generation_tokens_total counter
vllm:generation_tokens_total{model_name="facebook/opt-125m"} 100.0
# HELP vllm:time_to_first_token_seconds Histogram of time to first token in seconds.
# TYPE vllm:time_to_first_token_seconds histogram
vllm:time_to_first_token_seconds_bucket{le="0.001",model_name="facebook/opt-125m"} 0.0
vllm:time_to_first_token_seconds_bucket{le="0.005",model_name="facebook/opt-125m"} 0.0
vllm:time_to_first_token_seconds_bucket{le="0.01",model_name="facebook/opt-125m"} 0.0
vllm:time_to_first_token_seconds_bucket{le="0.02",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.04",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.06",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.08",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.1",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.25",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.5",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="0.75",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="2.5",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="7.5",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_count{model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_sum{model_name="facebook/opt-125m"} 0.010315418243408203
# HELP vllm:time_per_output_token_seconds Histogram of time per output token in seconds.
# TYPE vllm:time_per_output_token_seconds histogram
vllm:time_per_output_token_seconds_bucket{le="0.01",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.025",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.05",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.075",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.1",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.15",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.2",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.3",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.4",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.5",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="0.75",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="1.0",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="2.5",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_bucket{le="+Inf",model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_count{model_name="facebook/opt-125m"} 99.0
vllm:time_per_output_token_seconds_sum{model_name="facebook/opt-125m"} 0.20091915130615234
# HELP vllm:e2e_request_latency_seconds Histogram of end to end request latency in seconds.
# TYPE vllm:e2e_request_latency_seconds histogram
vllm:e2e_request_latency_seconds_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="2.5",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="15.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="30.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="40.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="50.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="60.0",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_count{model_name="facebook/opt-125m"} 1.0
vllm:e2e_request_latency_seconds_sum{model_name="facebook/opt-125m"} 0.21123456954956055
# HELP vllm:request_prompt_tokens Number of prefill tokens processed.
# TYPE vllm:request_prompt_tokens histogram
vllm:request_prompt_tokens_bucket{le="1.0",model_name="facebook/opt-125m"} 0.0
vllm:request_prompt_tokens_bucket{le="2.0",model_name="facebook/opt-125m"} 0.0
vllm:request_prompt_tokens_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="50.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="100.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="200.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="500.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="1000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="2000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_count{model_name="facebook/opt-125m"} 1.0
vllm:request_prompt_tokens_sum{model_name="facebook/opt-125m"} 5.0
# HELP vllm:request_generation_tokens Number of generation tokens processed.
# TYPE vllm:request_generation_tokens histogram
vllm:request_generation_tokens_bucket{le="1.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="2.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="5.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="10.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="20.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="50.0",model_name="facebook/opt-125m"} 0.0
vllm:request_generation_tokens_bucket{le="100.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="200.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="500.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="1000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="2000.0",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_count{model_name="facebook/opt-125m"} 1.0
vllm:request_generation_tokens_sum{model_name="facebook/opt-125m"} 100.0
# HELP vllm:request_params_best_of Histogram of the best_of request parameter.
# TYPE vllm:request_params_best_of histogram
vllm:request_params_best_of_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="2.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_count{model_name="facebook/opt-125m"} 1.0
vllm:request_params_best_of_sum{model_name="facebook/opt-125m"} 1.0
# HELP vllm:request_params_n Histogram of the n request parameter.
# TYPE vllm:request_params_n histogram
vllm:request_params_n_bucket{le="1.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="2.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="5.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="10.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="20.0",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_bucket{le="+Inf",model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_count{model_name="facebook/opt-125m"} 1.0
vllm:request_params_n_sum{model_name="facebook/opt-125m"} 1.0
# HELP vllm:request_success_total Count of successfully processed requests.
# TYPE vllm:request_success_total counter
vllm:request_success_total{finished_reason="length",model_name="facebook/opt-125m"} 1.0
# HELP vllm:avg_prompt_throughput_toks_per_s Average prefill throughput in tokens/s.
# TYPE vllm:avg_prompt_throughput_toks_per_s gauge
vllm:avg_prompt_throughput_toks_per_s{model_name="facebook/opt-125m"} 0.5493724274429073
# HELP vllm:avg_generation_throughput_toks_per_s Average generation throughput in tokens/s.
# TYPE vllm:avg_generation_throughput_toks_per_s gauge
vllm:avg_generation_throughput_toks_per_s{model_name="facebook/opt-125m"} 0.10987448548858145
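For comparing versions, the per-request mean TTFT can be derived from the histogram above as `_sum / _count`. The following is a minimal sketch of that calculation; the `mean_ttft` helper and its regex-based parsing are illustrative, not a vLLM API:

```python
import re

def mean_ttft(metrics_text: str, model: str) -> float:
    """Return vllm:time_to_first_token_seconds _sum / _count for a model."""
    def value(suffix: str) -> float:
        # Match e.g.: vllm:time_to_first_token_seconds_sum{model_name="..."} 0.0103
        pattern = (
            rf'vllm:time_to_first_token_seconds_{suffix}'
            rf'\{{model_name="{re.escape(model)}"\}} ([0-9.eE+-]+)'
        )
        return float(re.search(pattern, metrics_text).group(1))

    return value("sum") / value("count")

# Lines taken from the /metrics output above:
sample = '''
vllm:time_to_first_token_seconds_count{model_name="facebook/opt-125m"} 1.0
vllm:time_to_first_token_seconds_sum{model_name="facebook/opt-125m"} 0.010315418243408203
'''
print(mean_ttft(sample, "facebook/opt-125m"))  # roughly 0.0103 s per request
```

Running this against the metrics endpoint of each vLLM version under the same request load gives a like-for-like TTFT comparison.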

@oandreeva-nv (Author)

Thanks @elfiegg for finding this. Yes, I believe I'm all set now, so I've closed the issue.
