TensorRT-LLM User Guide

What is TensorRT-LLM

TensorRT-LLM (TRT-LLM) is an open-source library designed to accelerate and optimize the inference performance of large language models (LLMs) on NVIDIA GPUs. TRT-LLM offers users an easy-to-use Python API to build TensorRT engines for LLMs, incorporating state-of-the-art optimizations to ensure efficient inference on NVIDIA GPUs.
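
For a concrete feel for that API, here is a minimal sketch based on the high-level LLM API shipped in recent TRT-LLM releases; the model name is only an example, and argument names may differ between versions:

```python
# Minimal sketch of the TRT-LLM high-level Python API (recent releases).
# The model name below is only an example; any supported checkpoint works.
from tensorrt_llm import LLM, SamplingParams

def main():
    # Builds (or loads) a TensorRT engine for the model under the hood.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Run inference on the built engine and print the generations.
    for output in llm.generate(prompts, sampling_params):
        print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")

if __name__ == "__main__":
    main()
```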

How to run TRT-LLM models with Triton Server via TensorRT-LLM backend

The TensorRT-LLM Backend lets you serve TensorRT-LLM models with Triton Inference Server. Check out the Getting Started section in the TensorRT-LLM Backend repo to learn how to use the NGC Triton TRT-LLM container to prepare engines for your LLM models and serve them with Triton.
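
Once the server is running, any Triton client can send requests to it. The sketch below uses the `tritonclient` HTTP client and assumes the default `ensemble` model and the `text_input`/`max_tokens`/`text_output` tensor names from the backend's example model repository; adjust them to match your own configuration:

```python
# Hedged sketch: query a TRT-LLM model served by Triton via the HTTP client.
# Assumes the default "ensemble" model and tensor names from the backend's
# example model repository; adjust to match your own config.pbtxt files.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

text_input = httpclient.InferInput("text_input", [1, 1], "BYTES")
text_input.set_data_from_numpy(np.array([["What is machine learning?"]], dtype=object))

max_tokens = httpclient.InferInput("max_tokens", [1, 1], "INT32")
max_tokens.set_data_from_numpy(np.array([[64]], dtype=np.int32))

result = client.infer(model_name="ensemble", inputs=[text_input, max_tokens])
print(result.as_numpy("text_output"))
```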

How to use your custom TRT-LLM model

All the supported models can be found in the examples folder in the TRT-LLM repo. Follow the examples to convert your models to TensorRT engines.

After the engine is built, prepare the model repository for Triton, and modify the model configuration.

Only the mandatory parameters need to be set in the model config file. Feel free to modify the optional parameters as needed. To learn more about the parameters, model inputs, and outputs, see the model config documentation.
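
As one illustration, the backend repo provides a `tools/fill_template.py` helper for filling the placeholders in the config.pbtxt templates. The sketch below wraps it with `subprocess`; the parameter names and paths shown are an illustrative subset and should be checked against the model config documentation for your version:

```python
# Hedged sketch: fill the config.pbtxt template for the tensorrt_llm model.
# Parameter names and paths are illustrative; check the model config
# documentation for the full list of mandatory parameters in your version.
import subprocess

MODEL_REPO = "triton_model_repo"        # your Triton model repository
ENGINE_DIR = "/engines/llama/1-gpu"     # where the TensorRT engine was built

params = {
    "triton_backend": "tensorrtllm",
    "engine_dir": ENGINE_DIR,
    "triton_max_batch_size": "64",
    "decoupled_mode": "True",           # needed for streaming responses
    "batching_strategy": "inflight_fused_batching",
}

subprocess.run(
    [
        "python3", "tools/fill_template.py", "-i",
        f"{MODEL_REPO}/tensorrt_llm/config.pbtxt",
        ",".join(f"{k}:{v}" for k, v in params.items()),
    ],
    check=True,
)
```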

Advanced Configuration Options and Deployment Strategies

Explore advanced configuration options and deployment strategies to optimize and run Triton with your TRT-LLM models effectively:

  • Model Deployment: Techniques for efficiently deploying and managing your models in various environments.
  • Multi-Instance GPU (MIG) Support: Run Triton and TRT-LLM models with MIG to optimize GPU resource management.
  • Scheduling: Configure scheduling policies to control how requests are managed and executed.
  • Key-Value Cache: Utilize KV cache and KV cache reuse to optimize memory usage and improve performance.
  • Decoding: Advanced methods for generating text, including top-k, top-p, top-k top-p, beam search, Medusa, and speculative decoding (see the request sketch after this list).
  • Chunked Context: Splitting the context into several chunks and batching them during the generation phase to increase overall throughput.
  • Quantization: Apply quantization techniques to reduce model size and enhance inference speed.
  • LoRA (Low-Rank Adaptation): Use LoRA for efficient model fine-tuning and adaptation.
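
To make the decoding options concrete, here is a hedged sketch that selects sampling parameters per request through Triton's generate endpoint; the `ensemble` model name and the JSON field names are assumptions taken from the backend's example model repository:

```python
# Hedged sketch: select decoding options per request through Triton's
# generate endpoint. The "ensemble" model name and the field names below
# mirror the backend's example model repository and may differ in yours.
import json
import urllib.request

payload = {
    "text_input": "Summarize what beam search does.",
    "max_tokens": 64,
    # Decoding knobs (assumed field names from the example ensemble config):
    "top_k": 40,
    "top_p": 0.9,
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:8000/v2/models/ensemble/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["text_output"])
```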

Tutorials

Make sure to check out the tutorials repo to see more guides on serving popular LLM models with Triton Server and TensorRT-LLM, as well as deploying them on Kubernetes.

Benchmark

GenAI-Perf is a command line tool for measuring the throughput and latency of LLMs served by Triton Inference Server. Check out the Quick Start to learn how to use GenAI-Perf to benchmark your LLM models.
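
As a rough illustration, the sketch below drives GenAI-Perf from Python with `subprocess`; the flag names are assumptions based on one GenAI-Perf release, and the Quick Start remains the authoritative reference:

```python
# Hedged sketch: drive GenAI-Perf from Python via subprocess. Flag names
# are assumptions based on one GenAI-Perf release (the Quick Start is
# authoritative); "ensemble" is the example model name used above.
import subprocess

subprocess.run(
    [
        "genai-perf", "profile",
        "-m", "ensemble",
        "--service-kind", "triton",
        "--backend", "tensorrtllm",
        "--concurrency", "1",
        "--synthetic-input-tokens-mean", "200",
        "--output-tokens-mean", "100",
    ],
    check=True,
)
```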

Performance Best Practices

Check out the Performance Best Practices guide to learn how to optimize your TensorRT-LLM models for better performance.

Metrics

Triton Server provides metrics indicating GPU and request statistics. See the Triton Metrics section in the TensorRT-LLM Backend repo to learn how to query the Triton metrics endpoint to obtain TRT-LLM statistics.
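
For example, the metrics endpoint can be scraped with a plain HTTP GET; the sketch below assumes Triton's default metrics port (8002) and filters on a `nv_trt_llm` metric-name prefix, which may differ between backend versions:

```python
# Hedged sketch: scrape Triton's Prometheus metrics endpoint and keep the
# TRT-LLM backend statistics. Port 8002 is Triton's default metrics port;
# the metric-name prefix below is an assumption and may differ.
import urllib.request

with urllib.request.urlopen("http://localhost:8002/metrics") as resp:
    text = resp.read().decode("utf-8")

for line in text.splitlines():
    # TRT-LLM backend metrics are exported alongside the standard
    # nv_inference_* metrics; filter by prefix as needed.
    if line.startswith("nv_trt_llm"):
        print(line)
```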

Ask questions or report issues

Can't find what you're looking for, or have a question or issue? Feel free to ask questions or report issues on the GitHub issues page.