# triton-inference-server

Here are 11 public repositories matching this topic...

Deep Learning Deployment Framework: supports tf/torch/trt/trtllm/vllm and other NN frameworks, as well as dynamic batching and streaming modes. It is dual-language compatible with Python and C++, offering scalability, extensibility, and high performance. It helps users quickly deploy models and serve them through HTTP/RPC interfaces (a minimal client sketch follows below).

  • Updated Mar 25, 2025
  • C++
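To make the HTTP serving interface concrete, here is a minimal sketch of calling a Triton-style endpoint with NVIDIA's `tritonclient` Python package. The model name (`my_model`), tensor names (`INPUT__0`, `OUTPUT__0`), and shape are placeholder assumptions and must match the deployed model's configuration; this illustrates the general request flow, not this repository's specific API.

```python
# pip install "tritonclient[http]" numpy
import numpy as np
import tritonclient.http as httpclient

# URL, model name, tensor names, and shape are assumptions for illustration;
# adjust them to the model actually deployed on the server.
client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("INPUT__0", list(batch.shape), "FP32")]
inputs[0].set_data_from_numpy(batch)
outputs = [httpclient.InferRequestedOutput("OUTPUT__0")]

result = client.infer(model_name="my_model", inputs=inputs, outputs=outputs)
print(result.as_numpy("OUTPUT__0").shape)
```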

A high-performance multi-object tracking system utilizing a quantized YOLOv11 model deployed on the Triton Inference Server, integrated with a CUDA-accelerated particle filter for robust tracking of multiple objects (a generic particle-filter sketch follows below).

  • Updated Dec 30, 2024
  • C++
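For readers unfamiliar with the particle-filter half of this pipeline, the following is a minimal, generic bootstrap particle filter in NumPy, assuming a 2D constant-velocity state and Gaussian noise. It is an illustrative sketch of the predict/weight/resample cycle, not the repository's CUDA-accelerated implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # number of particles

# State per particle: [x, y, vx, vy] (assumed constant-velocity model).
particles = rng.normal(0.0, 1.0, size=(N, 4))
weights = np.full(N, 1.0 / N)

def step(particles, weights, measurement, dt=1.0, process_std=0.5, meas_std=2.0):
    """One predict-update-resample cycle of a bootstrap particle filter."""
    # Predict: advance positions by velocity, then inject process noise.
    particles[:, :2] += particles[:, 2:] * dt
    particles = particles + rng.normal(0.0, process_std, size=particles.shape)

    # Update: weight each particle by the Gaussian likelihood of the
    # observed (x, y) measurement (e.g., a detector's box center).
    d2 = np.sum((particles[:, :2] - measurement) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()

    # Resample: draw particles in proportion to their weights
    # (multinomial for brevity; systematic resampling reduces variance).
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

# Example: track a target drifting to the right.
for t in range(5):
    z = np.array([float(t), 0.0])  # stand-in for a detection center
    particles, weights = step(particles, weights, z)
    print("estimate:", particles[:, :2].mean(axis=0))
```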
