Real-time pose estimation accelerated with NVIDIA TensorRT
Updated Aug 12, 2022 - Python
Boosting DL service throughput 1.5-4x via ensemble pipeline serving with concurrent CUDA streams, using PyTorch/LibTorch frontends and TensorRT/CVCUDA backends
The real-time instance segmentation algorithm SparseInst running on TensorRT and ONNX
Jetson Nano setup without a monitor for a JetBot build. Includes installation of JupyterLab, ROS2 Dashing, Torch, torch2trt, ONNX, ONNXRuntime-GPU, and TensorFlow. JupyterLab does not require a Docker container.