Inference after converting from PyTorch to ONNX to TensorRT #614
Comments
Sorry, which one?
Sorry, I used https://github.com/open-mmlab/mmaction2/blob/master/docs/getting_started.md#high-level-apis-for-testing-a-video-and-rawframes to make my own script. Nonetheless, could you guide me on how to make predictions with TensorRT using a similar high-level API?
I see this is for a single-image input, if I'm not mistaken. How can it be replicated for a video file? Sorry, I'm a bit confused.
Ignore the variable name.
I understand. I've been using the high-level API for inference, which takes the video file name as a parameter and prepares the data internally before making a prediction. I want to know how I can similarly pass a whole video through the TensorRT model.
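Not from the thread itself, but as a rough sketch of what "passing the whole video" typically means for a TSN-style model: sparsely sample one frame per segment, normalize, and stack the frames into the NCHW batch the exported ONNX/TensorRT engine expects. The function names, crop size, and mean/std values below are illustrative assumptions (the mean/std are the common ImageNet values in 0–255 scale), not the mmaction2 API:

```python
import numpy as np

def sample_segment_indices(num_frames: int, num_segments: int = 8):
    # Center frame of each of num_segments equal chunks (TSN-style sparse sampling).
    seg_len = num_frames / num_segments
    return [int(seg_len * i + seg_len / 2) for i in range(num_segments)]

def frames_to_batch(frames: np.ndarray,
                    mean=(123.675, 116.28, 103.53),
                    std=(58.395, 57.12, 57.375)) -> np.ndarray:
    # frames: (T, H, W, 3) uint8 RGB -> (T, 3, H, W) float32, normalized.
    x = frames.astype(np.float32)
    x = (x - np.asarray(mean, np.float32)) / np.asarray(std, np.float32)
    return np.ascontiguousarray(x.transpose(0, 3, 1, 2))

# Hypothetical usage: a 300-frame video, 8 segments, 224x224 frames
# (a real pipeline would also resize/center-crop each frame first).
video = np.zeros((300, 224, 224, 3), dtype=np.uint8)
idx = sample_segment_indices(len(video), num_segments=8)
batch = frames_to_batch(video[idx])
print(batch.shape)  # (8, 3, 224, 224)
```

The resulting array can then be fed to the engine as a single input; whether the segment dimension is the batch axis or a separate clip axis depends on how the model was exported.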
@dreamerlin, maybe we can add support for this in inference.py? Like adding a TensorRT-specific inference function. (edit: trt)
That would be really helpful, thank you!
I plan to implement this in two months, along with …
I've converted the PyTorch TSN model to ONNX and I'm planning to convert it to TensorRT. I wanted to know how I can make a prediction for a video input using the TensorRT model. Do I need to make any changes in the Prediction.py file?
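For completeness, here is a rough sketch (not the project's API) of what running the converted engine could look like with the TensorRT Python bindings. The engine path, single input/output binding layout, and output size are assumptions; the GPU-dependent imports are kept inside the function so the pure-numpy postprocessing helper can be used on its own:

```python
import numpy as np

def softmax_topk(logits: np.ndarray, k: int = 5):
    # Convert raw class scores to probabilities and return top-k (index, prob) pairs.
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    top = np.argsort(p)[::-1][:k]
    return [(int(i), float(p[i])) for i in top]

def run_trt_engine(engine_path: str, batch: np.ndarray) -> np.ndarray:
    # Sketch only: requires a CUDA GPU plus the tensorrt and pycuda packages,
    # and assumes the engine has exactly one input and one output binding.
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
    import pycuda.driver as cuda

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f:
        engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    inp = np.ascontiguousarray(batch.astype(np.float32))
    out = np.empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
    d_in = cuda.mem_alloc(inp.nbytes)
    d_out = cuda.mem_alloc(out.nbytes)
    cuda.memcpy_htod(d_in, inp)
    context.execute_v2(bindings=[int(d_in), int(d_out)])
    cuda.memcpy_dtoh(out, d_out)
    return out  # raw class scores; pass to softmax_topk for labels
```

With segment-level scores, TSN-style models usually average over segments before the softmax; mapping the top index back to a label string would need the dataset's label list.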