
Inference after converting from PyTorch to Onnx to TensorRT #614

Closed
kartik1395 opened this issue Feb 16, 2021 · 11 comments · Fixed by #758
Labels: help wanted (Extra attention is needed), TensorRT

Comments

@kartik1395

I've converted the PyTorch TSN model to ONNX and I'm planning to convert it to TensorRT. I wanted to know how I can make a prediction for a video input using the TensorRT model. Do I need to make any changes in the Prediction.py file?
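[Editor's note: for context, TSN makes video-level predictions by sampling one frame from each of several equal segments of the video. A minimal sketch of that test-time sampling, using only numpy (segment count and center-sampling strategy are typical defaults, not taken from this thread):]

```python
import numpy as np

def sample_segment_indices(num_frames: int, num_segments: int = 8) -> np.ndarray:
    """Pick the center frame of each of `num_segments` equal chunks of the
    video, which is how TSN typically samples frames at test time."""
    ticks = np.linspace(0, num_frames, num_segments + 1)
    return ((ticks[:-1] + ticks[1:]) / 2).astype(int)

# e.g. an 80-frame video sampled into 8 segments
print(sample_segment_indices(80, 8))  # -> [ 5 15 25 35 45 55 65 75]
```

The frames at these indices are what get stacked into the batch that is fed to the exported model.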

@innerlee
Contributor

Sorry which Prediction.py?

@kartik1395
Author

> Sorry which Prediction.py?

tools/prediction.py

@kartik1395
Author

> Sorry which Prediction.py?

Sorry, I used https://github.com/open-mmlab/mmaction2/blob/master/docs/getting_started.md#high-level-apis-for-testing-a-video-and-rawframes to make my own prediction.py.

Nonetheless, could you guide me on how to make predictions with TensorRT using a similar high-level API?

@innerlee innerlee added the enhancement New feature or request label Feb 16, 2021
@innerlee
Contributor

@kartik1395
Author

I see this is for a single image input, if I'm not mistaken. How can it be replicated for a video file? Sorry, I'm a bit confused.

@innerlee
Contributor

Ignore the variable name one_img, it was copied from elsewhere. Input shape is determined by input_shape.
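[Editor's note: to illustrate the point about `input_shape`, here is one plausible dummy input for a TSN-style video recognizer. The exact layout (segments folded into the channel axis vs. a separate axis) depends on your model config and the shape flag used at export time; treat the numbers below as assumptions, not the thread's actual values:]

```python
import numpy as np

# Hypothetical shape: batch of 1, 8 segments x 3 RGB channels, 224x224 crops.
num_segments, height, width = 8, 224, 224
dummy_clip = np.random.randn(1, num_segments * 3, height, width).astype(np.float32)
print(dummy_clip.shape)  # (1, 24, 224, 224)
```

Whatever shape the model was exported with is the shape the TensorRT engine will expect at inference time.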

@kartik1395
Author

I understand. I've been using the high-level API for inference, which takes a video file name as a parameter and prepares the data via the inference.py file: https://github.com/open-mmlab/mmaction2/blob/master/mmaction/apis/inference.py.

I want to know how I can similarly pass the whole video through the TensorRT model.
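[Editor's note: the pipeline being asked for can be sketched end to end: decode frames, sample a clip, batch it, run it through the engine. The TensorRT execution itself needs a GPU, so in this sketch the engine is replaced by a plain callable (`infer_fn`); `predict_video` and `fake_engine` are illustrative names, not part of mmaction2:]

```python
import numpy as np

def predict_video(frames: np.ndarray, infer_fn, num_segments: int = 8) -> int:
    """Sample `num_segments` frames from `frames` (T, H, W, C), batch them as
    (1, S, C, H, W), and run them through `infer_fn` (e.g. a TensorRT
    execution context wrapped in a Python callable). Returns the argmax class."""
    t = frames.shape[0]
    ticks = np.linspace(0, t, num_segments + 1)
    idx = ((ticks[:-1] + ticks[1:]) / 2).astype(int)
    clip = frames[idx].transpose(0, 3, 1, 2)[None].astype(np.float32)
    scores = infer_fn(clip)  # expected shape: (1, num_classes)
    return int(np.argmax(scores, axis=1)[0])

# Usage with a stand-in for the real engine:
fake_engine = lambda clip: np.eye(400)[[3]]  # stub that always scores class 3 highest
video = np.zeros((64, 224, 224, 3), dtype=np.uint8)
print(predict_video(video, fake_engine))  # 3
```

Normalization and cropping (which the real mmaction2 pipeline also applies) are omitted here for brevity.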

@innerlee
Contributor

innerlee commented Feb 16, 2021

@dreamerlin maybe we can add support for this in inference.py? Like, adding an inference_recognizer_trt() function?

edit: trt

@innerlee innerlee added help wanted Extra attention is needed TensorRT and removed enhancement New feature or request labels Feb 17, 2021
@kartik1395
Author

> @dreamerlin maybe we can add support for this in inference.py? Like, adding an inference_recognizer_trt() function?
>
> edit: trt

That would be really helpful, thank you!

@irvingzhang0512
Contributor

I plan to implement this in two months, along with inference_recognizer_onnx. I may also implement a tool to test inference speed.
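[Editor's note: the speed-test tool mentioned above usually boils down to a warmup-then-average timing loop. A minimal sketch with a stub in place of the engine (`benchmark` is an illustrative name; a real TensorRT measurement would also need CUDA stream synchronization around the timestamps):]

```python
import time
import numpy as np

def benchmark(infer_fn, dummy_input, warmup: int = 10, runs: int = 50) -> float:
    """Rough latency measurement: warm up, then return the average
    wall-clock seconds per forward pass."""
    for _ in range(warmup):
        infer_fn(dummy_input)
    start = time.perf_counter()
    for _ in range(runs):
        infer_fn(dummy_input)
    return (time.perf_counter() - start) / runs

dummy = np.zeros((1, 8, 3, 224, 224), dtype=np.float32)
avg = benchmark(lambda x: x.sum(), dummy)  # stub in place of the real engine
print(f"{avg * 1e3:.3f} ms per inference")
```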

@dreamerlin
Collaborator

Ref: open-mmlab/mmpretrain#153
