
feat: Automatically generate QDP plugins #3370

Merged
merged 1 commit into from Feb 25, 2025
Conversation

bowang007
Collaborator

Description

This PR introduces a new feature that enables automatic plugin generation using the TensorRT QDP feature.

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@bowang007 bowang007 requested a review from narendasan January 29, 2025 23:35
@github-actions github-actions bot added component: conversion Issues re: Conversion stage component: api [Python] Issues re: Python API component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths labels Jan 29, 2025
@github-actions github-actions bot requested a review from peri044 January 29, 2025 23:36

github-actions[bot]

This comment was marked as outdated.


# Use the helper function to get the required signatures
args_input, kwargs_input, plugin_signature, plugin_impl_signature, register_func_annotation, impl_func_annotation = generate_signature(torch_op)
print(args_input)
Collaborator


Make this debug info

Collaborator Author


@narendasan when I use:

    _LOGGER.debug(f"Plugin registration function: \n{codegen_plugin}")

It won't print anything. How to resolve this?


@github-actions github-actions bot added the component: build system Issues re: Build system label Feb 7, 2025

Collaborator

@narendasan narendasan left a comment


  1. Figure out if QDP will hold the reference to the plugin for us
  2. Add test cases for:
     1. (Tensor, Tensor) -> (Tensor)
     2. (Tensor, int, float) -> (Tensor)
     3. (Tensor, Tensor) -> (Tensor, Tensor)

@narendasan
Collaborator

Rebase as well

@github-actions github-actions bot added the component: tests Issues re: Tests label Feb 20, 2025

capability_validator: Optional[Callable[[Node, CompilationSettings], bool]] = None,
priority: ConverterPriority = ConverterPriority.STANDARD,
supports_dynamic_shapes: bool = False,
):
Collaborator


Add a docstring as this is a user API
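One possible shape for that docstring, sketched as a stub. The function name `custom_op` and the leading parameters are guesses from context, since only the trailing parameters appear in the hunk above:

```python
from typing import Any, Callable, Optional

def custom_op(
    op_name: str,
    capability_validator: Optional[Callable[..., bool]] = None,
    priority: Any = None,
    supports_dynamic_shapes: bool = False,
) -> Callable[..., Any]:
    """Generate a TensorRT plugin and converter for an existing torch custom op.

    Args:
        op_name: Qualified name of the torch custom op, e.g. ``"my_lib::my_op"``.
        capability_validator: Optional predicate deciding whether a given graph
            node can be handled by the generated converter.
        priority: Priority of the generated converter relative to other
            registered converters for the same target.
        supports_dynamic_shapes: Whether the generated converter supports
            dynamic input shapes.
    """
    ...
```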

)


def generate_plugin(plugin_name: str):
Collaborator


Add a docstring
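A minimal docstring sketch for this function; the wording is a suggestion, not the text that was merged:

```python
def generate_plugin(plugin_name: str) -> None:
    """Generate and register a TensorRT plugin for a torch custom op.

    Args:
        plugin_name: Qualified name of the torch custom op
            (e.g. ``"my_lib::my_op"``) to wrap as a TensorRT QDP plugin.
    """
    ...
```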

Collaborator

@narendasan narendasan left a comment


LGTM after docstrings are added


Collaborator

@peri044 peri044 left a comment


Added minor comments. LGTM


for tensor_arg in tensor_args:

sample = {f"{i}": 5 for i in range(tensor_arg.ndim)}
Collaborator


What is 5 here?

Collaborator


If it's a default value or something, consider storing it in a global variable to make it more clear ?
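A sketch of that suggestion. The constant name `SAMPLE_DIM_EXTENT` and the helper function are hypothetical; only the value 5, used as a placeholder extent for every dimension of a sample input, comes from the snippet under review:

```python
from typing import Dict

# Hypothetical name for the magic number in the snippet above: the placeholder
# extent assigned to every dimension when building a sample input shape.
SAMPLE_DIM_EXTENT = 5

def make_sample_shape(ndim: int) -> Dict[str, int]:
    # One entry per dimension, keyed by dimension index as a string.
    return {f"{i}": SAMPLE_DIM_EXTENT for i in range(ndim)}
```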

outputs: Tuple[trtp.Tensor], stream: int, *args: Any, **kwargs: Any
) -> None:
tensor_args = [elem for elem in args if isinstance(elem, trtp.Tensor)]
print(args)
Collaborator


Is this necessary ? If so, can you make this message more effective for users ?

Collaborator Author


Looks like I forgot to delete the debugging lines. Thanks!

Collaborator

@zewenli98 zewenli98 left a comment


Added some minor comments

Comment on lines 174 to 175
tensor_args = [elem for elem in args if isinstance(elem, trtp.Tensor)]
print(args)
non_tensor_args = [elem for elem in args if not isinstance(elem, trtp.Tensor)]
Collaborator


Is it necessary to loop over args twice?

Collaborator Author


I can write it as:

tensor_args, non_tensor_args = [], []
for elem in args:
    (tensor_args if isinstance(elem, trtp.Tensor) else non_tensor_args).append(elem)

Since args won't be long, I think the first one will be easier to understand?

Collaborator


I think it's clearer and faster in this way:

tensor_args, non_tensor_args = [], []
for elem in args:
    if isinstance(elem, trtp.Tensor):
        tensor_args.append(elem)
    else:
        non_tensor_args.append(elem)
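For comparison, the single-pass version above can be exercised with a stand-in class (used here because `trtp.Tensor` requires TensorRT to be installed); the partition preserves the original ordering within each list:

```python
from typing import Any, List, Tuple

class Tensor:
    """Stand-in for trtp.Tensor, which needs the TensorRT plugin API installed."""

def partition(args: Tuple[Any, ...]) -> Tuple[List[Any], List[Any]]:
    # Single pass over args: tensors go to one list, everything else to the other.
    tensor_args: List[Any] = []
    non_tensor_args: List[Any] = []
    for elem in args:
        if isinstance(elem, Tensor):
            tensor_args.append(elem)
        else:
            non_tensor_args.append(elem)
    return tensor_args, non_tensor_args
```

Both this version and the two-comprehension version produce the same result; the difference is one traversal of `args` instead of two.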

@@ -58,13 +57,24 @@ def custom_kernel_converter(
# Assuming TensorRT preserves kwargs order like PyTorch does
non_tensor_inputs = plugin.input_attrs

kwargs = {}
Collaborator


Do you intend to override kwargs here? If yes, it seems kwargs is not necessary in the arguments (line 46).

Collaborator Author

@bowang007 bowang007 Feb 25, 2025


Yes, this kwargs cannot be removed, since it's required by the Torch-TensorRT converter calling convention here.
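The constraint being described is a fixed calling convention: the registry invokes every converter with the same positional arguments, so the `kwargs` parameter must stay in the signature even if the body rebuilds it. A minimal stand-in registry (not the actual Torch-TensorRT one, whose converters take `(ctx, target, args, kwargs, name)`-style arguments) illustrates this:

```python
from typing import Any, Callable, Dict, Tuple

# Hypothetical minimal registry; the real Torch-TensorRT converter registry
# is more involved, but the fixed calling convention is the point here.
CONVERTERS: Dict[str, Callable[..., Any]] = {}

def register(name: str) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
    def wrap(fn: Callable[..., Any]) -> Callable[..., Any]:
        CONVERTERS[name] = fn
        return fn
    return wrap

@register("custom_kernel")
def custom_kernel_converter(
    ctx: Any, target: str, args: Tuple[Any, ...], kwargs: Dict[str, Any], name: str
) -> Any:
    # kwargs may be rebuilt internally (as in the diff above), but the
    # parameter itself must exist because every converter is called as
    # fn(ctx, target, args, kwargs, name) by the registry.
    kwargs = {}
    return (target, len(args), kwargs, name)

# Callers always pass all five arguments, whether the converter uses them or not:
result = CONVERTERS["custom_kernel"](None, "my_op", (1, 2), {"k": 3}, "node0")
```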

@bowang007 bowang007 merged commit 31bdf77 into main Feb 25, 2025
50 of 68 checks passed