
[Encoding][Codegen] Add initial pad encoding layout attrs #19865

Merged (2 commits) Feb 3, 2025

Conversation

@kuhar (Member) commented Jan 31, 2025

These allow us to pad allocations without changing the logical tensor sizes or data layouts.

Split the encoding layout attribute interface into two:

  • One with target-specific information that allows us to decide layouts.
  • One with serialized target-agnostic padding information.

@kuhar kuhar marked this pull request as draft January 31, 2025 07:30
@kuhar kuhar force-pushed the pad-encoding-attrs-only branch from ad3c070 to f66bba0 Compare January 31, 2025 22:26
@kuhar kuhar changed the title [DNS][Codegen] Add initial pad encoding layout attrs [Codegen] Add initial pad encoding layout attrs Jan 31, 2025
@kuhar kuhar marked this pull request as ready for review January 31, 2025 22:29
@hanhanW (Contributor) left a comment

Nice, happy to see it in a better organization!

Comment on lines +159 to +169

```diff
 if (ArrayAttr layoutsAttr = getLayouts()) {
   if (!llvm::all_of(layoutsAttr.getValue(),
                     llvm::IsaPred<SerializedEncodingLayoutAttrInterface>)) {
     return nullptr;
   }

-  auto layoutsAttrArray =
-      llvm::to_vector_of<IREE::Encoding::EncodingLayoutAttrInterface>(
-          layoutsAttr.getValue());
   Value res;
-  for (auto attr : layoutsAttrArray) {
+  for (auto attr :
+       layoutsAttr.getAsRange<SerializedEncodingLayoutAttrInterface>()) {
```
Contributor:

Nice cleanup, thanks!

Comment on lines +37 to +40

```cpp
#include <cassert>
#include <cfloat>
#include <cstdint>
#include <numeric>
```
Contributor:

I don't remember which include style we follow. LLVM puts system headers at the end, while Google puts them in the middle. IREE mixes the two styles, which looks okay to me because our Codegen codebase is very close to the MLIR codebase.

@benvanik @MaheshRavishankar which style should we follow?

https://google.github.io/styleguide/cppguide.html#Names_and_Order_of_Includes
https://llvm.org/docs/CodingStandards.html#include-style

@kuhar (Member Author):

We should write it down somewhere on iree.dev. I remember the include order is something that we explicitly tweaked when switching to the llvm clang-format style: https://github.com/iree-org/iree/blob/main/compiler/.clang-format
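
For illustration, include grouping in clang-format is controlled by options like `IncludeBlocks` and `IncludeCategories`; a hypothetical fragment (not IREE's actual configuration, which lives at the linked `compiler/.clang-format`) might look like:

```yaml
# Hypothetical .clang-format fragment showing how include order can be tweaked.
BasedOnStyle: LLVM
SortIncludes: true
IncludeBlocks: Regroup
IncludeCategories:
  # Project headers first.
  - Regex: '^"'
    Priority: 1
  # Library headers next.
  - Regex: '^<(llvm|mlir)/'
    Priority: 2
  # System/standard headers last (LLVM convention).
  - Regex: '^<'
    Priority: 3
```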

@kuhar (Member Author) commented Feb 3, 2025

I need to rebase this on top of #19879

These allow us to pad allocations without changing the logical tensor
sizes or data layouts.

Split the encoding layout attribute interface into two:
* One with target-specific information that allows us to decide layouts.
* One with serialized target-agnostic padding information.

Signed-off-by: Jakub Kuderski <jakub@nod-labs.com>
@kuhar kuhar force-pushed the pad-encoding-attrs-only branch from 91a09e7 to b7af694 Compare February 3, 2025 19:13
@kuhar kuhar changed the title [Codegen] Add initial pad encoding layout attrs [Encoding][Codegen] Add initial pad encoding layout attrs Feb 3, 2025
@kuhar kuhar enabled auto-merge (squash) February 3, 2025 19:19
Signed-off-by: Jakub Kuderski <jakub@nod-labs.com>
@kuhar kuhar merged commit 86244de into iree-org:main Feb 3, 2025
41 of 42 checks passed
ita9naiwa pushed a commit to ita9naiwa/iree that referenced this pull request Feb 4, 2025
…9865)

These allow us to pad allocations without changing the logical tensor
sizes or data layouts.

Split the encoding layout attribute interface into two:
* One with target-specific information that allows us to decide layouts.
* One with serialized target-agnostic padding information.

Signed-off-by: Jakub Kuderski <jakub@nod-labs.com>
Signed-off-by: Hyunsung Lee <ita9naiwa@gmail.com>