
[mlir][Linalg]: Optimize linalg generic in transform::PromoteOp to avoid unnecessary copies #68555

Merged (1 commit) on Oct 14, 2023

Conversation

AviadCo (Contributor) commented Oct 9, 2023

If the operands are not used in the payload of generic linalg operations, there is no need to copy them before the operation.
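
As an illustration (the snippet below is written for this description, not taken from the patch), consider a linalg.generic whose payload reads only the two ins block arguments and never reads %out; the buffer promoted for the outs operand still needs to be allocated, but copying the original data into it before the op runs is unnecessary:

```mlir
#map = affine_map<(d0, d1) -> (d0, d1)>
func.func @add(%a: memref<4x3xf32>, %b: memref<4x3xf32>, %c: memref<4x3xf32>) {
  linalg.generic {indexing_maps = [#map, #map, #map],
                  iterator_types = ["parallel", "parallel"]}
    ins(%a, %b : memref<4x3xf32>, memref<4x3xf32>)
    outs(%c : memref<4x3xf32>) {
  ^bb0(%in: f32, %in_0: f32, %out: f32):
    // %out is never read, so payloadUsesValueFromOperand returns false for the
    // outs operand and the copy-in for its promoted buffer can be skipped.
    %0 = arith.addf %in, %in_0 : f32
    linalg.yield %0 : f32
  }
  return
}
```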

llvmbot (Member) commented Oct 9, 2023

@llvm/pr-subscribers-mlir-gpu
@llvm/pr-subscribers-mlir-linalg

@llvm/pr-subscribers-mlir

Changes

If the operands are not used in the payload of generic linalg operations, there is no need to copy them before the operation.


Full diff: https://github.com/llvm/llvm-project/pull/68555.diff

3 Files Affected:

  • (modified) mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp (+10)
  • (modified) mlir/test/Dialect/GPU/promotion.mlir (+1)
  • (modified) mlir/test/Dialect/Linalg/promote.mlir (-1)
diff --git a/mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp b/mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp
index ad399f57f72cb1b..a131f3097666197 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp
@@ -28,6 +28,7 @@
 #include "mlir/Transforms/FoldUtils.h"
 #include "llvm/ADT/MapVector.h"
 #include "llvm/ADT/SmallBitVector.h"
+#include "llvm/ADT/SmallSet.h"
 #include "llvm/ADT/TypeSwitch.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/Debug.h"
@@ -142,6 +143,8 @@ struct LinalgOpInstancePromotionOptions {
                                    const LinalgPromotionOptions &options);
   /// SubViews to promote.
   MapVector<int64_t, Value> subViews;
+  /// Subviews operand numbers to copy in using copyInFn.
+  llvm::SmallSet<int64_t, 4> operandsNumbersToCopyIn;
   /// True if the full view should be used for the promoted buffer.
   DenseMap<Value, bool> useFullTileBuffers;
 
@@ -174,6 +177,11 @@ LinalgOpInstancePromotionOptions::LinalgOpInstancePromotionOptions(
     Operation *op = opOperand.get().getDefiningOp();
     if (auto sv = dyn_cast_or_null<memref::SubViewOp>(op)) {
       subViews[operandNumber] = sv;
+      // In case of linalg generic, copy in only if subview is used in linalg
+      // payload.
+      if (!isa<linalg::GenericOp>(linalgOp) ||
+          linalgOp.payloadUsesValueFromOperand(&opOperand))
+        operandsNumbersToCopyIn.insert(operandNumber);
       useFullTileBuffers[sv] = vUseFullTileBuffers[operandNumber];
     }
   }
@@ -324,6 +332,8 @@ promoteSubViews(ImplicitLocOpBuilder &b,
     auto info = promotionInfoMap.find(v.first);
     if (info == promotionInfoMap.end())
       continue;
+    if (options.operandsNumbersToCopyIn.count(v.first) == 0)
+      continue;
     if (failed(options.copyInFn(
             b, cast<memref::SubViewOp>(v.second.getDefiningOp()),
             info->second.partialLocalView)))
diff --git a/mlir/test/Dialect/GPU/promotion.mlir b/mlir/test/Dialect/GPU/promotion.mlir
index b4668b56788943d..2da1be597753bfd 100644
--- a/mlir/test/Dialect/GPU/promotion.mlir
+++ b/mlir/test/Dialect/GPU/promotion.mlir
@@ -1,3 +1,4 @@
+
 // RUN: mlir-opt -allow-unregistered-dialect -pass-pipeline='builtin.module(gpu.module(gpu.func(test-gpu-memory-promotion)))' -split-input-file %s | FileCheck %s
 
 gpu.module @foo {
diff --git a/mlir/test/Dialect/Linalg/promote.mlir b/mlir/test/Dialect/Linalg/promote.mlir
index 5cd56db7fd2d88c..31b29c0e105d99d 100644
--- a/mlir/test/Dialect/Linalg/promote.mlir
+++ b/mlir/test/Dialect/Linalg/promote.mlir
@@ -353,7 +353,6 @@ func.func @linalg_generic_update_all_function_inputs_outputs(%arg0: memref<3x4xf
   // CHECK:           %[[VAL_62:.*]] = memref.subview %[[VAL_61]][0, 0] {{\[}}%[[VAL_52]], %[[VAL_55]]] [1, 1] : memref<?x?xf32, #gpu.address_space<workgroup>> to memref<?x?xf32, strided<[?, 1], offset: ?>, #gpu.address_space<workgroup>>
   // CHECK:           memref.copy %[[VAL_3]], %[[VAL_24]] : memref<4x3xf32, strided<[4, 1]>, 1> to memref<?x?xf32, strided<[?, 1], offset: ?>, #gpu.address_space<workgroup>>
   // CHECK:           memref.copy %[[VAL_4]], %[[VAL_43]] : memref<4x3xf32, strided<[4, 1]>, 1> to memref<?x?xf32, strided<[?, 1], offset: ?>, #gpu.address_space<workgroup>>
-  // CHECK:           memref.copy %[[VAL_5]], %[[VAL_62]] : memref<4x3xf32, strided<[4, 1]>, 1> to memref<?x?xf32, strided<[?, 1], offset: ?>, #gpu.address_space<workgroup>>
   // CHECK:           linalg.generic {doc = "", indexing_maps = [#map, #map, #map], iterator_types = ["parallel", "parallel"], library_call = ""} ins(%[[VAL_24]], %[[VAL_43]] : memref<?x?xf32, strided<[?, 1], offset: ?>, #gpu.address_space<workgroup>>, memref<?x?xf32, strided<[?, 1], offset: ?>, #gpu.address_space<workgroup>>) outs(%[[VAL_62]] : memref<?x?xf32, strided<[?, 1], offset: ?>, #gpu.address_space<workgroup>>) {
   // CHECK:           ^bb0(%[[VAL_63:.*]]: f32, %[[VAL_64:.*]]: f32, %[[VAL_65:.*]]: f32):
   // CHECK:             %[[VAL_66:.*]] = arith.addf %[[VAL_63]], %[[VAL_64]] : f32

AviadCo (Contributor, Author) commented Oct 10, 2023

Adding @aniragil @amrami @amirBish as subscribers

@@ -174,6 +177,11 @@ LinalgOpInstancePromotionOptions::LinalgOpInstancePromotionOptions(
Operation *op = opOperand.get().getDefiningOp();
if (auto sv = dyn_cast_or_null<memref::SubViewOp>(op)) {
subViews[operandNumber] = sv;
// In case of linalg generic, copy in only if subview is used in linalg
// payload.
if (!isa<linalg::GenericOp>(linalgOp) ||
chelini (Contributor) commented Oct 10, 2023

Why is it tied to a linalg.generic? Can the optimization work for any LinalgOp?

AviadCo (Contributor, Author) replied

@chelini In the case of linalg.generic, it is easy to know whether an input/output operand is being used, by checking whether the operand is used in the payload for anything other than being written to (which is what payloadUsesValueFromOperand checks).
In the case of an arbitrary LinalgOp, I am not sure it is trivial, since some operations don't have a payload (e.g. linalg.copy).

One thing we can do is add an interface for each LinalgOp to implement, to check whether an output operand (init) might be used before it is assigned.

chelini (Contributor) replied

payloadUsesValueFromOperand is an interface method of LinalgStructuredInterface, and the copy operation implements LinalgStructuredInterface too. Looking at LinalgNamedStructuredOps.yamlgen.td (you can find it in the build dir), CopyOp "derives" from LinalgStructuredBase_Op, which implements LinalgStructuredInterface, so payloadUsesValueFromOperand is well defined for CopyOp as well.

AviadCo (Contributor, Author) commented Oct 11, 2023

You are right that CopyOp has payloadUsesValueFromOperand, but in that example the ins operand needs promotion although payloadUsesValueFromOperand will return false for it, since linalg.copy has no payload.
I believe that we want an interface that, for each LinalgOp, returns whether the init operands are used before being written into.
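
To make the point concrete (the snippets below are written for this thread, not taken from the patch), compare a named linalg.copy, which prints without a payload region, with roughly its generalized linalg.generic form, where the payload simply yields the input. Whichever answer payloadUsesValueFromOperand gives for the named form, the ins operand still has to be copied into the promoted buffer:

```mlir
// Named form: no textual payload region to inspect.
func.func @copy(%src: memref<4x3xf32>, %dst: memref<4x3xf32>) {
  linalg.copy ins(%src : memref<4x3xf32>) outs(%dst : memref<4x3xf32>)
  return
}

// Roughly the generalized form (e.g. via -linalg-generalize-named-ops):
// the payload yields %in, so %src is clearly consumed and must be copied in
// when it is promoted.
#map = affine_map<(d0, d1) -> (d0, d1)>
func.func @copy_generic(%src: memref<4x3xf32>, %dst: memref<4x3xf32>) {
  linalg.generic {indexing_maps = [#map, #map],
                  iterator_types = ["parallel", "parallel"]}
    ins(%src : memref<4x3xf32>) outs(%dst : memref<4x3xf32>) {
  ^bb0(%in: f32, %out: f32):
    linalg.yield %in : f32
  }
  return
}
```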

chelini (Contributor) left a comment

Thank you, looks good to me. But let's wait for @nicolasvasilache to chime in before landing.

nicolasvasilache (Contributor) left a comment

Works for me, but indeed we should make these interfaces work properly on the non-generic ops.

AviadCo (Contributor, Author) commented Oct 14, 2023

@nicolasvasilache I agree. I will merge this optimization for linalg.generic and work on a solution to make it work for other linalg operations.
