Commit
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into replace_opresult_with_value
huangjiyi committed Jan 15, 2024
2 parents 73c620d + f378e96 commit 51f712e
Showing 259 changed files with 9,933 additions and 1,757 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -109,6 +109,7 @@ paddle/fluid/pir/dialect/operator/ir/op_decomp.cc
paddle/fluid/pir/dialect/operator/ir/pd_op_vjp.cc
paddle/fluid/pir/dialect/operator/ir/pd_op.*
paddle/fluid/pir/dialect/operator/ir/onednn_op.*
paddle/fluid/pir/dialect/operator/ir/pd_onednn_op.*
paddle/fluid/pir/dialect/operator/ir/pd_onednn_op_info.*
paddle/fluid/pir/dialect/operator/ir/pd_op_bwd.*
paddle/fluid/pir/dialect/operator/ir/pd_op_fused.*
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ English | [简体中文](./README_cn.md) | [日本語](./README_ja.md)
Welcome to the PaddlePaddle GitHub.

PaddlePaddle, as the first independent R&D deep learning platform in China, has been officially open-sourced to professional communities since 2016. It is an industrial platform with advanced technologies and rich features that cover core deep learning frameworks, basic model libraries, end-to-end development kits, tools & components as well as service platforms.
PaddlePaddle originated from industrial practice and is committed to industrialization. It has been widely adopted across sectors including manufacturing, agriculture, and enterprise services, serving more than 8 million developers and 220,000 companies and generating 800,000 models. With such advantages, PaddlePaddle has helped an increasing number of partners commercialize AI.
PaddlePaddle originated from industrial practice and is committed to industrialization. It has been widely adopted across sectors including manufacturing, agriculture, and enterprise services, serving more than 10.7 million developers and 235,000 companies and generating 860,000 models. With such advantages, PaddlePaddle has helped an increasing number of partners commercialize AI.

## Installation

2 changes: 1 addition & 1 deletion README_cn.md
@@ -14,7 +14,7 @@

欢迎来到 PaddlePaddle GitHub

飞桨(PaddlePaddle)以百度多年的深度学习技术研究和业务应用为基础,是中国首个自主研发、功能完备、 开源开放的产业级深度学习平台,集深度学习核心训练和推理框架、基础模型库、端到端开发套件和丰富的工具组件于一体。目前,飞桨累计开发者800万,服务企业22万家,基于飞桨开源深度学习平台产生了80万个模型。飞桨助力开发者快速实现AI想法,快速上线AI业务。帮助越来越多的行业完成AI赋能,实现产业智能化升级。
飞桨(PaddlePaddle)以百度多年的深度学习技术研究和业务应用为基础,是中国首个自主研发、功能完备、 开源开放的产业级深度学习平台,集深度学习核心训练和推理框架、基础模型库、端到端开发套件和丰富的工具组件于一体。目前,飞桨累计开发者1070万,服务企业23.5万家,基于飞桨开源深度学习平台产生了86万个模型。飞桨助力开发者快速实现AI想法,快速上线AI业务。帮助越来越多的行业完成AI赋能,实现产业智能化升级。

## 安装

2 changes: 1 addition & 1 deletion README_ja.md
@@ -15,7 +15,7 @@
PaddlePaddle GitHub へようこそ。

PaddlePaddle は中国初の独立系 R&D ディープラーニングプラットフォームとして、2016年からプロのコミュニティに正式にオープンソース化されました。コアとなる深層学習フレームワーク、基本モデルライブラリ、エンドツーエンドの開発キット、ツール&コンポーネント、さらにサービスプラットフォームを網羅する、高度な技術と豊富な機能を備えた産業プラットフォームです。
PaddlePaddle は、工業化に対するコミットメントを持つ工業的実践から生まれたものです。製造業、農業、企業サービスなど幅広い分野で採用され、800万人以上の開発者、22万以上の企業、80万以上のモデルを生み出しています。それにより PaddlePaddle は、ますます多くのパートナーの AI 商用化を支援しています。
PaddlePaddle は、工業化に対するコミットメントを持つ工業的実践から生まれたものです。製造業、農業、企業サービスなど幅広い分野で採用され、1070万人以上の開発者、23.5万以上の企業、86万以上のモデルを生み出しています。それにより PaddlePaddle は、ますます多くのパートナーの AI 商用化を支援しています。

## インストール

2 changes: 0 additions & 2 deletions cmake/FindNumPy.cmake
@@ -3,8 +3,6 @@
# NUMPY_FOUND
# will be set by this script

cmake_minimum_required(VERSION 2.6)

if(NOT PYTHON_EXECUTABLE)
if(NumPy_FIND_QUIETLY)
find_package(PythonInterp QUIET)
2 changes: 1 addition & 1 deletion cmake/external/eigen.cmake
@@ -53,7 +53,7 @@ if(CMAKE_COMPILER_IS_GNUCC)
list(GET GCC_VERSION_COMPONENTS 0 GCC_MAJOR)
list(GET GCC_VERSION_COMPONENTS 1 GCC_MINOR)
set(GCC_VERSION "${GCC_MAJOR}.${GCC_MINOR}")
if(GCC_VERSION GREATER_EQUAL "12.0")
if(GCC_VERSION GREATER_EQUAL 12.0)
file(TO_NATIVE_PATH ${PADDLE_SOURCE_DIR}/patches/eigen/Complex.h.patch
complex_header)
set(EIGEN_PATCH_COMMAND
8 changes: 4 additions & 4 deletions paddle/cinn/common/arithmatic_test.cc
@@ -33,16 +33,16 @@ using namespace ir; // NOLINT

TEST(GiNaC, simplify) {
using namespace GiNaC; // NOLINT
symbol x("x");
symbol y("y");
GiNaC::symbol x("x");
GiNaC::symbol y("y");

ex e = x * 0 + 1 + 2 + 3 - 100 + 30 * y - y * 21 + 0 * x;
LOG(INFO) << "e: " << e;
}

TEST(GiNaC, diff) {
using namespace GiNaC; // NOLINT
symbol x("x"), y("y");
GiNaC::symbol x("x"), y("y");
ex e = (x + 1);
ex e1 = (y + 1);

@@ -54,7 +54,7 @@ TEST(GiNaC, diff) {

TEST(GiNaC, solve) {
using namespace GiNaC; // NOLINT
symbol x("x"), y("y");
GiNaC::symbol x("x"), y("y");

lst eqns{2 * x + 3 == 19};
lst vars{x};
52 changes: 51 additions & 1 deletion paddle/cinn/common/broadcast_tree.cc
@@ -185,8 +185,8 @@ using Pattern2Placement = std::unordered_map<symbol::DimExpr, symbol::DimExpr>;
Pattern2Placement ConstructCstrLhsEqRhsReplacement(
const symbol::Broadcastable<symbol::DimExpr>& broadcastable_condition) {
auto [lhs, rhs] = *broadcastable_condition;
if (lhs.isa<std::string>()) return Pattern2Placement{{lhs, rhs}};
if (rhs.isa<std::string>()) return Pattern2Placement{{rhs, lhs}};
return Pattern2Placement{{lhs, rhs}};
}

@@ -295,4 +295,54 @@ BroadcastTree ConstructBroadcastTree(const BroadcastLeaf& leaves) {
return ConstructBroadcastBranch(broadcastable_condition.value(), leaves);
}

namespace {

std::string ToTxtStringImpl(const BroadcastBranch<BroadcastTree>& branch) {
std::stringstream ss;
const auto& [cstr, lhs_eq_rhs, lhs_eq_one, rhs_eq_one] = branch.tuple();
const auto& [lhs, rhs] = *cstr;
const auto& Put = [&](const std::string& key, const auto& value) {
ss << "\"" << key << "\": ";
ss << ToTxtString(value);
ss << ",\n ";
};
ss << "{";
ss << "\"$lhs\": " << lhs << ",\n ";
ss << "\"$rhs\": " << rhs << ",\n ";
Put("$lhs == $rhs", lhs_eq_rhs);
Put("$lhs == 1", lhs_eq_one);
Put("$rhs == 1", rhs_eq_one);
ss << "}";
return ss.str();
}

std::string ToTxtStringImpl(const BroadcastLeaf& leaf) {
std::stringstream ss;
ss << "[";
for (const auto& dim_exprs : *leaf) {
ss << "[";
int j = 0;
for (const auto& dim_expr : dim_exprs) {
if (j++) {
ss << ",";
}
ss << dim_expr;
}
ss << "]";
}
ss << "]";
return ss.str();
}

} // namespace

std::string ToTxtString(const BroadcastTree& tree) {
return std::visit([&](const auto& impl) { return ToTxtStringImpl(impl); },
tree.variant());
}

std::ostream& operator<<(std::ostream& os, const BroadcastTree& tree) {
os << ToTxtString(tree);
return os;
}

} // namespace cinn::common
4 changes: 4 additions & 0 deletions paddle/cinn/common/broadcast_tree.h
@@ -31,4 +31,8 @@ using BroadcastTree = adt::Tree<BroadcastBranch, BroadcastLeaf>;

BroadcastTree ConstructBroadcastTree(const BroadcastLeaf& leaves);

std::string ToTxtString(const BroadcastTree&);

std::ostream& operator<<(std::ostream& os, const BroadcastTree& tree);

} // namespace cinn::common
65 changes: 64 additions & 1 deletion paddle/cinn/hlir/dialect/operator/ir/manual_op.cc
@@ -16,6 +16,8 @@

#include <vector>
#include "glog/logging.h"
#include "paddle/cinn/common/dim_expr_simplify.h"
#include "paddle/cinn/hlir/dialect/operator/ir/generate_shape_util.h"
#include "paddle/common/ddim.h"
#include "paddle/common/enforce.h"
#include "paddle/fluid/pir/dialect/operator/ir/ir_meta_tensor.h"
@@ -190,7 +192,13 @@ void GenerateShapeOp::Build(
const std::vector<pir::Value>& inputs,
const std::vector<pir::Attribute>& output_dim_exprs,
const GenerateShapeOp::SymbolBindings& symbol_bindings) {
CHECK(!inputs.empty());
CHECK(!inputs.empty()) << ". output_dim_exprs: " << [&] {
std::stringstream ss;
for (const auto& attr : output_dim_exprs) {
ss << attr;
}
return ss.str();
}();
argument.AddInputs(inputs);
argument.AddAttribute("output_dim_exprs",
builder.array_attr(output_dim_exprs));
@@ -345,6 +353,61 @@ GenerateShapeOp::ConvertAttributeToSymbolBindings(
return std::move(ret);
}

bool GenerateShapeOp::InferSymbolicShape(
pir::ShapeConstraintIRAnalysis* shape_analysis) {
auto GetShapeOrDataDimExprs =
[&](pir::Value value) -> const symbol::ShapeOrDataDimExprs& {
return shape_analysis->GetShapeOrDataForValue(value);
};
auto SetShapeOrDataDimExprs =
[&](pir::Value value, const symbol::ShapeOrDataDimExprs& dim_exprs) {
shape_analysis->SetShapeOrDataForValue(value, dim_exprs);
};
const auto attr_dim_exprs = [&] {
std::vector<symbol::DimExpr> dim_exprs{};
pir::Attribute dim_expr_attr = this->attributes().at("output_dim_exprs");
CHECK(dim_expr_attr.isa<pir::ArrayAttribute>());
auto array = dim_expr_attr.dyn_cast<pir::ArrayAttribute>();
for (int i = 0; i < array.size(); ++i) {
const auto& dim_expr = ConvertAttributeToDimExpr(array.at(i));
CHECK(dim_expr.has_value());
dim_exprs.push_back(dim_expr.value());
}
return dim_exprs;
}();
const auto symbol_bindings = [&] {
pir::Attribute symbol_bindings_attr =
this->attributes().at("symbol_bindings");
auto symbol_bindings =
ConvertAttributeToSymbolBindings(symbol_bindings_attr);
CHECK(symbol_bindings.has_value());
return symbol_bindings.value();
}();
auto DimExprs4InputDim =
[&](int input_idx) -> const symbol::ShapeOrDataDimExprs& {
pir::Value input = this->operand_source(input_idx);
return GetShapeOrDataDimExprs(input);
};
auto DimExprs4SymbolName =
MakeGetterDimExpr4SymbolName(symbol_bindings, DimExprs4InputDim);
const auto substituted_dim_exprs = [&] {
std::vector<symbol::DimExpr> dim_exprs{};
dim_exprs.reserve(attr_dim_exprs.size());
for (const auto& attr_dim_expr : attr_dim_exprs) {
const auto& substituted =
SubstituteDimExpr(attr_dim_expr, DimExprs4SymbolName);
const auto& simplified = common::SimplifyDimExpr(substituted);
dim_exprs.push_back(simplified);
}
return dim_exprs;
}();
const auto shape_or_data_dim_exprs =
symbol::ShapeOrDataDimExprs::MakeConsistentShapeOrData(
substituted_dim_exprs);
SetShapeOrDataDimExprs(this->out(), shape_or_data_dim_exprs);
return true;
}

} // namespace dialect
} // namespace cinn

8 changes: 7 additions & 1 deletion paddle/cinn/hlir/dialect/operator/ir/manual_op.h
@@ -14,13 +14,15 @@

#pragma once
#include <variant>
#include "paddle/fluid/pir/dialect/operator/interface/infer_symbolic_shape.h"
#include "paddle/phi/core/infermeta_utils.h"
#include "paddle/pir/core/builder.h"
#include "paddle/pir/core/dll_decl.h"
#include "paddle/pir/core/ir_printer.h"
#include "paddle/pir/core/op_base.h"
#include "paddle/pir/core/operation.h"
#include "paddle/pir/core/operation_utils.h"
#include "paddle/pir/dialect/shape/utils/shape_utils.h"

namespace cinn {
namespace dialect {
@@ -83,7 +85,9 @@ class IR_API SplitOp : public pir::Op<SplitOp> {
void VerifySig() const {}
};

class IR_API GenerateShapeOp : public pir::Op<GenerateShapeOp> {
class IR_API GenerateShapeOp
: public pir::Op<GenerateShapeOp,
paddle::dialect::InferSymbolicShapeInterface> {
public:
using Op::Op;
static const char *name() { return "cinn_op.generate_shape"; }
@@ -113,6 +117,8 @@ class IR_API GenerateShapeOp : public pir::Op<GenerateShapeOp> {

pir::OpResult out() { return result(0); }

bool InferSymbolicShape(pir::ShapeConstraintIRAnalysis *shape_analysis);

static pir::Attribute ConvertSymbolBindingsToAttribute(
pir::Builder &builder, const SymbolBindings &symbol_bindings); // NOLINT
static std::optional<SymbolBindings> ConvertAttributeToSymbolBindings(
6 changes: 2 additions & 4 deletions paddle/cinn/hlir/dialect/operator/transforms/CMakeLists.txt
@@ -11,9 +11,7 @@ if(NOT CINN_ONLY)
cinn_runtime_dialect
pir_compiler)

cc_library(
cinn_transforms
SRCS ${cinn_transforms_srcs}
DEPS ${cinn_transforms_deps})
cinn_cc_library(cinn_transforms SRCS ${cinn_transforms_srcs} DEPS
${cinn_transforms_deps})

endif()
@@ -139,9 +139,8 @@ bool ProcessOp(paddle::dialect::ExpandOp op, pir::PatternRewriter* rewriter) {
pir::ShapeConstraintIRAnalysis& shape_analysis =
pir::ShapeAnalysisManager::Instance().Get(
op.x().defining_op()->GetParentProgram());
CHECK(shape_analysis.value_id_to_shapeordata_.find(GetValueId(&value)) !=
shape_analysis.value_id_to_shapeordata_.end());
return shape_analysis.value_id_to_shapeordata_.at(GetValueId(&value));

return shape_analysis.GetShapeOrDataForValue(value);
};
std::optional<pir::Value> opt_generated_shape =
GetOutOfRewritedGenerateShapeOp(
