[tmva] Fix warnings in TMVA #14265

Merged · 2 commits · Dec 18, 2023
8 changes: 4 additions & 4 deletions tmva/sofie/inc/TMVA/ROperator_Comparision.hxx
@@ -68,12 +68,12 @@ public:
fNX1(UTILITY::Clean_name(nameX1)), fNX2(UTILITY::Clean_name(nameX2)), fNY(UTILITY::Clean_name(nameY)){}

// type of output given input
-   std::vector<ETensorType> TypeInference(std::vector<ETensorType> input){
+   std::vector<ETensorType> TypeInference(std::vector<ETensorType> input) override {
return input;
}

// shape of output tensors given input tensors
-   std::vector<std::vector<size_t>> ShapeInference(std::vector<std::vector<size_t>> input){
+   std::vector<std::vector<size_t>> ShapeInference(std::vector<std::vector<size_t>> input) override {
auto ret = input; // return vector size 1 with first input
return ret;
}
@@ -132,7 +132,7 @@ public:
model.AddIntermediateTensor(fNY, ETensorType::BOOL , fShapeY);
}

-   std::string Generate(std::string OpName){
+   std::string Generate(std::string OpName) override {
OpName = "op_" + OpName;

if (fShapeY.empty()) {
@@ -176,4 +176,4 @@ public:
}//TMVA


-#endif //TMVA_SOFIE_ROperator_Comparision
+#endif //TMVA_SOFIE_ROperator_Comparision
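The `override` additions above are what silence the warnings. As a hedged illustration (using a hypothetical base/derived pair, not the real SOFIE class hierarchy), marking a virtual override explicitly lets the compiler verify the signature against the base class, so warnings such as Clang's `-Winconsistent-missing-override` go away and a signature typo becomes a hard error instead of silently declaring a new virtual function:

```cpp
#include <vector>

// Hypothetical minimal base/derived pair (not the actual SOFIE classes)
// showing what the `override` keyword added by this PR buys.
struct ROperatorBase {
    virtual ~ROperatorBase() = default;
    virtual std::vector<int> TypeInference(std::vector<int> input) = 0;
};

struct ROperatorExample : public ROperatorBase {
    // `override` asks the compiler to confirm this really overrides a base
    // member; the pass-through body mirrors the patched operator above.
    std::vector<int> TypeInference(std::vector<int> input) override {
        return input;
    }
};
```

Had the derived declaration differed from the base (say, a missing parameter), `override` would turn the mismatch into a compile-time error.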
4 changes: 2 additions & 2 deletions tmva/tmva/src/MethodDL.cxx
@@ -467,11 +467,11 @@ void MethodDL::ParseInputLayout()
// when we will support 3D convolutions we would need to add extra 1's
if (inputShape.size() == 2) {
// case of dense layer where only width is specified
-   inputShape.insert(inputShape.begin() + 1, {1,1});
+   inputShape = {inputShape[0], 1, 1, inputShape[1]};
Member:

Do you know why this gives a warning?

Contributor Author:

No, sorry. It could even be a compiler bug, or maybe the iterator is invalidated after the first insertion in case a re-allocation happens.

In any case, I think the new formulation of the logic is cleaner, so I wouldn't try harder to understand this warning.

Member:

ok, fine!

}
else if (inputShape.size() == 3) {
//e.g. case of RNN T,W -> T,1,W
-   inputShape.insert(inputShape.begin() + 2, 1);
+   inputShape = {inputShape[0], inputShape[1], 1, inputShape[2]};
}

this->SetInputShape(inputShape);
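The reshaping logic in this hunk can be sketched as a standalone helper (an assumed function for illustration, not the actual TMVA method): a 2- or 3-element input shape is padded to the 4-element {batch, depth, height, width} layout. Building a fresh vector, as the PR now does, sidesteps the insert-into-the-vector-being-read pattern that apparently triggered the warning:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper mirroring the logic of MethodDL::ParseInputLayout:
// pad a partially specified shape to {batch, depth, height, width}.
std::vector<std::size_t> PadInputShape(std::vector<std::size_t> inputShape) {
    if (inputShape.size() == 2) {
        // dense layer where only batch size and width are given: {B, W} -> {B, 1, 1, W}
        inputShape = {inputShape[0], 1, 1, inputShape[1]};
    } else if (inputShape.size() == 3) {
        // e.g. RNN case: {B, T, W} -> {B, T, 1, W}
        inputShape = {inputShape[0], inputShape[1], 1, inputShape[2]};
    }
    return inputShape;  // 4-element shapes pass through unchanged
}
```

Assigning a brand-new braced-init vector replaces the old contents in one step, whereas the previous `insert(begin() + 1, {1,1})` call computed an insertion iterator into the very vector being modified, which the contributor speculated might interact badly with re-allocation.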