[Macro] Increase macro constant MAX_RANK_SUPPORTED #63061
Conversation
[pull] develop from PaddlePaddle:develop
Your PR was submitted successfully. Thank you for your contribution to the open-source project!
❌ The PR was not created using the PR template. You can refer to this Demo.
LGTM
PR Category
Operator Mechanism
PR Types
Improvements
Description
The `prod_grad` composite OP fails when the input has `x.ndim > 6`, because `prod_grad` calls the `expand` OP in its implementation. A straightforward fix is to increase `MAX_RANK_SUPPORTED` from 6 to 8. After this modification, the 8-D unit tests pass successfully, as shown below:
$ python test/legacy_test/test_reduce_op.py
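To illustrate why `prod_grad` depends on the `expand` OP's rank limit, here is a minimal NumPy sketch (not Paddle's actual implementation): the gradient of `y = prod(x, axis)` is `prod(x) / x`, so the reduced result must be expanded (broadcast) back to the input's full rank, and that expansion is what `MAX_RANK_SUPPORTED` caps. The function name and shapes below are illustrative only.

```python
import numpy as np

def prod_grad(x, axis, dout):
    """Gradient of y = prod(x, axis) w.r.t. x, for nonzero x.

    `dout` has y's shape with `axis` kept as size 1 for broadcasting.
    """
    # Forward: reduce-prod along `axis`, keeping the dim for broadcasting.
    y = np.prod(x, axis=axis, keepdims=True)
    # Backward: d prod(x) / dx_i = prod(x) / x_i. The reduced result is
    # expanded (broadcast) back to x's full rank -- in Paddle this expand
    # step is the one bounded by MAX_RANK_SUPPORTED, hence the failure
    # for inputs with ndim > 6 before the limit was raised to 8.
    return np.broadcast_to(dout * y, x.shape) / x

# An 8-D input -- above the old rank limit of 6.
x = np.full((2,) * 8, 2.0)
dout = np.ones((2,) * 7 + (1,))
g = prod_grad(x, axis=-1, dout=dout)
# prod over the last axis is 2.0 * 2.0 = 4.0, so each partial is 4.0 / 2.0 = 2.0
print(g.shape)  # (2, 2, 2, 2, 2, 2, 2, 2)
```

The same broadcast-back pattern appears in the composite gradient of any multiplicative reduction, which is why raising the `expand` rank cap is sufficient to fix the 8-D case.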