forked from PaddlePaddle/docs
Commit
update compare docs (PaddlePaddle#6330)
Fix the Chinese and English documentation.
Showing 2 changed files with 57 additions and 0 deletions.
@@ -0,0 +1,27 @@

.. _cn_api_paddle_amp_debugging_check_layer_numerics:

check_layer_numerics
-------------------------------

.. py:function:: paddle.amp.debugging.check_layer_numerics(func)

This decorator checks the numerical values of a layer's input and output data.

Parameters
::::::::::::

- **func** (callable) – The function to be decorated.

Returns
::::::::::::

``callable``, the decorated ``func``.

Code Examples
::::::::::::::

COPY-FROM: paddle.amp.debugging.check_layer_numerics
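
The actual example is pulled in by the COPY-FROM directive above. As a rough orientation only, here is a minimal sketch of how such a decorator is typically applied, assuming it wraps the ``forward`` method of a ``paddle.nn.Layer``; the layer class, parameter shapes, and input data below are illustrative and not taken from the official example.

.. code-block:: python

    # Hedged sketch, not the official example: assumes the decorator is placed
    # directly on a Layer's forward method; names and shapes are illustrative.
    import paddle

    class MyLayer(paddle.nn.Layer):
        def __init__(self):
            super().__init__()
            self._w = self.create_parameter([2, 3], dtype='float32')
            self._b = self.create_parameter([2, 3], dtype='float32')

        @paddle.amp.debugging.check_layer_numerics
        def forward(self, x):
            # The inputs and outputs of this call are checked for abnormal
            # numeric values by the decorator.
            return x * self._w + self._b

    x = paddle.randn([2, 3], dtype='float32')
    layer = MyLayer()
    out = layer(x)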
docs/api/paddle/incubate/nn/fused_linear_activation_cn.rst: 30 additions & 0 deletions
@@ -0,0 +1,30 @@

.. _cn_api_paddle_incubate_nn_functional_fused_linear_activation:

fused_linear_activation
-------------------------------

.. py:function:: paddle.incubate.nn.functional.fused_linear_activation(x, y, bias, trans_x=False, trans_y=False, activation=None)

Fully connected linear and activation transformation operator. This method requires a CUDA version greater than or equal to 11.6.

Parameters
::::::::::::

- **x** (Tensor) – The input Tensor to be multiplied.
- **y** (Tensor) – The weight Tensor to be multiplied. Its rank must be 2.
- **bias** (Tensor) – The input bias Tensor; it is added to the result of the matrix multiplication.
- **trans_x** (bool, optional) – Whether to transpose ``x`` before the multiplication. Default: False.
- **trans_y** (bool, optional) – Whether to transpose ``y`` before the multiplication. Default: False.
- **activation** (str, optional) – Currently, the available activation functions are limited to "GELU" (Gaussian Error Linear Unit) and "ReLU" (Rectified Linear Unit). The activation is applied to the output of the bias addition. Default: None.

Returns
::::::::::::

The output ``Tensor``.

Code Examples
::::::::::::::

COPY-FROM: paddle.incubate.nn.functional.fused_linear_activation
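
As with the first file, the real example is pulled in by the COPY-FROM directive. The sketch below only illustrates the call shape under stated assumptions: a GPU build of Paddle with CUDA >= 11.6, ``float16`` inputs accepted by the fused kernel, and a lowercase ``'relu'`` activation string; the exact accepted values should be taken from the upstream example.

.. code-block:: python

    # Hedged sketch, not the official example. Assumptions: GPU build of
    # Paddle, CUDA >= 11.6, float16 inputs, lowercase 'relu' activation string.
    import paddle
    from paddle.incubate.nn.functional import fused_linear_activation

    paddle.set_device('gpu')  # the fused kernel runs on GPU only

    x = paddle.randn([3, 4], dtype='float16')       # input to be multiplied
    weight = paddle.randn([4, 5], dtype='float16')  # rank-2 weight Tensor
    bias = paddle.randn([5], dtype='float16')       # added to x @ weight

    out = fused_linear_activation(x, weight, bias, activation='relu')
    print(out.shape)  # [3, 5]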