[Hackathon 5th No.46] API conversion 84-102 - part #6281

Merged (6 commits) on Nov 6, 2023
@@ -9,16 +9,16 @@ torch.distributed.all_gather(tensor_list, tensor, group=None, async_op=False)
### [paddle.distributed.all_gather](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/distributed/all_gather_cn.html)

```diff
- paddle.distributed.all_gather(tensor_list, tensor, group=0)
+ paddle.distributed.all_gather(tensor_list, tensor, group=0, sync_op=True)
```

PyTorch supports more parameters than Paddle; the details are as follows:

### Parameter Mapping

Before:

| PyTorch     | PaddlePaddle | Remarks |
| ----------- | ------------ | ------- |
| tensor_list | tensor_list  | The list of output tensors of the operation. |
| tensor      | tensor       | The input tensor of the operation. |
| group       | group        | The ID of the process group to work in. |
| async_op    | -            | Whether the operation is asynchronous; Paddle has no such parameter, and there is no transcription method yet. |

After:

| PyTorch     | PaddlePaddle | Remarks |
| ----------- | ------------ | ------- |
| tensor_list | tensor_list  | The list of output tensors of the operation. |
| tensor      | tensor       | The input tensor of the operation. |
| group       | group        | The ID of the process group to work in. |
| async_op    | sync_op      | torch's flag means "asynchronous" while Paddle's means "synchronous"; to convert, simply negate the value. |
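The negation rule in the async_op row can be sketched as a small hypothetical helper (the helper name and the plain-dict interface are illustrative assumptions, not part of either library; real calls additionally require an initialized process group):

```python
# Hypothetical helper sketching the torch -> paddle keyword rewrite:
# torch's async_op ("is this call asynchronous?") becomes paddle's
# sync_op ("is this call synchronous?"), so the value is negated.

def convert_all_gather_kwargs(torch_kwargs):
    """Rewrite torch.distributed.all_gather kwargs for paddle.distributed.all_gather."""
    paddle_kwargs = dict(torch_kwargs)
    async_op = paddle_kwargs.pop("async_op", False)  # torch default: False
    paddle_kwargs["sync_op"] = not async_op          # paddle default: True
    return paddle_kwargs

print(convert_all_gather_kwargs({"group": None, "async_op": True}))
# -> {'group': None, 'sync_op': False}
```

Note that the defaults line up: omitting `async_op` in torch (blocking call) maps to the Paddle default `sync_op=True` (also blocking).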
@@ -9,7 +9,7 @@ torch.distributed.all_reduce(tensor, op=<torch.distributed.distributed_c10d.Redu
### [paddle.distributed.all_reduce](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/distributed/all_reduce_cn.html)

```diff
- paddle.distributed.all_reduce(tensor, op=ReduceOp.SUM, group=0)
+ paddle.distributed.all_reduce(tensor, op=ReduceOp.SUM, group=0, sync_op=True)
```

PyTorch supports more parameters than Paddle; the details are as follows:
@@ -0,0 +1,21 @@
## [Parameters fully consistent] torch.distributed.get_backend

### [torch.distributed.get_backend](https://pytorch.org/docs/stable/distributed.html#torch.distributed.get_backend)

```python
torch.distributed.get_backend(group=None)
```

### [paddle.distributed.get_backend](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/distributed/get_backend_cn.html#get-backend)

```python
paddle.distributed.get_backend(group=None)
```

The functionality is consistent and the parameters are fully consistent; the details are as follows:

### Parameter Mapping

| PyTorch | PaddlePaddle | Remarks |
| ------- | ------------ | ----------------------------------- |
| group   | group        | The specified communication group. |
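Since the parameters are fully consistent, converting a call is a pure rename of the API path, keyword arguments included. A minimal sketch (the string-rewriting helper is purely illustrative, not a real tool in either library):

```python
# Illustrative only: with fully consistent parameters, the torch -> paddle
# conversion is a 1:1 rename of the module path; arguments pass through unchanged.

def convert_get_backend_call(source_line):
    """Rename a torch.distributed.get_backend call to its Paddle equivalent."""
    return source_line.replace("torch.distributed.get_backend",
                               "paddle.distributed.get_backend")

print(convert_get_backend_call("backend = torch.distributed.get_backend(group=None)"))
# -> backend = paddle.distributed.get_backend(group=None)
```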
@@ -1116,6 +1116,7 @@
| REFERENCE-MAPPING-ITEM(`torch.distributed.scatter`, https://github.com/PaddlePaddle/docs/tree/develop/docs/guides/model_convert/convert_from_pytorch/api_difference/distributed/torch.distributed.scatter.md) |
| REFERENCE-MAPPING-ITEM(`torch.distributed.scatter_object_list`, https://github.com/PaddlePaddle/docs/tree/develop/docs/guides/model_convert/convert_from_pytorch/api_difference/distributed/torch.distributed.scatter_object_list.md) |
| REFERENCE-MAPPING-ITEM(`torch.distributed.send`, https://github.com/PaddlePaddle/docs/tree/develop/docs/guides/model_convert/convert_from_pytorch/api_difference/distributed/torch.distributed.send.md) |
| [torch.distributed.gather](https://pytorch.org/docs/stable/distributed.html#torch.distributed.gather) | Functionality missing |


***Continuously updated...***