
Commit

Merge branch 'main' into 2.6-RC-TEST
svekars authored Jan 14, 2025
2 parents de1cb75 + f7d06b6 commit b08b70d
Showing 4 changed files with 19 additions and 9 deletions.
4 changes: 4 additions & 0 deletions advanced_source/cpp_custom_ops.rst
@@ -19,6 +19,10 @@ Custom C++ and CUDA Operators
* PyTorch 2.4 or later
* Basic understanding of C++ and CUDA programming

+.. note::
+
+   This tutorial will also work on AMD ROCm with no additional modifications.
+
PyTorch offers a large library of operators that work on Tensors (e.g. torch.add, torch.sum, etc.).
However, you may wish to bring a new custom operator to PyTorch. This tutorial demonstrates the
blessed path to authoring a custom operator written in C++/CUDA.
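As a rough illustration of the same idea in pure Python (this sketch is not part of
the tutorial, which covers the C++/CUDA route; the operator name ``mylib::my_sin`` is
made up), PyTorch 2.4+ also exposes ``torch.library.custom_op``:

.. code-block:: python

    import torch

    # Hypothetical operator name, used only for illustration.
    @torch.library.custom_op("mylib::my_sin", mutates_args=())
    def my_sin(x: torch.Tensor) -> torch.Tensor:
        # Pure-Python implementation; the tutorial registers a C++/CUDA kernel instead.
        return torch.sin(x)

    x = torch.randn(3)
    print(my_sin(x))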
8 changes: 4 additions & 4 deletions advanced_source/pendulum.py
@@ -33,9 +33,9 @@
In the process, we will touch three crucial components of TorchRL:
-* `environments <https://pytorch.org/rl/reference/envs.html>`__
-* `transforms <https://pytorch.org/rl/reference/envs.html#transforms>`__
-* `models (policy and value function) <https://pytorch.org/rl/reference/modules.html>`__
+* `environments <https://pytorch.org/rl/stable/reference/envs.html>`__
+* `transforms <https://pytorch.org/rl/stable/reference/envs.html#transforms>`__
+* `models (policy and value function) <https://pytorch.org/rl/stable/reference/modules.html>`__
"""

@@ -384,7 +384,7 @@ def _reset(self, tensordict):
# convenient shortcuts to the content of the output and input spec containers.
#
# TorchRL offers multiple :class:`~torchrl.data.TensorSpec`
-# `subclasses <https://pytorch.org/rl/reference/data.html#tensorspec>`_ to
+# `subclasses <https://pytorch.org/rl/stable/reference/data.html#tensorspec>`_ to
# encode the environment's input and output characteristics.
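#
# As a rough sketch (not part of this file; the exact class names and keyword
# arguments depend on the installed TorchRL release), a pendulum-like
# observation spec could be declared along these lines:
#
# .. code-block:: python
#
#     import torch
#     from torchrl.data import BoundedTensorSpec, CompositeSpec
#
#     # Scalar angle in [-pi, pi] and angular velocity in [-8, 8];
#     # CompositeSpec groups the per-key specs into a single container.
#     observation_spec = CompositeSpec(
#         th=BoundedTensorSpec(low=-torch.pi, high=torch.pi, shape=(), dtype=torch.float32),
#         thdot=BoundedTensorSpec(low=-8.0, high=8.0, shape=(), dtype=torch.float32),
#         shape=(),
#     )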
#
# Specs shape
4 changes: 2 additions & 2 deletions beginner_source/blitz/autograd_tutorial.py
@@ -191,15 +191,15 @@
# .. math::
#
#
-# J^{T}\cdot \vec{v} = m \cdot \left(\begin{array}{ccc}
+# J^{T}\cdot \vec{v} = \left(\begin{array}{ccc}
# \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}}\\
# \vdots & \ddots & \vdots\\
# \frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
# \end{array}\right)\left(\begin{array}{c}
# \frac{\partial l}{\partial y_{1}}\\
# \vdots\\
# \frac{\partial l}{\partial y_{m}}
-# \end{array}\right) = m \cdot \left(\begin{array}{c}
+# \end{array}\right) = \left(\begin{array}{c}
# \frac{\partial l}{\partial x_{1}}\\
# \vdots\\
# \frac{\partial l}{\partial x_{n}}
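#
# A minimal runnable sketch of this vector-Jacobian product (the function and
# vector below are chosen only for illustration):
#
# .. code-block:: python
#
#     import torch
#
#     x = torch.randn(3, requires_grad=True)
#     y = x * 2                              # vector-valued function of x
#     v = torch.tensor([0.1, 1.0, 0.0001])   # external gradient dl/dy
#     y.backward(v)                          # computes J^T . v into x.grad
#     print(x.grad)                          # here simply 2 * v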
12 changes: 9 additions & 3 deletions intermediate_source/dist_tuto.rst
@@ -47,6 +47,7 @@ the following template.
"""run.py:"""
#!/usr/bin/env python
import os
+import sys
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
@@ -66,8 +67,12 @@ the following template.
if __name__ == "__main__":
    world_size = 2
    processes = []
-   mp.set_start_method("spawn")
+   if "google.colab" in sys.modules:
+       print("Running in Google Colab")
+       mp.get_context("spawn")
+   else:
+       mp.set_start_method("spawn")
    for rank in range(world_size):
        p = mp.Process(target=init_process, args=(rank, world_size, run))
        p.start()
        processes.append(p)
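
The template assumes an ``init_process`` helper and a ``run`` function defined
elsewhere in the tutorial. A minimal sketch of such a helper (the address, port,
and ``gloo`` backend here are illustrative defaults, not something this commit
prescribes) could look like::

    def init_process(rank, world_size, fn, backend="gloo"):
        """Set up the default process group, then hand control to ``fn``."""
        os.environ["MASTER_ADDR"] = "127.0.0.1"
        os.environ["MASTER_PORT"] = "29500"
        dist.init_process_group(backend, rank=rank, world_size=world_size)
        fn(rank, world_size)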
@@ -156,7 +161,8 @@ we should not modify the sent tensor nor access the received tensor before ``req
In other words,

- writing to ``tensor`` after ``dist.isend()`` will result in undefined behaviour.
-- reading from ``tensor`` after ``dist.irecv()`` will result in undefined behaviour.
+- reading from ``tensor`` after ``dist.irecv()`` will result in undefined
+  behaviour until ``req.wait()`` has been executed.

However, after ``req.wait()``
has been executed we are guaranteed that the communication took place,
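
To make the contract concrete, a non-blocking exchange between two ranks could
look like the following sketch (``run`` is the per-process function handed to
``init_process``; shapes and values are illustrative)::

    def run(rank, world_size):
        tensor = torch.zeros(1)
        if rank == 0:
            tensor += 1
            req = dist.isend(tensor=tensor, dst=1)   # returns immediately
        else:
            req = dist.irecv(tensor=tensor, src=0)   # returns immediately
        # Neither write to (rank 0) nor read from (rank 1) ``tensor`` here.
        req.wait()                                    # communication is now complete
        print(f"Rank {rank} has data {tensor[0]}")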
