Commit 965bcab

Fix typos in docstrings (pytorch#7858)
1 parent a7501e1 commit 965bcab

11 files changed: +18 -18 lines changed


cmake/iOS.cmake

+3 -3

@@ -10,11 +10,11 @@
 # SIMULATOR - used to build for the Simulator platforms, which have an x86 arch.
 #
 # CMAKE_IOS_DEVELOPER_ROOT = automatic(default) or /path/to/platform/Developer folder
-# By default this location is automatcially chosen based on the IOS_PLATFORM value above.
+# By default this location is automatically chosen based on the IOS_PLATFORM value above.
 # If set manually, it will override the default location and force the user of a particular Developer Platform
 #
 # CMAKE_IOS_SDK_ROOT = automatic(default) or /path/to/platform/Developer/SDKs/SDK folder
-# By default this location is automatcially chosen based on the CMAKE_IOS_DEVELOPER_ROOT value.
+# By default this location is automatically chosen based on the CMAKE_IOS_DEVELOPER_ROOT value.
 # In this case it will always be the most up-to-date SDK found in the CMAKE_IOS_DEVELOPER_ROOT path.
 # If set manually, this will force the use of a specific SDK version

@@ -100,7 +100,7 @@ if(IOS_DEPLOYMENT_TARGET)
   set(XCODE_IOS_PLATFORM_VERSION_FLAGS "-m${XCODE_IOS_PLATFORM}-version-min=${IOS_DEPLOYMENT_TARGET}")
 endif()

-# Hidden visibilty is required for cxx on iOS
+# Hidden visibility is required for cxx on iOS
 set(CMAKE_C_FLAGS_INIT "${XCODE_IOS_PLATFORM_VERSION_FLAGS}")
 set(CMAKE_CXX_FLAGS_INIT "${XCODE_IOS_PLATFORM_VERSION_FLAGS} -fvisibility-inlines-hidden")

docs/source/models/fcos.rst

+1 -1

@@ -12,7 +12,7 @@ Model builders
 --------------

 The following model builders can be used to instantiate a FCOS model, with or
-without pre-trained weights. All the model buidlers internally rely on the
+without pre-trained weights. All the model builders internally rely on the
 ``torchvision.models.detection.fcos.FCOS`` base class. Please refer to the `source code
 <https://github.com/pytorch/vision/blob/main/torchvision/models/detection/fcos.py>`_ for
 more details about this class.

docs/source/models/retinanet.rst

+1 -1

@@ -12,7 +12,7 @@ Model builders
 --------------

 The following model builders can be used to instantiate a RetinaNet model, with or
-without pre-trained weights. All the model buidlers internally rely on the
+without pre-trained weights. All the model builders internally rely on the
 ``torchvision.models.detection.retinanet.RetinaNet`` base class. Please refer to the `source code
 <https://github.com/pytorch/vision/blob/main/torchvision/models/detection/retinanet.py>`_ for
 more details about this class.

docs/source/models/vgg.rst

+1 -1

@@ -11,7 +11,7 @@ Model builders
 --------------

 The following model builders can be used to instantiate a VGG model, with or
-without pre-trained weights. All the model buidlers internally rely on the
+without pre-trained weights. All the model builders internally rely on the
 ``torchvision.models.vgg.VGG`` base class. Please refer to the `source code
 <https://github.com/pytorch/vision/blob/main/torchvision/models/vgg.py>`_ for
 more details about this class.
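
The three model-docs pages touched above all describe the same pattern: a model builder function instantiates the architecture, optionally with pre-trained weights. A minimal sketch of that pattern, for illustration only (the weights enums are the standard torchvision ones, not part of this diff):

import torch
from torchvision.models import vgg16, VGG16_Weights
from torchvision.models.detection import retinanet_resnet50_fpn, RetinaNet_ResNet50_FPN_Weights

# Classification builder: pre-trained weights, or weights=None for random initialization
model = vgg16(weights=VGG16_Weights.DEFAULT)
model.eval()

# Detection builders (RetinaNet, FCOS, ...) follow the same convention
detector = retinanet_resnet50_fpn(weights=RetinaNet_ResNet50_FPN_Weights.DEFAULT)
detector.eval()

with torch.no_grad():
    # In eval mode, detection models take a list of CHW images and return a list of dicts
    predictions = detector([torch.rand(3, 320, 320)])
print(predictions[0].keys())  # boxes, scores, labels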

gallery/others/plot_optical_flow.py

+2 -2

@@ -134,7 +134,7 @@ def preprocess(img1_batch, img2_batch):
 # (N, 2, H, W) batch of predicted flows that corresponds to a given "iteration"
 # in the model. For more details on the iterative nature of the model, please
 # refer to the `original paper <https://arxiv.org/abs/2003.12039>`_. Here, we
-# are only interested in the final predicted flows (they are the most acccurate
+# are only interested in the final predicted flows (they are the most accurate
 # ones), so we will just retrieve the last item in the list.
 #
 # As described above, a flow is a tensor with dimensions (2, H, W) (or (N, 2, H,

@@ -151,7 +151,7 @@ def preprocess(img1_batch, img2_batch):
 # %%
 # Visualizing predicted flows
 # ---------------------------
-# Torchvision provides the :func:`~torchvision.utils.flow_to_image` utlity to
+# Torchvision provides the :func:`~torchvision.utils.flow_to_image` utility to
 # convert a flow into an RGB image. It also supports batches of flows.
 # each "direction" in the flow will be mapped to a given RGB color. In the
 # images below, pixels with similar colors are assumed by the model to be moving
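
Since the corrected line is about flow_to_image, here is a small sketch of how that utility is typically called (the random tensor is only a stand-in for a real RAFT prediction):

import torch
from torchvision.utils import flow_to_image

# Stand-in for the last element of the model's list of predicted flows: shape (N, 2, H, W)
predicted_flows = torch.randn(4, 2, 128, 128)

flow_imgs = flow_to_image(predicted_flows)  # uint8 RGB images, shape (N, 3, H, W)
print(flow_imgs.shape, flow_imgs.dtype)     # torch.Size([4, 3, 128, 128]) torch.uint8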

gallery/v2_transforms/plot_custom_transforms.py

+1 -1

@@ -84,7 +84,7 @@ def forward(self, img, bboxes, label):  # we assume inputs are always structured
 # In the section above, we have assumed that you already know the structure of
 # your inputs and that you're OK with hard-coding this expected structure in
 # your code. If you want your custom transforms to be as flexible as possible,
-# this can be a bit limitting.
+# this can be a bit limiting.
 #
 # A key feature of the builtin Torchvision V2 transforms is that they can accept
 # arbitrary input structure and return the same structure as output (with
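
The flexibility mentioned in this snippet can be sketched roughly as follows. This is a hypothetical transform, not torchvision code, and it relies on the semi-private ``_transform`` hook that the built-in ``torchvision.transforms.v2.Transform`` subclasses use; treat it purely as an illustration:

import torch
from torchvision import datapoints
from torchvision.transforms import v2

class AddNoiseToImages(v2.Transform):
    # Hypothetical example: perturb images, leave every other input untouched.
    def _transform(self, inpt, params):
        # Called once per leaf of the (arbitrarily nested) input structure.
        if isinstance(inpt, (datapoints.Image, datapoints.Video)):
            return inpt + 0.05 * torch.randn_like(inpt)
        return inpt  # labels, boxes, plain tensors, ... pass through unchanged

transform = AddNoiseToImages()
sample = {"image": datapoints.Image(torch.rand(3, 64, 64)), "label": torch.tensor(3)}
out = transform(sample)  # arbitrary structure in, same structure out
print(type(out), out.keys())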

test/test_models.py

+1 -1

@@ -1037,7 +1037,7 @@ def test_raft(model_fn, scripted):
     torch.manual_seed(0)

     # We need very small images, otherwise the pickle size would exceed the 50KB
-    # As a resut we need to override the correlation pyramid to not downsample
+    # As a result we need to override the correlation pyramid to not downsample
     # too much, otherwise we would get nan values (effective H and W would be
     # reduced to 1)
     corr_block = models.optical_flow.raft.CorrBlock(num_levels=2, radius=2)

torchvision/datapoints/_dataset_wrapper.py

+3 -3

@@ -37,17 +37,17 @@ def wrap_dataset_for_transforms_v2(dataset, target_keys=None):
     * :class:`~torchvision.datasets.CocoDetection`: Instead of returning the target as list of dicts, the wrapper
       returns a dict of lists. In addition, the key-value-pairs ``"boxes"`` (in ``XYXY`` coordinate format),
       ``"masks"`` and ``"labels"`` are added and wrap the data in the corresponding ``torchvision.datapoints``.
-      The original keys are preserved. If ``target_keys`` is ommitted, returns only the values for the
+      The original keys are preserved. If ``target_keys`` is omitted, returns only the values for the
       ``"image_id"``, ``"boxes"``, and ``"labels"``.
     * :class:`~torchvision.datasets.VOCDetection`: The key-value-pairs ``"boxes"`` and ``"labels"`` are added to
       the target and wrap the data in the corresponding ``torchvision.datapoints``. The original keys are
-      preserved. If ``target_keys`` is ommitted, returns only the values for the ``"boxes"`` and ``"labels"``.
+      preserved. If ``target_keys`` is omitted, returns only the values for the ``"boxes"`` and ``"labels"``.
     * :class:`~torchvision.datasets.CelebA`: The target for ``target_type="bbox"`` is converted to the ``XYXY``
       coordinate format and wrapped into a :class:`~torchvision.datapoints.BoundingBoxes` datapoint.
     * :class:`~torchvision.datasets.Kitti`: Instead returning the target as list of dicts, the wrapper returns a
       dict of lists. In addition, the key-value-pairs ``"boxes"`` and ``"labels"`` are added and wrap the data
       in the corresponding ``torchvision.datapoints``. The original keys are preserved. If ``target_keys`` is
-      ommitted, returns only the values for the ``"boxes"`` and ``"labels"``.
+      omitted, returns only the values for the ``"boxes"`` and ``"labels"``.
     * :class:`~torchvision.datasets.OxfordIIITPet`: The target for ``target_type="segmentation"`` is wrapped into a
       :class:`~torchvision.datapoints.Mask` datapoint.
     * :class:`~torchvision.datasets.Cityscapes`: The target for ``target_type="semantic"`` is wrapped into a
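
A minimal usage sketch of the wrapper documented above, assuming it is imported from ``torchvision.datapoints`` (where this file lives); the COCO paths are placeholders and the chosen ``target_keys`` are just one possible selection:

from torchvision import datapoints
from torchvision.datasets import CocoDetection

# Placeholder paths -- point these at a real COCO-style layout
dataset = CocoDetection(root="path/to/images", annFile="path/to/annotations.json")

# After wrapping, targets come back as a dict of lists whose values are datapoints
# (BoundingBoxes, Mask, ...) that the v2 transforms know how to handle.
dataset = datapoints.wrap_dataset_for_transforms_v2(dataset, target_keys=("boxes", "labels", "masks"))

img, target = dataset[0]
print(sorted(target.keys()))  # ['boxes', 'labels', 'masks']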

torchvision/datasets/_stereo_matching.py

+1 -1

@@ -796,7 +796,7 @@ def _read_disparity(self, file_path: str) -> Tuple[np.ndarray, None]:
         # in order to extract disparity from depth maps
         camera_settings_path = Path(file_path).parent / "_camera_settings.json"
         with open(camera_settings_path, "r") as f:
-            # inverse of depth-from-disparity equation: depth = (baseline * focal) / (disparity * pixel_constatnt)
+            # inverse of depth-from-disparity equation: depth = (baseline * focal) / (disparity * pixel_constant)
             intrinsics = json.load(f)
             focal = intrinsics["camera_settings"][0]["intrinsic_settings"]["fx"]
             baseline, pixel_constant = 6, 100  # pixel constant is inverted
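
The corrected comment encodes the relation depth = (baseline * focal) / (disparity * pixel_constant), which the surrounding code inverts to recover disparity from a depth map. A tiny worked example (focal and depth are made-up numbers; only the constants 6 and 100 come from the snippet above):

# depth = (baseline * focal) / (disparity * pixel_constant)
# <=> disparity = (baseline * focal) / (depth * pixel_constant)
baseline, pixel_constant = 6, 100  # as in the snippet above
focal = 768.0                      # hypothetical fx read from _camera_settings.json
depth = 250.0                      # hypothetical depth value

disparity = (baseline * focal) / (depth * pixel_constant)
print(disparity)  # 0.18432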

torchvision/io/video_reader.py

+3 -3

@@ -91,14 +91,14 @@ class VideoReader:

     Each stream descriptor consists of two parts: stream type (e.g. 'video') and
     a unique stream id (which are determined by the video encoding).
-    In this way, if the video contaner contains multiple
+    In this way, if the video container contains multiple
     streams of the same type, users can access the one they want.
     If only stream type is passed, the decoder auto-detects first stream of that type.

     Args:
         src (string, bytes object, or tensor): The media source.
             If string-type, it must be a file path supported by FFMPEG.
-            If bytes should be an in memory representatin of a file supported by FFMPEG.
+            If bytes, should be an in-memory representation of a file supported by FFMPEG.
             If Tensor, it is interpreted internally as byte buffer.
             It must be one-dimensional, of type ``torch.uint8``.

@@ -279,7 +279,7 @@ def set_current_stream(self, stream: str) -> bool:
         Currently available stream types include ``['video', 'audio']``.
         Each descriptor consists of two parts: stream type (e.g. 'video') and
         a unique stream id (which are determined by video encoding).
-        In this way, if the video contaner contains multiple
+        In this way, if the video container contains multiple
         streams of the same type, users can access the one they want.
         If only stream type is passed, the decoder auto-detects first stream
         of that type and returns it.
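
A short sketch of the stream-descriptor usage this docstring describes ("video.mp4" is a placeholder path):

from torchvision.io import VideoReader

reader = VideoReader("video.mp4", "video")  # stream type alone: auto-detect the first video stream
reader.set_current_stream("video:0")        # type + id: pick a specific stream in the container

frame = next(reader)                        # dict with "data" (frame tensor) and "pts" (seconds)
print(frame["data"].shape, frame["pts"])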

torchvision/transforms/v2/_geometry.py

+1 -1

@@ -1023,7 +1023,7 @@ class ElasticTransform(Transform):

    .. note::
        Implementation to transform bounding boxes is approximative (not exact).
-        We construct an approximation of the inverse grid as ``inverse_grid = idenity - displacement``.
+        We construct an approximation of the inverse grid as ``inverse_grid = identity - displacement``.
        This is not an exact inverse of the grid used to transform images, i.e. ``grid = identity + displacement``.
        Our assumption is that ``displacement * displacement`` is small and can be ignored.
        Large displacements would lead to large errors in the approximation.
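
A quick numeric sanity check of the approximation described in this note (a 1-D toy, not torchvision code): composing ``grid = identity + displacement`` with ``inverse_grid = identity - displacement`` leaves an error on the order of ``displacement * displacement``.

import torch

x = torch.linspace(0, 1, steps=1000)

def displacement(p):
    return 0.01 * torch.sin(2 * torch.pi * p)  # small, smooth toy displacement field

forward = x + displacement(x)                  # grid = identity + displacement
round_trip = forward - displacement(forward)   # then apply inverse_grid = identity - displacement

print(f"{(round_trip - x).abs().max().item():.1e}")  # ~3e-4: second order in the displacement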
