Commit 305330f

Correct typos in Classification README.md (#8392)
Co-authored-by: Nicolas Hug <nh.nicolas.hug@gmail.com>
1 parent 89d2b38 commit 305330f

File tree: 1 file changed (+6 −6 lines)


references/classification/README.md (+6 −6)
@@ -120,7 +120,7 @@ Here `$MODEL` is one of `efficientnet_v2_s` and `efficientnet_v2_m`.
 Note that the Small variant had a `$TRAIN_SIZE` of `300` and a `$EVAL_SIZE` of `384`, while the Medium `384` and `480` respectively.
 
 Note that the above command corresponds to training on a single node with 8 GPUs.
-For generatring the pre-trained weights, we trained with 4 nodes, each with 8 GPUs (for a total of 32 GPUs),
+For generating the pre-trained weights, we trained with 4 nodes, each with 8 GPUs (for a total of 32 GPUs),
 and `--batch_size 32`.
 
 The weights of the Large variant are ported from the original paper rather than trained from scratch. See the `EfficientNet_V2_L_Weights` entry for their exact preprocessing transforms.
@@ -167,7 +167,7 @@ torchrun --nproc_per_node=8 train.py\
 ```
 
 Note that the above command corresponds to training on a single node with 8 GPUs.
-For generatring the pre-trained weights, we trained with 8 nodes, each with 8 GPUs (for a total of 64 GPUs),
+For generating the pre-trained weights, we trained with 8 nodes, each with 8 GPUs (for a total of 64 GPUs),
 and `--batch_size 64`.
 
 #### vit_b_32
@@ -180,7 +180,7 @@ torchrun --nproc_per_node=8 train.py\
 ```
 
 Note that the above command corresponds to training on a single node with 8 GPUs.
-For generatring the pre-trained weights, we trained with 2 nodes, each with 8 GPUs (for a total of 16 GPUs),
+For generating the pre-trained weights, we trained with 2 nodes, each with 8 GPUs (for a total of 16 GPUs),
 and `--batch_size 256`.
 
 #### vit_l_16
@@ -193,7 +193,7 @@ torchrun --nproc_per_node=8 train.py\
 ```
 
 Note that the above command corresponds to training on a single node with 8 GPUs.
-For generatring the pre-trained weights, we trained with 2 nodes, each with 8 GPUs (for a total of 16 GPUs),
+For generating the pre-trained weights, we trained with 2 nodes, each with 8 GPUs (for a total of 16 GPUs),
 and `--batch_size 64`.
 
 #### vit_l_32
@@ -206,7 +206,7 @@ torchrun --nproc_per_node=8 train.py\
 ```
 
 Note that the above command corresponds to training on a single node with 8 GPUs.
-For generatring the pre-trained weights, we trained with 8 nodes, each with 8 GPUs (for a total of 64 GPUs),
+For generating the pre-trained weights, we trained with 8 nodes, each with 8 GPUs (for a total of 64 GPUs),
 and `--batch_size 64`.
 
@@ -221,7 +221,7 @@ torchrun --nproc_per_node=8 train.py\
 Here `$MODEL` is one of `convnext_tiny`, `convnext_small`, `convnext_base` and `convnext_large`. Note that each variant had its `--val-resize-size` optimized in a post-training step, see their `Weights` entry for their exact value.
 
 Note that the above command corresponds to training on a single node with 8 GPUs.
-For generatring the pre-trained weights, we trained with 2 nodes, each with 8 GPUs (for a total of 16 GPUs),
+For generating the pre-trained weights, we trained with 2 nodes, each with 8 GPUs (for a total of 16 GPUs),
 and `--batch_size 64`.
 