README.md (+12, -12)
@@ -55,9 +55,9 @@
  <table>
  <tr>
-   <td><img src="http://remicadene.com/assets/gif/aloha_act.gif" width="100%" alt="ACT policy on ALOHA env"/></td>
-   <td><img src="http://remicadene.com/assets/gif/simxarm_tdmpc.gif" width="100%" alt="TDMPC policy on SimXArm env"/></td>
-   <td><img src="http://remicadene.com/assets/gif/pusht_diffusion.gif" width="100%" alt="Diffusion policy on PushT env"/></td>
+   <td><img src="media/gym/aloha_act.gif" width="100%" alt="ACT policy on ALOHA env"/></td>
+   <td><img src="media/gym/simxarm_tdmpc.gif" width="100%" alt="TDMPC policy on SimXArm env"/></td>
+   <td><img src="media/gym/pusht_diffusion.gif" width="100%" alt="Diffusion policy on PushT env"/></td>
  </tr>
  <tr>
    <td align="center">ACT policy on ALOHA env</td>
@@ -144,7 +144,7 @@ wandb login
  ### Visualize datasets

- Check out [example 1](./examples/1_load_lerobot_dataset.py) that illustrates how to use our dataset class which automatically download data from the Hugging Face hub.
+ Check out [example 1](./examples/1_load_lerobot_dataset.py) that illustrates how to use our dataset class which automatically downloads data from the Hugging Face hub.

  You can also locally visualize episodes from a dataset on the hub by executing our script from the command line:

- or from a dataset in a local folder with the root `DATA_DIR` environment variable (in the following case the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`)
+ or from a dataset in a local folder with the `root` option and the `--local-files-only` flag (in the following case the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`)
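For the local case, here is a minimal sketch of what the invocation might look like, assuming the visualization script is `lerobot/scripts/visualize_dataset.py` and that it accepts `--root` and `--local-files-only` alongside the usual `--repo-id`/`--episode-index` arguments (the dataset is then expected under `./my_local_data_dir/lerobot/pusht`):

```bash
# Visualize episode 0 of a dataset stored locally, without contacting the hub.
python lerobot/scripts/visualize_dataset.py \
    --repo-id lerobot/pusht \
    --episode-index 0 \
    --root ./my_local_data_dir \
    --local-files-only 1
```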
  A `LeRobotDataset` is serialised using several widespread file formats for each of its parts, namely:
  - hf_dataset stored using Hugging Face datasets library serialization to parquet
- - videos are stored in mp4 format to save space or png files
- - episode_data_index saved using `safetensor` tensor serialization format
- - stats saved using `safetensor` tensor serialization format
- - info are saved using JSON
+ - videos are stored in mp4 format to save space
+ - metadata are stored in plain json/jsonl files

- Dataset can be uploaded/downloaded from the HuggingFace hub seamlessly. To work on a local dataset, you can set the `DATA_DIR` environment variable to your root dataset folder as illustrated in the above section on dataset visualization.
+ Dataset can be uploaded/downloaded from the HuggingFace hub seamlessly. To work on a local dataset, you can use the `local_files_only` argument and specify its location with the `root` argument if it's not in the default `~/.cache/huggingface/lerobot` location.

  ### Evaluate a pretrained policy
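As a rough sketch of the local workflow described in the new paragraph, assuming `LeRobotDataset` is importable as in example 1 and accepts the `root` and `local_files_only` arguments mentioned above (the exact import path and signature may differ between versions):

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Default: fetch lerobot/pusht from the Hugging Face hub into
# ~/.cache/huggingface/lerobot on first use, then reuse the cache.
dataset = LeRobotDataset("lerobot/pusht")

# Local only: point `root` at your local folder and skip the hub entirely.
local_dataset = LeRobotDataset(
    "lerobot/pusht",
    root="./my_local_data_dir",
    local_files_only=True,
)
print(len(local_dataset))  # dataset length (number of samples)
```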
@@ -280,7 +280,7 @@ To use wandb for logging training and evaluation curves, make sure you've run `w
     wandb.enable=true
  ```

- A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](https://github.com/huggingface/lerobot/blob/main/examples/4_train_policy_with_script.md#typical-logs-and-metrics) for the explaination of some commonly used metrics in logs.
+ A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](https://github.com/huggingface/lerobot/blob/main/examples/4_train_policy_with_script.md#typical-logs-and-metrics) for the explanation of some commonly used metrics in logs.
To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
  ```bash
- DATA_DIR=data python lerobot/scripts/train.py \
+ python lerobot/scripts/train.py \
    dataset_repo_id=${HF_USER}/so100_test \
    policy=act_so100_real \
    env=so100_real \
@@ -248,7 +245,6 @@ Let's explain it:
  3. We provided an environment as argument with `env=so100_real`. This loads configurations from [`lerobot/configs/env/so100_real.yaml`](../lerobot/configs/env/so100_real.yaml).
  4. We provided `device=cuda` since we are training on an Nvidia GPU, but you can also use `device=mps` if you are using a Mac with Apple silicon, or `device=cpu` otherwise.
  5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
- 6. We added `DATA_DIR=data` to access your dataset stored in your local `data` directory. If you don't provide `DATA_DIR`, your dataset will be downloaded from the Hugging Face hub to your cache folder `$HOME/.cache/huggingface`. In future versions of `lerobot`, both directories will be in sync.

  Training should take several hours. You will find checkpoints in `outputs/train/act_so100_test/checkpoints`.
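Putting the arguments above together, a sketch of what the full command could look like, using only the options discussed in this section (additional arguments such as an output directory may be needed in practice):

```bash
python lerobot/scripts/train.py \
    dataset_repo_id=${HF_USER}/so100_test \
    policy=act_so100_real \
    env=so100_real \
    device=cuda \
    wandb.enable=true
```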
@@ -259,7 +255,6 @@ You can use the `record` function from [`lerobot/scripts/control_robot.py`](../l
To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
  ```bash
- DATA_DIR=data python lerobot/scripts/train.py \
+ python lerobot/scripts/train.py \
    dataset_repo_id=${HF_USER}/moss_test \
    policy=act_moss_real \
    env=moss_real \
@@ -248,7 +245,6 @@ Let's explain it:
  3. We provided an environment as argument with `env=moss_real`. This loads configurations from [`lerobot/configs/env/moss_real.yaml`](../lerobot/configs/env/moss_real.yaml).
  4. We provided `device=cuda` since we are training on an Nvidia GPU, but you can also use `device=mps` if you are using a Mac with Apple silicon, or `device=cpu` otherwise.
  5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
- 6. We added `DATA_DIR=data` to access your dataset stored in your local `data` directory. If you don't provide `DATA_DIR`, your dataset will be downloaded from the Hugging Face hub to your cache folder `$HOME/.cache/huggingface`. In future versions of `lerobot`, both directories will be in sync.

  Training should take several hours. You will find checkpoints in `outputs/train/act_moss_test/checkpoints`.
@@ -259,7 +255,6 @@ You can use the `record` function from [`lerobot/scripts/control_robot.py`](../l