
Commit 913bcab

Authored by villekuosmanen, HiroIshida, alexander-soare, ohharsen, and ivelin

chore: sync fork with upstream (#3)
* feat: enable to use multiple rgb encoders per camera in diffusion policy (huggingface#484)
  Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
* Fix config file (huggingface#495)
* fix: broken images and a few minor typos in README (huggingface#499)
  Signed-off-by: ivelin <ivelin117@gmail.com>
* Add support for Windows (huggingface#494)
* bug causes error uploading to huggingface, unicode issue on windows. (huggingface#450)
* Add distinction between two unallowed cases in name check "eval_" (huggingface#489)
* Rename deprecated argument (temporal_ensemble_momentum) (huggingface#490)
* Dataset v2.0 (huggingface#461)
  Co-authored-by: Remi <remi.cadene@huggingface.co>
* Refactor OpenX (huggingface#505)
* Fix missing local_files_only in record/replay (huggingface#540)
  Co-authored-by: Simon Alibert <alibert.sim@gmail.com>
* Control simulated robot with real leader (huggingface#514)
  Co-authored-by: Remi <remi.cadene@huggingface.co>
* Update 7_get_started_with_real_robot.md (huggingface#559)
* LerobotDataset pushable to HF from any folder (huggingface#563)
* Fix example 6 (huggingface#572)
* fixing typo from 'teloperation' to 'teleoperation' (huggingface#566)
* [vizualizer] for LeRobodDataset V2 (huggingface#576)
* Fix broken `create_lerobot_dataset_card` (huggingface#590)
* feat(act): support training end of episode token to ACT model
* changes
* feat(arx): add arx arm (#2)
* feat(arx): support arx arm
* changes
* changes
* changes
* changes
* pass pipes explicitly
* changes
* us ndarray over a pipe
* changes
* changes
* replay basically works
* patch arx sdk
* changes
* support cameras in arx5
* rename to arx5
* kind of works
* changes
* changes
* changes
* various changes
* changes
* revert a few changes
* changes
* changes
* changes
* changes
* changes
* changes
* changes
* changes
* changes
* remove TODO
* allow multiple tasks

---------

Signed-off-by: ivelin <ivelin117@gmail.com>
Co-authored-by: Hirokazu Ishida <38597814+HiroIshida@users.noreply.github.com>
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
Co-authored-by: Arsen Ohanyan <arsenohanyan@gmail.com>
Co-authored-by: Ivelin Ivanov <ivelin117@gmail.com>
Co-authored-by: Daniel Ritchie <daniel@brainwavecollective.ai>
Co-authored-by: resolver101757 <kelster101757@hotmail.com>
Co-authored-by: Jannik Grothusen <56967823+J4nn1K@users.noreply.github.com>
Co-authored-by: KasparSLT <133706781+KasparSLT@users.noreply.github.com>
Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com>
Co-authored-by: Remi <remi.cadene@huggingface.co>
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
Co-authored-by: Simon Alibert <alibert.sim@gmail.com>
Co-authored-by: berjaoui <berjaoui@gmail.com>
Co-authored-by: Claudio Coppola <Claudiocoppola90@gmail.com>
Co-authored-by: s1lent4gnt <kmeftah.khalil@gmail.com>
Co-authored-by: Mishig <dmishig@gmail.com>
Co-authored-by: Eugene Mironov <helper2424@gmail.com>
1 parent 458d41e commit 913bcab

88 files changed: +7287 −4323 lines


.github/PULL_REQUEST_TEMPLATE.md (+1 −1)

````diff
@@ -21,7 +21,7 @@ Provide a simple way for the reviewer to try out your changes.
 
 Examples:
 ```bash
-DATA_DIR=tests/data pytest -sx tests/test_stuff.py::test_something
+pytest -sx tests/test_stuff.py::test_something
 ```
 ```bash
 python lerobot/scripts/train.py --some.option=true
````

.github/workflows/nightly-tests.yml (+1 −7)

```diff
@@ -7,10 +7,8 @@ on:
   schedule:
     - cron: "0 2 * * *"
 
-env:
-  DATA_DIR: tests/data
+# env:
 #   SLACK_API_TOKEN: ${{ secrets.SLACK_API_TOKEN }}
-
 jobs:
   run_all_tests_cpu:
     name: CPU
@@ -30,13 +28,9 @@ jobs:
     working-directory: /lerobot
     steps:
       - name: Tests
-        env:
-          DATA_DIR: tests/data
         run: pytest -v --cov=./lerobot --disable-warnings tests
 
       - name: Tests end-to-end
-        env:
-          DATA_DIR: tests/data
         run: make test-end-to-end
 
 
```

.github/workflows/test.yml (+36 −39)

```diff
@@ -29,7 +29,6 @@ jobs:
     name: Pytest
     runs-on: ubuntu-latest
     env:
-      DATA_DIR: tests/data
       MUJOCO_GL: egl
     steps:
       - uses: actions/checkout@v4
@@ -70,7 +69,6 @@ jobs:
     name: Pytest (minimal install)
     runs-on: ubuntu-latest
     env:
-      DATA_DIR: tests/data
       MUJOCO_GL: egl
     steps:
       - uses: actions/checkout@v4
@@ -103,40 +101,39 @@ jobs:
           -W ignore::UserWarning:gymnasium.utils.env_checker:247 \
           && rm -rf tests/outputs outputs
 
-
-  end-to-end:
-    name: End-to-end
-    runs-on: ubuntu-latest
-    env:
-      DATA_DIR: tests/data
-      MUJOCO_GL: egl
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          lfs: true # Ensure LFS files are pulled
-
-      - name: Install apt dependencies
-        # portaudio19-dev is needed to install pyaudio
-        run: |
-          sudo apt-get update && \
-          sudo apt-get install -y libegl1-mesa-dev portaudio19-dev
-
-      - name: Install poetry
-        run: |
-          pipx install poetry && poetry config virtualenvs.in-project true
-          echo "${{ github.workspace }}/.venv/bin" >> $GITHUB_PATH
-
-      - name: Set up Python 3.10
-        uses: actions/setup-python@v5
-        with:
-          python-version: "3.10"
-          cache: "poetry"
-
-      - name: Install poetry dependencies
-        run: |
-          poetry install --all-extras
-
-      - name: Test end-to-end
-        run: |
-          make test-end-to-end \
-          && rm -rf outputs
+  # TODO(aliberts, rcadene): redesign after v2 migration / removing hydra
+  # end-to-end:
+  #   name: End-to-end
+  #   runs-on: ubuntu-latest
+  #   env:
+  #     MUJOCO_GL: egl
+  #   steps:
+  #     - uses: actions/checkout@v4
+  #       with:
+  #         lfs: true # Ensure LFS files are pulled
+
+  #     - name: Install apt dependencies
+  #       # portaudio19-dev is needed to install pyaudio
+  #       run: |
+  #         sudo apt-get update && \
+  #         sudo apt-get install -y libegl1-mesa-dev portaudio19-dev
+
+  #     - name: Install poetry
+  #       run: |
+  #         pipx install poetry && poetry config virtualenvs.in-project true
+  #         echo "${{ github.workspace }}/.venv/bin" >> $GITHUB_PATH
+
+  #     - name: Set up Python 3.10
+  #       uses: actions/setup-python@v5
+  #       with:
+  #         python-version: "3.10"
+  #         cache: "poetry"
+
+  #     - name: Install poetry dependencies
+  #       run: |
+  #         poetry install --all-extras
+
+  #     - name: Test end-to-end
+  #       run: |
+  #         make test-end-to-end \
+  #         && rm -rf outputs
```

.pre-commit-config.yaml (+4 −4)

```diff
@@ -3,7 +3,7 @@ default_language_version:
   python: python3.10
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.6.0
+    rev: v5.0.0
     hooks:
       - id: check-added-large-files
       - id: debug-statements
@@ -14,11 +14,11 @@ repos:
       - id: end-of-file-fixer
       - id: trailing-whitespace
   - repo: https://github.com/asottile/pyupgrade
-    rev: v3.16.0
+    rev: v3.19.0
     hooks:
       - id: pyupgrade
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.5.2
+    rev: v0.8.2
     hooks:
       - id: ruff
         args: [--fix]
@@ -32,6 +32,6 @@ repos:
       - "--check"
       - "--no-update"
   - repo: https://github.com/gitleaks/gitleaks
-    rev: v8.18.4
+    rev: v8.21.2
     hooks:
       - id: gitleaks
```

CONTRIBUTING.md (+1 −1)

````diff
@@ -267,7 +267,7 @@ We use `pytest` in order to run the tests. From the root of the
 repository, here's how to run tests with `pytest` for the library:
 
 ```bash
-DATA_DIR="tests/data" python -m pytest -sv ./tests
+python -m pytest -sv ./tests
 ```
 
````

README.md (+12 −12)

````diff
@@ -55,9 +55,9 @@
 
 <table>
   <tr>
-  <td><img src="http://remicadene.com/assets/gif/aloha_act.gif" width="100%" alt="ACT policy on ALOHA env"/></td>
-  <td><img src="http://remicadene.com/assets/gif/simxarm_tdmpc.gif" width="100%" alt="TDMPC policy on SimXArm env"/></td>
-  <td><img src="http://remicadene.com/assets/gif/pusht_diffusion.gif" width="100%" alt="Diffusion policy on PushT env"/></td>
+  <td><img src="media/gym/aloha_act.gif" width="100%" alt="ACT policy on ALOHA env"/></td>
+  <td><img src="media/gym/simxarm_tdmpc.gif" width="100%" alt="TDMPC policy on SimXArm env"/></td>
+  <td><img src="media/gym/pusht_diffusion.gif" width="100%" alt="Diffusion policy on PushT env"/></td>
   </tr>
   <tr>
   <td align="center">ACT policy on ALOHA env</td>
@@ -144,7 +144,7 @@ wandb login
 
 ### Visualize datasets
 
-Check out [example 1](./examples/1_load_lerobot_dataset.py) that illustrates how to use our dataset class which automatically download data from the Hugging Face hub.
+Check out [example 1](./examples/1_load_lerobot_dataset.py) that illustrates how to use our dataset class which automatically downloads data from the Hugging Face hub.
 
 You can also locally visualize episodes from a dataset on the hub by executing our script from the command line:
 ```bash
@@ -153,10 +153,12 @@ python lerobot/scripts/visualize_dataset.py \
   --episode-index 0
 ```
 
-or from a dataset in a local folder with the root `DATA_DIR` environment variable (in the following case the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`)
+or from a dataset in a local folder with the `root` option and the `--local-files-only` (in the following case the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`)
 ```bash
-DATA_DIR='./my_local_data_dir' python lerobot/scripts/visualize_dataset.py \
+python lerobot/scripts/visualize_dataset.py \
   --repo-id lerobot/pusht \
+  --root ./my_local_data_dir \
+  --local-files-only 1 \
   --episode-index 0
 ```
 
@@ -208,12 +210,10 @@ dataset attributes:
 
 A `LeRobotDataset` is serialised using several widespread file formats for each of its parts, namely:
 - hf_dataset stored using Hugging Face datasets library serialization to parquet
-- videos are stored in mp4 format to save space or png files
-- episode_data_index saved using `safetensor` tensor serialization format
-- stats saved using `safetensor` tensor serialization format
-- info are saved using JSON
+- videos are stored in mp4 format to save space
+- metadata are stored in plain json/jsonl files
 
-Dataset can be uploaded/downloaded from the HuggingFace hub seamlessly. To work on a local dataset, you can set the `DATA_DIR` environment variable to your root dataset folder as illustrated in the above section on dataset visualization.
+Dataset can be uploaded/downloaded from the HuggingFace hub seamlessly. To work on a local dataset, you can use the `local_files_only` argument and specify its location with the `root` argument if it's not in the default `~/.cache/huggingface/lerobot` location.
 
 ### Evaluate a pretrained policy
 
@@ -280,7 +280,7 @@ To use wandb for logging training and evaluation curves, make sure you've run `w
 wandb.enable=true
 ```
 
-A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](https://github.com/huggingface/lerobot/blob/main/examples/4_train_policy_with_script.md#typical-logs-and-metrics) for the explaination of some commonly used metrics in logs.
+A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](https://github.com/huggingface/lerobot/blob/main/examples/4_train_policy_with_script.md#typical-logs-and-metrics) for the explanation of some commonly used metrics in logs.
 
 ![](media/wandb.png)
````

benchmarks/video/run_video_benchmark.py (+1 −1)

```diff
@@ -266,7 +266,7 @@ def benchmark_encoding_decoding(
     )
 
     ep_num_images = dataset.episode_data_index["to"][0].item()
-    width, height = tuple(dataset[0][dataset.camera_keys[0]].shape[-2:])
+    width, height = tuple(dataset[0][dataset.meta.camera_keys[0]].shape[-2:])
     num_pixels = width * height
     video_size_bytes = video_path.stat().st_size
     images_size_bytes = get_directory_size(imgs_dir)
```
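This hunk only changes where the camera keys live after the v2 migration (`dataset.meta.camera_keys` instead of `dataset.camera_keys`); the shape arithmetic is untouched. As a standalone sketch of that arithmetic (plain Python, no lerobot dependency; the C×H×W shape here is a hypothetical example, not a value from the benchmark):

```python
# Hypothetical C×H×W shape of one decoded camera frame (channels, rows, cols).
frame_shape = (3, 480, 640)

# shape[-2:] on a channel-first tensor is (height, width); the benchmark only
# multiplies the two values, so the unpacking order does not affect num_pixels.
height, width = frame_shape[-2:]
num_pixels = width * height

print(height, width, num_pixels)  # 480 640 307200
```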

examples/10_use_so100.md (+3 −8)

````diff
@@ -135,7 +135,7 @@ You will need to move the follower arm to these positions sequentially:
 Make sure both arms are connected and run this script to launch manual calibration:
 ```bash
 python lerobot/scripts/control_robot.py calibrate \
-  --robot-path lerobot/configs/robot/moss.yaml \
+  --robot-path lerobot/configs/robot/so100.yaml \
   --robot-overrides '~cameras' --arms main_follower
 ```
 
@@ -192,7 +192,6 @@ Record 2 episodes and upload your dataset to the hub:
 python lerobot/scripts/control_robot.py record \
   --robot-path lerobot/configs/robot/so100.yaml \
   --fps 30 \
-  --root data \
   --repo-id ${HF_USER}/so100_test \
   --tags so100 tutorial \
   --warmup-time-s 5 \
@@ -212,18 +211,16 @@ echo ${HF_USER}/so100_test
 If you didn't upload with `--push-to-hub 0`, you can also visualize it locally with:
 ```bash
 python lerobot/scripts/visualize_dataset_html.py \
-  --root data \
   --repo-id ${HF_USER}/so100_test
 ```
 
 ## Replay an episode
 
 Now try to replay the first episode on your robot:
 ```bash
-DATA_DIR=data python lerobot/scripts/control_robot.py replay \
+python lerobot/scripts/control_robot.py replay \
   --robot-path lerobot/configs/robot/so100.yaml \
   --fps 30 \
-  --root data \
   --repo-id ${HF_USER}/so100_test \
   --episode 0
 ```
@@ -232,7 +229,7 @@ DATA_DIR=data python lerobot/scripts/control_robot.py replay \
 
 To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
 ```bash
-DATA_DIR=data python lerobot/scripts/train.py \
+python lerobot/scripts/train.py \
   dataset_repo_id=${HF_USER}/so100_test \
   policy=act_so100_real \
   env=so100_real \
@@ -248,7 +245,6 @@ Let's explain it:
 3. We provided an environment as argument with `env=so100_real`. This loads configurations from [`lerobot/configs/env/so100_real.yaml`](../lerobot/configs/env/so100_real.yaml).
 4. We provided `device=cuda` since we are training on a Nvidia GPU, but you can also use `device=mps` if you are using a Mac with Apple silicon, or `device=cpu` otherwise.
 5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
-6. We added `DATA_DIR=data` to access your dataset stored in your local `data` directory. If you dont provide `DATA_DIR`, your dataset will be downloaded from Hugging Face hub to your cache folder `$HOME/.cache/hugginface`. In future versions of `lerobot`, both directories will be in sync.
 
 Training should take several hours. You will find checkpoints in `outputs/train/act_so100_test/checkpoints`.
 
@@ -259,7 +255,6 @@ You can use the `record` function from [`lerobot/scripts/control_robot.py`](../l
 python lerobot/scripts/control_robot.py record \
   --robot-path lerobot/configs/robot/so100.yaml \
   --fps 30 \
-  --root data \
   --repo-id ${HF_USER}/eval_act_so100_test \
   --tags so100 tutorial eval \
   --warmup-time-s 5 \
````

examples/11_use_moss.md (+2 −7)

````diff
@@ -192,7 +192,6 @@ Record 2 episodes and upload your dataset to the hub:
 python lerobot/scripts/control_robot.py record \
   --robot-path lerobot/configs/robot/moss.yaml \
   --fps 30 \
-  --root data \
   --repo-id ${HF_USER}/moss_test \
   --tags moss tutorial \
   --warmup-time-s 5 \
@@ -212,18 +211,16 @@ echo ${HF_USER}/moss_test
 If you didn't upload with `--push-to-hub 0`, you can also visualize it locally with:
 ```bash
 python lerobot/scripts/visualize_dataset_html.py \
-  --root data \
   --repo-id ${HF_USER}/moss_test
 ```
 
 ## Replay an episode
 
 Now try to replay the first episode on your robot:
 ```bash
-DATA_DIR=data python lerobot/scripts/control_robot.py replay \
+python lerobot/scripts/control_robot.py replay \
   --robot-path lerobot/configs/robot/moss.yaml \
   --fps 30 \
-  --root data \
   --repo-id ${HF_USER}/moss_test \
   --episode 0
 ```
@@ -232,7 +229,7 @@ DATA_DIR=data python lerobot/scripts/control_robot.py replay \
 
 To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
 ```bash
-DATA_DIR=data python lerobot/scripts/train.py \
+python lerobot/scripts/train.py \
   dataset_repo_id=${HF_USER}/moss_test \
   policy=act_moss_real \
   env=moss_real \
@@ -248,7 +245,6 @@ Let's explain it:
 3. We provided an environment as argument with `env=moss_real`. This loads configurations from [`lerobot/configs/env/moss_real.yaml`](../lerobot/configs/env/moss_real.yaml).
 4. We provided `device=cuda` since we are training on a Nvidia GPU, but you can also use `device=mps` if you are using a Mac with Apple silicon, or `device=cpu` otherwise.
 5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
-6. We added `DATA_DIR=data` to access your dataset stored in your local `data` directory. If you dont provide `DATA_DIR`, your dataset will be downloaded from Hugging Face hub to your cache folder `$HOME/.cache/hugginface`. In future versions of `lerobot`, both directories will be in sync.
 
 Training should take several hours. You will find checkpoints in `outputs/train/act_moss_test/checkpoints`.
 
@@ -259,7 +255,6 @@ You can use the `record` function from [`lerobot/scripts/control_robot.py`](../l
 python lerobot/scripts/control_robot.py record \
   --robot-path lerobot/configs/robot/moss.yaml \
   --fps 30 \
-  --root data \
   --repo-id ${HF_USER}/eval_act_moss_test \
   --tags moss tutorial eval \
   --warmup-time-s 5 \
````

0 commit comments