| <imgsrc="../media/so100/follower_zero.webp?raw=true"alt="SO-100 follower arm zero position"title="SO-100 follower arm zero position"style="width:100%;"> | <imgsrc="../media/so100/follower_rotated.webp?raw=true"alt="SO-100 follower arm rotated position"title="SO-100 follower arm rotated position"style="width:100%;"> | <imgsrc="../media/so100/follower_rest.webp?raw=true"alt="SO-100 follower arm rest position"title="SO-100 follower arm rest position"style="width:100%;"> |
Make sure both arms are connected and run this script to launch manual calibration:
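The command block itself did not survive extraction; a minimal sketch of what it looks like with LeRobot's `control_robot.py` script, assuming the repository's default `main_follower` arm name, is:

```bash
# Calibrate the SO-100 follower arm; cameras are disabled since they are not needed here.
python lerobot/scripts/control_robot.py \
  --robot.type=so100 \
  --robot.cameras='{}' \
  --control.type=calibrate \
  --control.arms='["main_follower"]'
```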
Follow step 6 of the [assembly video](https://youtu.be/FioA2oeFZ5I?t=724) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
| 1. Zero position | 2. Rotated position | 3. Rest position |
|---|---|---|
| <img src="../media/so100/leader_zero.webp?raw=true" alt="SO-100 leader arm zero position" title="SO-100 leader arm zero position" style="width:100%;"> | <img src="../media/so100/leader_rotated.webp?raw=true" alt="SO-100 leader arm rotated position" title="SO-100 leader arm rotated position" style="width:100%;"> | <img src="../media/so100/leader_rest.webp?raw=true" alt="SO-100 leader arm rest position" title="SO-100 leader arm rest position" style="width:100%;"> |
1. We provided the dataset as an argument with `--dataset.repo_id=${HF_USER}/so100_test`.
2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions, and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.
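Pieced together from the arguments explained above, the full training command would look roughly like this (the `--output_dir` and `--job_name` values are assumptions chosen to match the checkpoint path mentioned below):

```bash
# --output_dir and --job_name are assumed values matching the checkpoint path below.
python lerobot/scripts/train.py \
  --dataset.repo_id=${HF_USER}/so100_test \
  --policy.type=act \
  --output_dir=outputs/train/act_so100_test \
  --job_name=act_so100_test \
  --policy.device=cuda \
  --wandb.enable=true
```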
Training should take several hours. You will find checkpoints in `outputs/train/act_so100_test/checkpoints`.
| <imgsrc="../media/lekiwi/mobile_calib_zero.webp?raw=true"alt="SO-100 follower arm zero position"title="SO-100 follower arm zero position"style="width:100%;"> | <imgsrc="../media/lekiwi/mobile_calib_rotated.webp?raw=true"alt="SO-100 follower arm rotated position"title="SO-100 follower arm rotated position"style="width:100%;"> | <imgsrc="../media/lekiwi/mobile_calib_rest.webp?raw=true"alt="SO-100 follower arm rest position"title="SO-100 follower arm rest position"style="width:100%;"> |
Make sure the arm is connected to the Raspberry Pi and run this script (on the Raspberry Pi) to launch manual calibration:
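The original command block is missing here; a sketch of it with LeRobot's `control_robot.py`, assuming the default `main_follower` arm name, is:

```bash
# Run on the Raspberry Pi: calibrate the follower arm mounted on the mobile base.
python lerobot/scripts/control_robot.py \
  --robot.type=lekiwi \
  --robot.cameras='{}' \
  --control.type=calibrate \
  --control.arms='["main_follower"]'
```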
### Calibrate leader arm
Then calibrate the leader arm (which is attached to the laptop/PC). You will need to move the leader arm to these positions sequentially:
| 1. Zero position | 2. Rotated position | 3. Rest position |
|---|---|---|
| <img src="../media/so100/leader_zero.webp?raw=true" alt="SO-100 leader arm zero position" title="SO-100 leader arm zero position" style="width:100%;"> | <img src="../media/so100/leader_rotated.webp?raw=true" alt="SO-100 leader arm rotated position" title="SO-100 leader arm rotated position" style="width:100%;"> | <img src="../media/so100/leader_rest.webp?raw=true" alt="SO-100 leader arm rest position" title="SO-100 leader arm rest position" style="width:100%;"> |
Run this script (on your laptop/PC) to launch manual calibration:
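A sketch of that command, mirroring the follower-arm command above but with the assumed `main_leader` arm name:

```bash
# Run on the laptop/PC: calibrate the leader arm.
python lerobot/scripts/control_robot.py \
  --robot.type=lekiwi \
  --robot.cameras='{}' \
  --control.type=calibrate \
  --control.arms='["main_leader"]'
```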
You should see on your laptop something like this: ```[INFO] Connected to remote robot at tcp://172.17.133.91:5555 and video stream at tcp://172.17.133.91:5556.``` Now you can move the leader arm and use the keyboard keys (w, a, s, d) to drive forward, left, backward, and right, (z, x) to turn left and right, and (r, f) to increase and decrease the speed of the mobile robot. There are three speed modes; see the table below:
> If you use a different keyboard, you can change the keys for each command in the [`LeKiwiRobotConfig`](../lerobot/common/robot_devices/robots/configs.py).
1. We provided the dataset as an argument with `--dataset.repo_id=${HF_USER}/lekiwi_test`.
2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions, and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.
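Assembled from the arguments above, the training command would look roughly like this (the `--output_dir` and `--job_name` values are assumptions chosen to match the checkpoint path mentioned below):

```bash
# --output_dir and --job_name are assumed values matching the checkpoint path below.
python lerobot/scripts/train.py \
  --dataset.repo_id=${HF_USER}/lekiwi_test \
  --policy.type=act \
  --output_dir=outputs/train/act_lekiwi_test \
  --job_name=act_lekiwi_test \
  --policy.device=cuda \
  --wandb.enable=true
```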
Training should take several hours. You will find checkpoints in `outputs/train/act_lekiwi_test/checkpoints`.
| <imgsrc="../media/moss/follower_zero.webp?raw=true"alt="Moss v1 follower arm zero position"title="Moss v1 follower arm zero position"style="width:100%;"> | <imgsrc="../media/moss/follower_rotated.webp?raw=true"alt="Moss v1 follower arm rotated position"title="Moss v1 follower arm rotated position"style="width:100%;"> | <imgsrc="../media/moss/follower_rest.webp?raw=true"alt="Moss v1 follower arm rest position"title="Moss v1 follower arm rest position"style="width:100%;"> |
Make sure both arms are connected and run this script to launch manual calibration:
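The command block was lost in extraction; a sketch of it with LeRobot's `control_robot.py`, assuming the default `main_follower` arm name, is:

```bash
# Calibrate the Moss v1 follower arm; cameras are not needed for calibration.
python lerobot/scripts/control_robot.py \
  --robot.type=moss \
  --robot.cameras='{}' \
  --control.type=calibrate \
  --control.arms='["main_follower"]'
```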
Follow step 6 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
| 1. Zero position | 2. Rotated position | 3. Rest position |
|---|---|---|
| <img src="../media/moss/leader_zero.webp?raw=true" alt="Moss v1 leader arm zero position" title="Moss v1 leader arm zero position" style="width:100%;"> | <img src="../media/moss/leader_rotated.webp?raw=true" alt="Moss v1 leader arm rotated position" title="Moss v1 leader arm rotated position" style="width:100%;"> | <img src="../media/moss/leader_rest.webp?raw=true" alt="Moss v1 leader arm rest position" title="Moss v1 leader arm rest position" style="width:100%;"> |
1. We provided the dataset as an argument with `--dataset.repo_id=${HF_USER}/moss_test`.
2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions, and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.
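Assembled from the arguments above, the training command would look roughly like this (the `--output_dir` and `--job_name` values are assumptions chosen to match the checkpoint path mentioned below):

```bash
# --output_dir and --job_name are assumed values matching the checkpoint path below.
python lerobot/scripts/train.py \
  --dataset.repo_id=${HF_USER}/moss_test \
  --policy.type=act \
  --output_dir=outputs/train/act_moss_test \
  --job_name=act_moss_test \
  --policy.device=cuda \
  --wandb.enable=true
```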
Training should take several hours. You will find checkpoints in `outputs/train/act_moss_test/checkpoints`.
`examples/4_train_policy_with_script.md`
This tutorial will explain the training script, how to use it, and particularly how to configure everything needed for the training run.
> **Note:** The following assumes you're running these commands on a machine equipped with a CUDA GPU. If you don't have one (or if you're using a Mac), you can add `--policy.device=cpu` (or `--policy.device=mps`, respectively). However, be advised that the code executes much slower on CPU.
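For example, a minimal training run might look like the following (the `lerobot/pusht` dataset and `diffusion` policy type are illustrative assumptions, not requirements):

```bash
# Swap --policy.device for cpu or mps if you do not have a CUDA GPU.
python lerobot/scripts/train.py \
  --dataset.repo_id=lerobot/pusht \
  --policy.type=diffusion \
  --policy.device=cuda
```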