
[Docs] Documentation on even more Domain Randomization #719

Merged (8 commits) on Nov 27, 2024
2 changes: 1 addition & 1 deletion README.md
@@ -16,7 +16,7 @@ ManiSkill is a powerful unified framework for robot simulation and training powe
- GPU parallelized heterogeneous simulation, where every parallel environment has a completely different scene/set of objects
- Example tasks cover a wide range of different robot embodiments (humanoids, mobile manipulators, single-arm robots) as well as a wide range of different tasks (table-top, drawing/cleaning, dextrous manipulation)
- Flexible and simple task building API that abstracts away much of the complex GPU memory management code via an object oriented design
- - Real2sim environments for scalably evaluating real-world policies 60-100x faster via GPU simulation.
+ - Real2sim environments for scalably evaluating real-world policies 100x faster via GPU simulation.

<!-- TODO replace paper link with arxiv link when it is out -->
For more details we encourage you to take a look at our [paper](https://arxiv.org/abs/2410.00425).
Binary file added docs/source/_static/videos/dr_lighting.mp4
16 changes: 8 additions & 8 deletions docs/source/user_guide/concepts/sensors.md
@@ -41,13 +41,13 @@ These specific customizations can be useful for those looking to customize how t

The following shaders are available in ManiSkill:

- | Shader Name | Available Textures | Description |
- |-------------|----------------------------------------------------|------------------------------------------------------------------|
- | minimal | rgb, depth, position, segmentation | The fastest shader with minimal GPU memory usage |
- | default | rgb, depth, position, segmentation, normal, albedo | A balance between speed and texture availability |
- | rt | rgb, depth, position, segmentation, normal, albedo | A shader optimized for photo-realistic rendering via ray-tracing |
- | rt-med | rgb, depth, position, segmentation, normal, albedo | Same as rt but runs faster with slightly lower quality |
- | rt-fast | rgb, depth, position, segmentation, normal, albedo | Same as rt-med but runs faster with slightly lower quality |
+ | Shader Name | Available Textures | Description |
+ |-------------|----------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
+ | minimal | rgb, depth, position, segmentation | The fastest shader with minimal GPU memory usage. Note that the background will always be black (normally it is the color of the ambient light) |
+ | default | rgb, depth, position, segmentation, normal, albedo | A balance between speed and texture availability |
+ | rt | rgb, depth, position, segmentation, normal, albedo | A shader optimized for photo-realistic rendering via ray-tracing |
+ | rt-med | rgb, depth, position, segmentation, normal, albedo | Same as rt but runs faster with slightly lower quality |
+ | rt-fast | rgb, depth, position, segmentation, normal, albedo | Same as rt-med but runs faster with slightly lower quality |



@@ -57,7 +57,7 @@ The following textures are available in ManiSkill. Note all data is not scaled/n
|---------|-------|-------|-------------|
| rgb | [H, W, 3] | torch.uint8 | Red, Green, Blue colors of the image. Range of 0-255 |
| depth | [H, W, 1] | torch.int16 | Depth in millimeters |
- | position | [H, W, 4] | torch.int16 | x, y, z, and segmentation in millimeters |
+ | position | [H, W, 4] | torch.int16 | x, y, z in millimeters; the 4th channel is the same as the segmentation below |
| segmentation | [H, W, 1] | torch.int16 | Segmentation mask with unique integer IDs for each object |
| normal | [H, W, 3] | torch.float32 | x, y, z components of the normal vector |
| albedo | [H, W, 3] | torch.uint8 | Red, Green, Blue colors of the albedo. Range of 0-255 |
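To make the table above concrete, the sketch below mocks the documented shapes and dtypes with numpy stand-ins (the torch dtypes map one-to-one; these arrays are illustrative mocks, not real sensor output). It also shows the unit conversion the table implies: depth and position are stored as int16 millimeters, so converting to meters requires a cast first.

```python
import numpy as np

# Mock textures matching the documented shapes/dtypes (numpy in place of torch).
H, W = 128, 128
depth = np.zeros((H, W, 1), dtype=np.int16)     # depth in millimeters
position = np.zeros((H, W, 4), dtype=np.int16)  # x, y, z in mm + segmentation id

# int16 millimeters -> float32 meters
depth_m = depth.astype(np.float32) / 1000.0

# the 4th channel of position carries the same ids as the segmentation texture
seg_from_position = position[..., 3]
```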
4 changes: 2 additions & 2 deletions docs/source/user_guide/tutorials/custom_robots.md
@@ -62,14 +62,14 @@ Note that there are a number of common issues users may face (often due to incor

ManiSkill supports importing [Mujoco's MJCF format](https://mujoco.readthedocs.io/en/latest/modeling.html) of files to load robots (and other objects), although not all features are supported.

- For example code that loads the robot and the scene see https://github.com/haosulab/ManiSkill/blob/main/mani_skill/envs/tasks/control/cartpole.py
+ For example code that loads the robot and the scene, see https://github.com/haosulab/ManiSkill/blob/main/mani_skill/envs/tasks/control/cartpole.py. Generally you can simply replace the `urdf_path` property used in URDF-based agents with the `mjcf_path` property, and the MJCF loader will be used instead.
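The property swap described above can be sketched abstractly. `AgentSpec` and `pick_loader` below are hypothetical stand-ins, not ManiSkill classes; they only illustrate the dispatch the docs describe, where defining `mjcf_path` instead of `urdf_path` routes loading to the MJCF loader.

```python
# Hypothetical sketch (not real ManiSkill classes) of urdf_path/mjcf_path dispatch.
class AgentSpec:
    urdf_path = None  # path to a .urdf file, if any
    mjcf_path = None  # path to an MJCF .xml file, if any

def pick_loader(agent):
    # If an MJCF path is defined, the MJCF loader takes over.
    if agent.mjcf_path is not None:
        return "mjcf"
    if agent.urdf_path is not None:
        return "urdf"
    raise ValueError("agent defines neither urdf_path nor mjcf_path")

my_agent = AgentSpec()
my_agent.mjcf_path = "assets/robot.xml"  # hypothetical path
print(pick_loader(my_agent))  # mjcf
```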


At the moment, the following are not supported:
- Procedural texture generation
- Importing motors and solver configurations
- Correct use of contype and conaffinity attributes (contype of 0 means no collision mesh is added, otherwise it is always added)
- - Correct use of groups (at the moment anything in group 0 and 2 can be seen, other groups are hidden all the time)
+ - Correct use of groups (at the moment anything in groups 0 and 2 is visible by default; other groups are hidden at all times)

These may be supported in the future so stay tuned for updates.
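As a hedged illustration of the limitations above, the MJCF fragment below (hypothetical, but using standard MJCF attributes) shows geoms that would be affected by the contype and group rules as described:

```xml
<mujoco>
  <worldbody>
    <body name="example">
      <!-- contype="0": no collision mesh is added for this geom -->
      <geom name="visual_only" type="box" size="0.1 0.1 0.1" contype="0" group="2"/>
      <!-- group="3": hidden at all times under the current group handling -->
      <geom name="hidden_geom" type="box" size="0.1 0.1 0.1" group="3"/>
    </body>
  </worldbody>
</mujoco>
```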

21 changes: 1 addition & 20 deletions docs/source/user_guide/tutorials/custom_tasks/advanced.md
@@ -395,23 +395,4 @@ However as a user you don't need to worry about adding a plane collision in each

## Dynamically Updating Simulation Properties

- The GPU simulation has some restrictions on what can be updated after reconfiguration. At the moment you cannot modify the number of robots or add/remove objects after reconfiguration. You can however modify properties such as object textures and controller properties on the fly.
-
-
- ### Texture Modification
-
- Currently there is not a general API for "batched" material modifications but you can modify each of the objects managed by an {py:class}`mani_skill.utils.structs.Actor` or {py:class}`mani_skill.utils.structs.Articulation` object in the `._objs` list property. For each of those objects you can directly change the attached RenderBodyComponent and replace with a different one to change the material.
-
- ### Controller Modification
-
- To modify controller properties on the fly, you can directly modify the controller configuration and call `set_drive_property` to apply the changes. For the controllers that come out of the box with ManiSkill there are some configuration parameters like stiffness and damping (PD parameters) that need to be applied before it takes effect since the modify properties of the articulation/robot, which is done by `set_drive_property`.
-
- For example if you had an environment `env` and are using the `"panda"` robot, and if you wanted to change the stiffness of the controller, you can do the following:
-
- ```python
- # find the controller object to modify. The panda robot's pd_joint_delta_pos controller
- # has multiple sub-controllers, one for the "arm" and one for the "gripper".
- controller = env.agent.controllers["pd_joint_delta_pos"].controllers["arm"]
- controller.config.stiffness = 100
- controller.set_drive_property()
- ```
+ See the page on [domain randomization](../domain_randomization.md) for more information on modifying various simulation (visual/physical) properties on the fly.
22 changes: 19 additions & 3 deletions docs/source/user_guide/tutorials/custom_tasks/loading_objects.md
@@ -32,11 +32,11 @@ def _load_scene(self, options):
builder = self.scene.create_actor_builder()
```

- Then you can use the standard SAPIEN API for creating actors, a tutorial of which can be found on the [SAPIEN actors tutorial documentation](https://sapien.ucsd.edu/docs/latest/tutorial/basic/create_actors.html)
+ Then you can use the standard SAPIEN API for creating actors, a tutorial of which can be found on the [SAPIEN actors tutorial documentation](https://sapien-sim.github.io/docs/user_guide/getting_started/create_actors.html#create-an-actor-with-actorbuilder)

## Loading Articulations

- There are several ways to load articulations as detailed below.
+ There are several ways to load articulations, as detailed below, as well as some limitations to be aware of.

### Loading from Existing Datasets

@@ -64,7 +64,7 @@ def _load_scene(self, options):
builder = self.scene.create_articulation_builder()
```

- Then you can use the standard SAPIEN API for creating articulations, a tutorial of which can be found on the [SAPIEN articulation tutorial documentation](https://sapien.ucsd.edu/docs/latest/tutorial/basic/create_articulations.html). You essentially just need to define what the links and joints are and how they connect. Links are created like Actors and can have visual and collision shapes added via the python API.
+ Then you can use the standard SAPIEN API for creating articulations, a tutorial of which can be found on the [SAPIEN articulation tutorial documentation](https://sapien-sim.github.io/docs/user_guide/getting_started/create_articulations.html). You essentially just need to define what the links and joints are and how they connect. Links are created like Actors and can have visual and collision shapes added via the python API.

### Using the URDF Loader

@@ -113,6 +113,22 @@ def _load_scene(self, options):
builder.build(name="my_articulation")
```

+ ### Articulation Limitations
+
+ For the PhysX simulation backend, any single articulation can have at most 64 links. More complex articulated objects will need to be simplified by merging links together. Most of the time this is readily possible by inspecting the URDF and fusing together links held together by fixed joints. The fewer fixed joints and links there are, the more accurately and quickly the simulation will run.
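Checking this limit ahead of time can be sketched with the standard library alone; `link_stats` below is a hypothetical helper, not part of ManiSkill, that counts links and fixed joints (candidates for fusing) in a URDF.

```python
import xml.etree.ElementTree as ET

def link_stats(urdf_xml):
    """Hypothetical helper: count links and fixed joints in a URDF string."""
    root = ET.fromstring(urdf_xml)
    n_links = len(root.findall("link"))
    n_fixed = sum(1 for j in root.findall("joint") if j.get("type") == "fixed")
    return n_links, n_fixed

URDF = """
<robot name="example">
  <link name="base"/>
  <link name="bracket"/>
  <link name="arm"/>
  <joint name="base_to_bracket" type="fixed">
    <parent link="base"/><child link="bracket"/>
  </joint>
  <joint name="bracket_to_arm" type="revolute">
    <parent link="bracket"/><child link="arm"/>
  </joint>
</robot>
"""
n_links, n_fixed = link_stats(URDF)
# over 64 links would need links (e.g. those joined by fixed joints) fused together
assert n_links <= 64
```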

+ ## Using the MJCF Loader
+
+ If your actor/articulation is defined in an MJCF file, you can use the MJCF loader to load that articulation and make modifications as needed. It works exactly the same as the [URDF loader](./loading_objects.md#using-the-urdf-loader). Note, however, that not all properties in MJCF/Mujoco are supported in SAPIEN/ManiSkill at this moment, so you should always verify that your articulations/actors are loaded correctly from the MJCF.
+
+ ```python
+ def _load_scene(self, options):
+     loader = self.scene.create_mjcf_loader()
+     builders = loader.parse(str(mjcf_path))  # mjcf_path: path to your MJCF .xml file
+     articulation_builders = builders["articulation_builders"]
+     actor_builders = builders["actor_builders"]
+ ```

## Reconfiguring and Optimization

In general, loading is always quite slow, especially on the GPU, so by default ManiSkill reconfigures just once. Any call to `env.reset()` will not trigger a reconfiguration unless you call `env.reset(seed=seed, options=dict(reconfigure=True))` (a seed is not needed, but is recommended for reproducibility when reconfiguring).
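The reconfigure-once behavior can be sketched with a toy class; `ToyEnv` below is illustrative only, not ManiSkill's implementation, and just shows the design choice of paying the expensive asset-loading cost once and re-running it only on explicit request.

```python
# Minimal sketch (not ManiSkill's actual code) of the reconfigure-once pattern.
class ToyEnv:
    def __init__(self):
        self.n_reconfigures = 0
        self._reconfigure()  # expensive setup runs once at construction

    def _reconfigure(self):
        # expensive: (re)build scenes, load meshes, allocate GPU buffers...
        self.n_reconfigures += 1

    def reset(self, seed=None, options=None):
        if options and options.get("reconfigure", False):
            self._reconfigure()
        # cheap path: just reset object/robot poses in the already-built scene

env = ToyEnv()
env.reset()                                        # no rebuild
env.reset(seed=0, options=dict(reconfigure=True))  # full rebuild
```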