diff --git a/examples/generate_image_dataset/README.md b/examples/generate_image_dataset/README.md index e79d2df53..2c2979bfe 100644 --- a/examples/generate_image_dataset/README.md +++ b/examples/generate_image_dataset/README.md @@ -28,7 +28,6 @@ Running `generate_poses.py` will generate a `poses.csv` file consisting of camer Running `generate_images.py` will generate images in an `images` directory. This tool accepts several optional command-line arguments that can be used to control its behavior (see the source code for details), e.g., - `--poses_file` can be used to generate images based on the camera poses in a specific CSV file. - - `--rendering_mode` can be set to `baked` to use baked lighting, or `raytracing` for ray-traced lighting if you are running on Windows and you have a GPU that supports DirectX ray-tracing. - `--num_internal_steps` can be used to control the image quality when running in ray-traced mode. - `--benchmark` can be used to test the overall speed of the simulation. - `--wait_for_key_press` can be used to compare the game window output to the image that has been saved to disk. diff --git a/examples/imitation_learning_openbot/README.md b/examples/imitation_learning_openbot/README.md index 1c27f4341..2067f5052 100644 --- a/examples/imitation_learning_openbot/README.md +++ b/examples/imitation_learning_openbot/README.md @@ -1,4 +1,4 @@ -# Imitation learning with OpenBot +# Imitation Learning with OpenBot In this example application, we demonstrate how to collect training data that can be plugged into the [OpenBot](http://www.openbot.org) framework and used to train a navigation policy. @@ -36,7 +36,6 @@ Running `generate_episodes.py` will generate navigation episodes and store them Running `generate_dataset.py` will generate a dataset of goals, observations, and actions for each episode. 
The structure of the generated dataset precisely mimics the structure expected by the OpenBot framework, and can therefore be plugged into the OpenBot training code directly. This tool accepts several optional command-line arguments that can be used to control its behavior (see the source code for details), e.g., - `--episodes_file` can be used to read episodes from a specific CSV file. - - `--rendering_mode` can be set to `baked` to use baked lighting, or `raytracing` for ray-traced lighting if you are running on Windows and you have a GPU that supports DirectX ray-tracing. - `--create_videos` can be used to generate videos from OpenBot observations. If you use this optional argument, the `ffmpeg` command-line tool must be visible on your path. - `--benchmark` can be used to test the overall speed of the simulation. diff --git a/examples/imitation_learning_openbot/user_config.yaml.example b/examples/imitation_learning_openbot/user_config.yaml.example index 0890daaee..5218c6425 100644 --- a/examples/imitation_learning_openbot/user_config.yaml.example +++ b/examples/imitation_learning_openbot/user_config.yaml.example @@ -33,6 +33,10 @@ SIMULATION_CONTROLLER: SIMULATION_STEP_TIME_SECONDS: 0.033 MAX_SUBSTEP_DELTA_TIME: 0.001 + MAX_SUBSTEPS: 100 + CONTACT_OFFSET_MULTIPLIER: 0.01 + MIN_CONTACT_OFFSET: 0.0001 + MAX_CONTACT_OFFSET: 1.0 OPENBOT_AGENT: OPENBOT_ACTOR_NAME: "OpenBotActor" diff --git a/examples/open_loop_control_fetch/README.md b/examples/open_loop_control_fetch/README.md index 74519fb58..8e62c1209 100644 --- a/examples/open_loop_control_fetch/README.md +++ b/examples/open_loop_control_fetch/README.md @@ -1,14 +1,35 @@ -# Open Loop Control with UrdfBot +# Open-Loop Control with Fetch -In this example application, we demonstrate how to load a Urdf agent and apply simple actions such as base motion, arm motion and gripper grasping. +In this example application, we demonstrate how to control a Fetch agent to pick up an object and move it to another location. 
-Before running this example, rename `user_config.yaml.example` to `user_config.yaml` and modify the contents appropriately for your system, as described in our top-level [README](http://github.com/isl-org/spear). +Before running this example, rename `user_config.yaml.example` to `user_config.yaml` and modify the contents appropriately for your system, as described in our [Getting Started](../../docs/getting_started.md) tutorial. -### Running the example +### Important configuration options + +You can control the behavior of this example by setting the following parameters in your `user_config.yaml` file, e.g., + - `SPEAR.PAKS_DIR` is the directory containing scene data in the form of PAK files. + - `SIMULATION_CONTROLLER.URDFBOT_AGENT.CAMERA.RENDER_PASSES` can be set to a list of image modalities that you want the agent to return (e.g., setting the value `["depth", "final_color", "segmentation"]` will return depth images, photorealistic RGB images, and segmentation images). + +Your `user_config.yaml` file only needs to specify the value of a parameter if it differs from the defaults defined in the `python/config` directory. You can browse this directory for a complete set of all user-configurable parameters. + +### Running the example + +You can run the example as follows. ```console -python run.py --actions_file=actions.csv --observation_file=observations.csv +# generate actions +python generate_actions.py + +# execute actions +python run.py ``` -- `actions_file`: preconfigured joint actions. -- `observation_file`: store received link_position observation. \ No newline at end of file +Running `generate_actions.py` will generate an `actions.csv` file consisting of actions that will be used in the following step. This tool accepts several optional command-line arguments that can be used to control its behavior (see the source code for details), e.g., + + - `--scene_id` can be used to specify which scene to generate actions for. 
+ +Running `run.py` will execute the previously generated actions in an open-loop fashion on the Fetch agent. This tool accepts several optional command-line arguments that can be used to control its behavior (see the source code for details), e.g., + + - `--action_file` can be used to specify which actions to load. + - `--scene_id` can be used to specify which scene to load. + - `--benchmark` can be used to test the overall speed of the simulation. diff --git a/examples/open_loop_control_fetch/user_config.yaml.example b/examples/open_loop_control_fetch/user_config.yaml.example index 26e27847c..8ab49ac91 100644 --- a/examples/open_loop_control_fetch/user_config.yaml.example +++ b/examples/open_loop_control_fetch/user_config.yaml.example @@ -20,6 +20,10 @@ SIMULATION_CONTROLLER: CUSTOM_UNREAL_CONSOLE_COMMANDS: [] MAX_SUBSTEP_DELTA_TIME: 0.001 + MAX_SUBSTEPS: 100 + CONTACT_OFFSET_MULTIPLIER: 0.01 + MIN_CONTACT_OFFSET: 0.0001 + MAX_CONTACT_OFFSET: 1.0 URDFBOT_AGENT: URDFBOT_ACTOR_NAME: "UrdfBotActor"
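For readers unfamiliar with the open-loop pattern used by the Fetch example above (one script writes actions to a CSV file, another replays them without consulting observations), the workflow can be sketched as follows. This is a minimal illustration only: the column names, joint values, and `apply_action` callback are hypothetical assumptions, not the repository's actual schema or API.

```python
import csv
import tempfile

# Hypothetical sketch of the two-step workflow: generate an actions CSV,
# then replay it open-loop. Column names and values are illustrative only.

def generate_actions(path, num_steps=3):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        # Hypothetical joint names; the real example defines its own schema.
        writer.writerow(["arm_lift_joint", "wrist_roll_joint"])
        for step in range(num_steps):
            writer.writerow([0.1 * step, 0.0])

def replay_actions(path, apply_action):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Open-loop: apply each recorded action without feedback
            # from observations.
            apply_action({name: float(value) for name, value in row.items()})

actions_file = tempfile.NamedTemporaryFile(suffix=".csv", delete=False).name
generate_actions(actions_file)
applied = []
replay_actions(actions_file, applied.append)
```

The key design point is that `run.py`-style replay is stateless with respect to the environment: the action sequence is fixed ahead of time, which is why the same `actions.csv` can be re-executed or benchmarked repeatedly.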