[Question] Can visualization markers retain the markers displayed at each step like debug_draw and provide corresponding methods to clear existing markers? #1653

Open
amatsukaze233 opened this issue Jan 9, 2025 · 2 comments
Labels
question Further information is requested

Comments

@amatsukaze233

Question

Currently, I need to visualize at least 60,000 sphere markers (and the count keeps growing) and be able to record RGB information with a camera for use as a reinforcement-learning observation.

When using visualization markers, all marker coordinates have to be passed in again at every step (roughly the pattern sketched below), so the performance cost keeps growing as the number of markers increases.
(screenshot attached: QQ截图20250109144944)
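For reference, this is roughly the call pattern I am using with visualization markers (condensed; the sampling helper and step count are stand-ins, not the real script):

    # Hedged sketch of my VisualizationMarkers call pattern; the sampling helper and
    # step count below are stand-ins, not the real script.
    import torch

    import omni.isaac.lab.sim as sim_utils
    from omni.isaac.lab.markers import VisualizationMarkers, VisualizationMarkersCfg


    def sample_new_points() -> torch.Tensor:
        """Stand-in for the ~200 ray-caster hit points sampled per step."""
        return torch.rand(200, 3)


    paint_marker = VisualizationMarkers(
        VisualizationMarkersCfg(
            prim_path="/Visuals/paint",
            markers={"paint": sim_utils.SphereCfg(radius=0.01)},
        )
    )

    all_points = torch.empty((0, 3))
    for _ in range(500):  # stand-in for the simulation loop
        all_points = torch.cat((all_points, sample_new_points()), dim=0)
        # visualize() replaces the full set of instance translations, so the whole
        # accumulated tensor (eventually 60,000+ points) is re-submitted every step.
        paint_marker.visualize(translations=all_points)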

When using debug_draw, only the newly added marker coordinates need to be passed in at each step (sketched below), so rendering stays comparatively fast.
(screenshot attached: QQ截图20250109150305)
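And this is roughly the corresponding debug_draw pattern, using the same stand-in sampling helper as above: only the new points are pushed each step, and clear_points() wipes everything that has been drawn.

    # Hedged sketch of the debug_draw pattern (stand-in loop and helper as above).
    from omni.isaac.debug_draw import _debug_draw

    draw = _debug_draw.acquire_debug_draw_interface()
    color = (0.5, 0.0, 0.0, 1.0)
    size = 35.0

    for _ in range(500):  # stand-in for the simulation loop
        # only the newly sampled points are submitted; earlier points stay on screen
        new_points = [tuple(p) for p in sample_new_points().tolist()]
        if new_points:
            draw.draw_points(new_points, [color] * len(new_points), [size] * len(new_points))

    draw.clear_points()  # removes everything drawn so far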

However, the visual quality seen through the camera is very poor when using debug_draw.
(screen recording attached: QQ录屏20250109142156 00_00_00-00_00_30)

I didn't quite understand how the point instancer is used in the source code.
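My rough understanding is that VisualizationMarkers is backed by a UsdGeom.PointInstancer whose positions and protoIndices are rewritten on every visualize() call. Would something like the following (writing the instancer attributes directly; "/Visuals/paint" is the marker prim path I use in my script) be a reasonable way to append and clear instances, or is there a supported API for this?

    # Hedged sketch: append to / clear a PointInstancer directly instead of
    # re-submitting every translation through visualize(). This is my guess, not
    # taken from the Isaac Lab source; "/Visuals/paint" is the marker prim from my script.
    import omni.usd
    from pxr import Gf, UsdGeom, Vt

    stage = omni.usd.get_context().get_stage()
    instancer = UsdGeom.PointInstancer(stage.GetPrimAtPath("/Visuals/paint"))

    def append_marker_points(new_points):
        """Append (x, y, z) tuples as new instances of prototype 0."""
        positions = list(instancer.GetPositionsAttr().Get() or [])
        proto_indices = list(instancer.GetProtoIndicesAttr().Get() or [])
        positions += [Gf.Vec3f(*p) for p in new_points]
        proto_indices += [0] * len(new_points)
        instancer.GetPositionsAttr().Set(Vt.Vec3fArray(positions))
        instancer.GetProtoIndicesAttr().Set(Vt.IntArray(proto_indices))

    def clear_marker_points():
        """Remove all existing instances."""
        instancer.GetPositionsAttr().Set(Vt.Vec3fArray())
        instancer.GetProtoIndicesAttr().Set(Vt.IntArray())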

@RandomOakForest
Collaborator

Thank you for posting this. Could you help us reproduce this issue? Could you share the relevant code you are using for this? Thanks.

@RandomOakForest RandomOakForest added the question Further information is requested label Jan 10, 2025
@amatsukaze233
Author

amatsukaze233 commented Jan 13, 2025

Thank you for posting this. Could you help us reproduce this issue? Could you share the relevant code you are using for this? Thanks.

Sure. Here is the revised code. The ray caster has been modified, but that should not affect reproduction. The robot has been replaced with a Franka. To view from the camera's perspective, you need to hide the ee_marker and the fingers. The workpiece needs to be replaced with a mesh that has rigid-body and collision properties; if you switch to a rigid object, the visualization effect will not be as good.

  # Copyright (c) 2022-2024, The Isaac Lab Project Developers.
  # All rights reserved.
  #
  # SPDX-License-Identifier: BSD-3-Clause
  import argparse
  from omni.isaac.lab.app import AppLauncher
  parser = argparse.ArgumentParser(description="Tutorial on using the differential IK controller.")
  parser.add_argument("--num_envs", type=int, default=1, help="Number of environments to spawn.")
  AppLauncher.add_app_launcher_args(parser)
  args_cli = parser.parse_args()
  app_launcher = AppLauncher(args_cli)
  simulation_app = app_launcher.app

  import omni.isaac.lab.sim as sim_utils
  from omni.isaac.lab.assets import AssetBaseCfg,ArticulationCfg
  from omni.isaac.lab.controllers import DifferentialIKController, DifferentialIKControllerCfg
  from omni.isaac.lab.managers import SceneEntityCfg
  from omni.isaac.lab.assets import RigidObject, RigidObjectCfg
  from omni.isaac.lab.markers import VisualizationMarkers
  from omni.isaac.lab.scene import InteractiveScene, InteractiveSceneCfg
  from omni.isaac.lab.utils import configclass
  from omni.isaac.lab.utils.assets import ISAAC_NUCLEUS_DIR
  from omni.isaac.lab.utils.math import (subtract_frame_transforms,quat_mul,quat_from_euler_xyz,euler_xyz_from_quat,
  combine_frame_transforms,quat_from_angle_axis)
  from omni.isaac.lab.actuators.actuator_cfg import ImplicitActuatorCfg
  from omni.isaac.lab.sim.schemas import CollisionPropertiesCfg,define_collision_properties,modify_collision_properties
  from omni.isaac.lab.markers.visualization_markers import VisualizationMarkersCfg
  from omni.isaac.lab.sim.schemas import ArticulationRootPropertiesCfg,modify_articulation_root_properties
  import typing
  from omni.isaac.lab_assets import FRANKA_PANDA_HIGH_PD_CFG, UR10_CFG 
  from omni.isaac.lab.markers.config import FRAME_MARKER_CFG
  from pxr import Gf, Usd, UsdGeom,Sdf, UsdShade,UsdPhysics,Tf,PhysxSchema
  import omni.isaac.core.utils as prims_utils
  from omni.isaac.core.prims import XFormPrimView
  import omni.usd 
  import omni.kit 
  import omni.physx
  import omni.replicator.core as rep
  import omni.replicator.isaac as dr
  from omni.isaac.lab.sensors.ray_caster import RayCasterCfg, patterns
  from omni.isaac.lab.sensors import ContactSensorCfg,TiledCameraCfg,TiledCamera,CameraCfg,Camera
  import numpy as np
  from omni.isaac.debug_draw import _debug_draw
  import carb
  import random
  import torch


  def get_local_transform_xform(prim: Usd.Prim) -> typing.Tuple[Gf.Vec3d, np.ndarray]:
      """Return the local translation and orientation (quaternion as a numpy array) of a prim."""
      xform = UsdGeom.Xformable(prim)
      local_transformation: Gf.Matrix4d = xform.GetLocalTransformation()

      translation = local_transformation.ExtractTranslation()
      rotation = local_transformation.ExtractRotation()
      rotation = prims_utils.rotations.gf_rotation_to_np_array(rotation)
      return translation, rotation


  # Pre-defined configs
  @configclass
  class TableTopSceneCfg(InteractiveSceneCfg):
  
      # ground plane
      ground = AssetBaseCfg(
          prim_path="/World/defaultGroundPlane",
          spawn=sim_utils.GroundPlaneCfg(color=(1,1,1)),
          init_state=AssetBaseCfg.InitialStateCfg(pos=(0.0, 0.0, -1.05)),
      )

      # lights
      dome_light = AssetBaseCfg(
          prim_path="/World/Light", spawn=sim_utils.DomeLightCfg(intensity=2000.0, color=(0.85, 0.8, 0.8))
      )

      workpiece= AssetBaseCfg(
          prim_path="/World/workpiece",
          spawn=sim_utils.UsdFileCfg(
              usd_path="C:/Users/Administrator/IsaacLab/source/standalone/mydemo/data/cabinet.usd",
              scale=(0.2, 0.2, 0.2),
              visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(0.0, 0.0, 0.0),metallic=0.2),
              rigid_props=sim_utils.RigidBodyPropertiesCfg(rigid_body_enabled=False),
              #mass_props=sim_utils.MassPropertiesCfg(mass=1.0),
              collision_props=sim_utils.CollisionPropertiesCfg(collision_enabled=True),
              activate_contact_sensors=True,
              ),
          init_state=AssetBaseCfg.InitialStateCfg(pos=(0.0, 0.0, 0.30), rot=(-0.28422, 0.0, 0.95876, 0.0)),
      )
  
      target1 = AssetBaseCfg(
          prim_path="/World/Target",
          spawn=sim_utils.UsdFileCfg(
              usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/UIElements/frame_prim.usd",
              visible=False,
          ),
          init_state=AssetBaseCfg.InitialStateCfg(pos=(0.0, 0.2, 0.8)),
          debug_vis=False,
      )

      robot1 = FRANKA_PANDA_HIGH_PD_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot",
                                              init_state=ArticulationCfg.InitialStateCfg(pos=(-0.85,0,0),  ))
      ray_caster = RayCasterCfg(
          prim_path="{ENV_REGEX_NS}/Robot/panda_hand",
          update_period=1 / 30,
          offset=RayCasterCfg.OffsetCfg(pos=(0, 0, 0),
                                      rot=(0.707,0.0,-0.707,0.0)#tcp z
                                      ),
          mesh_prim_paths=["/World/workpiece"],
          attach_yaw_only=False,
          pattern_cfg=patterns.LidarPatternCfg(
              channels=50, vertical_fov_range=(-20, 20), horizontal_fov_range=(-20, 20), horizontal_res=0.8
          ),
          debug_vis=False,
          max_distance = 0.9,
      )

      camera = TiledCameraCfg(
          prim_path="{ENV_REGEX_NS}/Robot/panda_hand/front_cam",
          update_period=1/10,
          height=480,
          width=640,
          data_types=["rgb", "depth","normals"],
          spawn=sim_utils.PinholeCameraCfg(
              focal_length=24.0, focus_distance=0.1, horizontal_aperture=20.955, clipping_range=(0.1, 1.0e5)
          ),
          offset=CameraCfg.OffsetCfg(pos=(0.0, 0.0, 0.0), rot=(0.707,0.0,0.0,-0.707), convention="ros"),
      )


              
  def run_simulator(sim: sim_utils.SimulationContext, scene: InteractiveScene):
      """Runs the simulation loop."""

      # Marker for the table/stand prop.
      table_asmarker_cfg = VisualizationMarkersCfg(
          markers={
              "table": sim_utils.UsdFileCfg(
                  usd_path=f"{ISAAC_NUCLEUS_DIR}/Props/Mounts/Stand/stand_instanceable.usd",
                  scale=(2.0, 2.0, 2.0),
              )
          }
      )
      table_asmarker = VisualizationMarkers(table_asmarker_cfg.replace(prim_path="/Visuals/table"))

      # Sphere marker used to visualize the accumulated "paint" points.
      paint_marker_cfg = VisualizationMarkersCfg(
          markers={
              "paint": sim_utils.SphereCfg(
                  radius=0.01,
                  visual_material=sim_utils.PreviewSurfaceCfg(diffuse_color=(1.0, 0.0, 0.0)),
              )
          }
      )
      paint_marker = VisualizationMarkers(paint_marker_cfg.replace(prim_path="/Visuals/paint"))


      frame_marker_cfg = FRAME_MARKER_CFG.copy()
      frame_marker_cfg.markers["frame"].scale = (0.1, 0.1, 0.1)

      ee_marker1 = VisualizationMarkers(frame_marker_cfg.replace(prim_path="/Visuals/ee_current1"))

      robot1 = scene["robot1"]
      diff_ik_cfg1 = DifferentialIKControllerCfg(command_type="pose", use_relative_mode=False, ik_method="dls")
      diff_ik_controller1 = DifferentialIKController(diff_ik_cfg1, num_envs=scene.num_envs, device=sim.device)
      robot_entity_cfg1 = SceneEntityCfg("robot1", joint_names=["panda_joint.*"], body_names=["panda_hand"])
      robot_entity_cfg1.resolve(scene)
      ik_commands1 = torch.zeros(scene.num_envs, diff_ik_controller1.action_dim, device=robot1.device) 
      ee_jacobi_idx1 = robot_entity_cfg1.body_ids[0] - 1
      sim_dt = sim.get_physics_dt()
      count = 1
      joint_pos = robot1.data.default_joint_pos.clone()
      joint_vel = robot1.data.default_joint_vel.clone()
      robot1.write_joint_state_to_sim(joint_pos, joint_vel)
      robot1.reset()
      y_pos=-1.0
      root_pose_w = robot1.data.root_state_w[:, :].clone()

      all_sampled_data = torch.empty((0, 3),device=sim.device)
      triggered = False
      countdown = 10
      draw = _debug_draw.acquire_debug_draw_interface()
      colors = [(0.5, 0.0, 0.0, 1.0)]  
      sizes = [35.0] 

      while simulation_app.is_running():

          if count % 500 == 0:
              count = 0
              y_pos=-1.0
              triggered=False
              countdown = 42
              draw.clear_points()
              paint_marker.set_visibility(False)
  
              if all_sampled_data.numel() > 0:
                  #print(all_sampled_data)
                  print(all_sampled_data.shape)
                  all_sampled_data = torch.empty((0, 3),device=sim.device)

          #ik target
          translation, rotation = get_local_transform_xform(sim_utils.utils.find_first_matching_prim("/World/Target"))
          position = [translation[0], translation[1], translation[2]]
          orientation = [rotation[0], rotation[1], rotation[2], rotation[3]]
          target_tensor = torch.tensor([*position, *orientation], device=sim.device)

          #ik
          ik_commands1[:] = target_tensor
          diff_ik_controller1.reset()
          diff_ik_controller1.set_command(ik_commands1)
          jacobian = robot1.root_physx_view.get_jacobians()[:, ee_jacobi_idx1, :, robot_entity_cfg1.joint_ids]
          ee_pose_w = robot1.data.body_state_w[:, robot_entity_cfg1.body_ids[0], 0:7]
          joint_pos = robot1.data.joint_pos[:, robot_entity_cfg1.joint_ids]
          ee_pos_b, ee_quat_b = subtract_frame_transforms(
          root_pose_w[:, 0:3], root_pose_w[:, 3:7], ee_pose_w[:, 0:3], ee_pose_w[:, 3:7])
          joint_pos_des = diff_ik_controller1.compute(ee_pos_b, ee_quat_b, jacobian, joint_pos)
          robot1.set_joint_position_target(joint_pos_des, joint_ids=robot_entity_cfg1.joint_ids)


          scene.write_data_to_sim()
          # perform step
          sim.step()
          # update sim-time
          count += 1
          # update buffers
          scene.update(sim_dt)
      

          ee_marker1.visualize(ee_pose_w[:, 0:3], ee_pose_w[:, 3:7])
          table_asmarker.visualize(root_pose_w[:, 0:3], root_pose_w[:, 3:7])


          rgb_data=scene["camera"].data.output["rgb"]
          print(f"rgb mean:{rgb_data.float().mean(dim=(1, 2))}")


          if not triggered:
              countdown -= 1
              if countdown < 0:
                  triggered=True
                  paint_marker.set_visibility(True) 
                  paint_marker.visualize(translations=torch.zeros(1,3,device=sim.device))  
          
          else:
              # Filter out rays that missed (inf hits) and randomly sub-sample up to 200 hit points.
              data = scene["ray_caster"].data.ray_hits_w[0]
              inf_mask = torch.isinf(data).any(dim=1)
              filtered_data = data[~inf_mask]
              indices = torch.randperm(filtered_data.shape[0])[:200]
              sampled_data = filtered_data[indices, :]
              #all_sampled_data[:, 1] += 0.005
              all_sampled_data = torch.cat((all_sampled_data, sampled_data), dim=0)


              # #This is the display method for visualization markers.
              # if all_sampled_data.numel() > 0:    
              #     paint_marker.visualize(translations=all_sampled_data)
              #     #paint_marker.set_visibility(True)

              #This is the display method for debug_draw.
              data_list = sampled_data.tolist()
              tuple_list = [tuple(item) for item in data_list]
              N = len(tuple_list)
              if N > 0:
                  draw.draw_points(tuple_list, colors * N, sizes * N)


  def main():
      sim_cfg = sim_utils.SimulationCfg(dt=0.04)
      sim = sim_utils.SimulationContext(sim_cfg)
      sim.set_camera_view((-2.5, 2.5, 2.5), (0.0, 0.0, 0.0))
      scene_cfg = TableTopSceneCfg(num_envs=args_cli.num_envs, env_spacing=2.0)
      scene = InteractiveScene(scene_cfg)
      sim.reset()
      print("[INFO]: Setup complete...")
      run_simulator(sim, scene)


  if __name__ == "__main__":
      main()
      simulation_app.close()
