How can we use keyframes to initialize the Panda robot arm posture? #868

Open
YZY14606 opened this issue Feb 21, 2025 · 12 comments

@YZY14606

In previous discussions, it was mentioned that calling self.agent.robot.set_qpos(self.agent.keyframes["rest"]) inside the _initialize_episode function can initialize the Panda robot's pose. However, when I run this, I encounter many errors and cannot initialize the pose successfully.
Is this direct method correct? What can I do to use keyframes to initialize the Panda robot arm's posture?

@StoneT2000
Member

This method is correct. Do you have a script or an example image of what happened? I just tried this myself and it looks fine for me.

@YZY14606
Author

Thanks for your help! Here is my _initialize_episode code:

def _initialize_episode(self, env_idx: torch.Tensor, options: dict):
        # use the torch.device context manager to automatically create tensors on CPU or CUDA depending on self.device, the device the environment runs on
        with torch.device(self.device):
            # the initialization functions where you as a user place all the objects and initialize their properties
            # are designed to support partial resets, where you generate initial state for a subset of the environments.
            # this is done by using the env_idx variable, which also tells you the batch size
            b = len(env_idx)
            # when using scene builders, you must always call .initialize on them so they can set the correct poses of objects in the prebuilt scene
            # note that the table scene is built such that z=0 is the surface of the table.
            self.table_scene.initialize(env_idx)

            # here we write some randomization code that randomizes the x, y position of the cube we are pushing in the range [-0.1, -0.1] to [0.1, 0.1]
            xyz = torch.zeros((b, 3))
            xyz[..., :2] = torch.rand((b, 2)) * 0.2 - 0.1
            xyz[..., 2] = 0.001
            q = euler2quat(np.pi / 2, 0, 0)
            # we can then create a pose object using Pose.create_from_pq to then set the cube pose with. Note that even though our quaternion
            # is not batched, Pose.create_from_pq will automatically batch p or q accordingly
            # furthermore, notice how here we do not even use env_idx as a variable to say set the pose for objects in desired
            # environments. This is because internally any calls to set data on the GPU buffer (e.g. set_pose, set_linear_velocity etc.)
            # automatically are masked so that you can only set data on objects in environments that are meant to be initialized
            nxyz = torch.zeros((b, 3))
            nxyz = xyz + torch.tensor([0.15, 0.3, 0])
            obj_pose = Pose.create_from_pq(nxyz, q=q)
            self.obj.set_pose(obj_pose)

            # here we set the location of that red/white target (the goal region). In particular here, we set the position to be in front of the cube
            # and we further rotate 90 degrees on the y-axis to make the target object face up
            target_region_xyz = xyz + torch.tensor([0.1 + self.goal_radius, 0, 0])
            # set a little bit above 0 so the target is sitting on the table
            target_region_xyz[..., 2] = 1e-3
            self.goal_region.set_pose(
                Pose.create_from_pq(
                    p=target_region_xyz,
                    q=euler2quat(0, np.pi / 2, 0),
                )
            )
            self.agent.robot.set_qpos(self.agent.keyframes["rest"])

The code in the script "panda" is identical to the original code; I have not made any modifications.

The following are my error messages:

TypeError: (): incompatible function arguments. The following argument types are supported:
    1. (arg0: sapien.pysapien.physx.PhysxArticulation, arg1: numpy.ndarray[numpy.float32[m, 1]]) -> None

Invoked with: <sapien.pysapien.physx.PhysxArticulation object at 0x7fbc09af74f0>, array(Keyframe(pose=Pose([0, 0, 0], [1, 0, 0, 0]), qpos=array([ 0.        ,  0.39269908,  0.        , -1.96349541,  0.        ,
        2.35619449,  0.78539816,  0.04      ,  0.04      ]), qvel=None),
      dtype=object)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/yzy/nus/robot/ceshi.py", line 167, in <module>
    env = gym.make(
          ^^^^^^^^^
  File "/home/yzy/miniconda3/envs/nus_robot/lib/python3.12/site-packages/gymnasium/envs/registration.py", line 814, in make
    raise type(e)(
TypeError: (): incompatible function arguments. The following argument types are supported:
    1. (arg0: sapien.pysapien.physx.PhysxArticulation, arg1: numpy.ndarray[numpy.float32[m, 1]]) -> None

Invoked with: <sapien.pysapien.physx.PhysxArticulation object at 0x7fbc09af74f0>, array(Keyframe(pose=Pose([0, 0, 0], [1, 0, 0, 0]), qpos=array([ 0.        ,  0.39269908,  0.        , -1.96349541,  0.        ,
        2.35619449,  0.78539816,  0.04      ,  0.04      ]), qvel=None),
      dtype=object) was raised from the environment creator for PushCup-yzy with kwargs ({'num_envs': 1, 'obs_mode': 'pointcloud', 'control_mode': 'pd_ee_delta_pose', 'render_mode': 'human'})

@StoneT2000
Member

A keyframe object is a dataclass consisting of a root pose for the root link and the qpos. Do self.agent.robot.set_qpos(self.agent.keyframes["rest"].qpos) instead.
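
For reference, a minimal sketch of the corrected initialization, assuming a standard ManiSkill agent that defines a "rest" keyframe:

    keyframe = self.agent.keyframes["rest"]
    # a Keyframe is a dataclass holding pose (the root link pose), qpos, and optionally qvel
    self.agent.robot.set_qpos(keyframe.qpos)  # pass the qpos array, not the Keyframe itself
    self.agent.robot.set_pose(keyframe.pose)  # optionally also restore the root pose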

@YZY14606
Author

YZY14606 commented Feb 23, 2025

Thanks for your help. But why is the robot's gripper not in a closed state when I set the last two dimensions of qpos in the keyframe to 0? No matter how I modify the initial values, I cannot initialize the Panda robot's gripper in a closed state.

@StoneT2000
Member

Did you take an action perhaps?

@YZY14606
Author

Yes. But I believe the actions in my simulator should not affect the closing of the gripper.
My control mode is set to control_mode="pd_ee_delta_pose", and the action has seven dimensions. I set the last four dimensions of the action to 0. In this control mode, the last dimension of the action should control the gripper's movement, right? I set it to 0 as well, but the gripper still cannot close.

@YZY14606
Author

Meanwhile, is there a way for us to initialize the Panda robot's initial pose by defining the end-effector pose? I saw some code in another discussion:

arm_ee_delta_pose_controller = self.agent.controller.controllers['arm']
arm_target_qpos = arm_ee_delta_pose_controller.kinematics.compute_ik(
    target_pose=Pose.create_from_pq(p=[0, 0, 0.1], q=[1, 0, 0, 0]),
    q0=arm_ee_delta_pose_controller.articulation.get_qpos(),
)

It uses pytorch_kinematics and causes the simulation to run slower. Is there a solution now?

@StoneT2000
Member

> Yes. But I believe the actions in my simulator should not affect the closing of the gripper. My control mode is set to control_mode="pd_ee_delta_pose", and the action has seven dimensions. I set the last four dimensions of the action to 0. In this control mode, the last dimension of the action should control the gripper's movement, right? I set it to 0 as well, but the gripper still cannot close.

You should set the last dimension to either -1 or 1 to open/close (I forget which is which).
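
As an illustrative sketch (assuming the usual pd_ee_delta_pose layout, where the 7-dim action is [dx, dy, dz, d-rot-x, d-rot-y, d-rot-z, gripper], and that the gripper dimension is a normalized position target in [-1, 1] rather than a no-op at 0):

    import numpy as np

    action = np.zeros(7, dtype=np.float32)  # zero arm deltas: hold the end-effector still
    action[-1] = -1.0  # drive the gripper toward one joint limit; check in your env whether -1 closes or opens
    obs, reward, terminated, truncated, info = env.step(action)

Setting the gripper dimension to 0 would then command a half-way target between the limits, which would explain why the gripper never fully closes.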

@StoneT2000
Member

> Meanwhile, is there a way for us to initialize the Panda robot's initial pose by defining the end-effector pose? I saw some code in another discussion:
>
>     arm_ee_delta_pose_controller = self.agent.controller.controllers['arm']
>     arm_target_qpos = arm_ee_delta_pose_controller.kinematics.compute_ik(
>         target_pose=Pose.create_from_pq(p=[0, 0, 0.1], q=[1, 0, 0, 0]),
>         q0=arm_ee_delta_pose_controller.articulation.get_qpos(),
>     )
>
> It uses pytorch_kinematics and causes the simulation to run slower. Is there a solution now?

Overall it shouldn't be that slow, since this only runs during episode initialization?

compute_ik was tuned by default for accuracy, to align with the CPU simulation more closely (the Pinocchio CLIK solver uses quite a few iterations). It is possible to lower the number of optimization iterations, but I did not expose that API in the compute_ik function. I presume you are using the GPU simulation? If so, I can maybe expose an API to let users modify the IK configs to run faster but less accurately.

@YZY14606
Author

> > Yes. But I believe the actions in my simulator should not affect the closing of the gripper. My control mode is set to control_mode="pd_ee_delta_pose", and the action has seven dimensions. I set the last four dimensions of the action to 0. In this control mode, the last dimension of the action should control the gripper's movement, right? I set it to 0 as well, but the gripper still cannot close.
>
> You should set the last dimension to either -1 or 1 to open/close (I forget which is which).

Okay, thanks for your reply!

@YZY14606
Author

> > Meanwhile, is there a way for us to initialize the Panda robot's initial pose by defining the end-effector pose? I saw some code in another discussion:
> >
> >     arm_ee_delta_pose_controller = self.agent.controller.controllers['arm']
> >     arm_target_qpos = arm_ee_delta_pose_controller.kinematics.compute_ik(
> >         target_pose=Pose.create_from_pq(p=[0, 0, 0.1], q=[1, 0, 0, 0]),
> >         q0=arm_ee_delta_pose_controller.articulation.get_qpos(),
> >     )
> >
> > It uses pytorch_kinematics and causes the simulation to run slower. Is there a solution now?
>
> Overall it shouldn't be that slow, since this only runs during episode initialization?
>
> compute_ik was tuned by default for accuracy, to align with the CPU simulation more closely (the Pinocchio CLIK solver uses quite a few iterations). It is possible to lower the number of optimization iterations, but I did not expose that API in the compute_ik function. I presume you are using the GPU simulation? If so, I can maybe expose an API to let users modify the IK configs to run faster but less accurately.

I previously used CPU simulation, and this function worked. I am just curious about the performance of this function on the GPU. Thank you very much for your detailed reply.

@YZY14606
Author

YZY14606 commented Feb 26, 2025

However, I have recently encountered some issues when using GPU simulation for robot pose initialization. The problems are as follows.

1. I used the following code to initialize the robot pose:

       panda_kinematic = kinematics.Kinematics(
           urdf_path=f"{PACKAGE_ASSET_DIR}/robots/panda/panda_v2.urdf",
           end_link_name="panda_hand_tcp",
           articulation=self.agent.robot,
           active_joint_indices=torch.tensor([0, 1, 2, 3, 4, 5, 6]),  # note: a single pair of brackets
       )
       arm_target_qpos = panda_kinematic.compute_ik(
           target_pose=Pose.create_from_pq(p=robot_ee_p, q=robot_ee_q),
           q0=self.agent.robot.get_qpos(),
       )
       gripper_qpos = torch.zeros(b, 2) - 1
       new_tensor = torch.cat((arm_target_qpos, gripper_qpos), dim=1)
       self.agent.robot.set_qpos(new_tensor)

   However, when I run the code, the simulator throws the following error:

       raise ValueError(
       ValueError: initial_config must have shape (1, 1, 7) or (1, 7)

2. To deal with this error, I modified active_joint_indices = torch.tensor([0,1,2,3,4,5,6]) to active_joint_indices = torch.tensor([[0,1,2,3,4,5,6]]) (two pairs of brackets). However, the simulator then throws RuntimeError: Boolean value of Tensor with more than one value is ambiguous. I am currently very confused about using compute_ik for robot initialization on the GPU. What can we do to solve this problem?
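
One hedged guess (an assumption about the shape check, not a confirmed fix): the (1, 7) in the first error suggests the IK solver expects q0 restricted to the 7 arm joints, while self.agent.robot.get_qpos() returns all 9 Panda joints (7 arm + 2 gripper). Slicing off the gripper joints before the call may be worth trying:

    q0_full = self.agent.robot.get_qpos()  # shape (b, 9) for Panda: 7 arm joints + 2 gripper joints
    arm_target_qpos = panda_kinematic.compute_ik(
        target_pose=Pose.create_from_pq(p=robot_ee_p, q=robot_ee_q),
        q0=q0_full[:, :7],  # keep only the 7 arm joints that the solver moves
    )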
