This example demonstrates how to compute extrinsic parameters (pose of the camera) for multiple cameras.
key | action |
---|---|
1 ... 9 | select camera |
q | quit |
p | compute the pose of the selected camera and save the results |
Properties of the checkerboard used for calibration and the calibration data directory can be changed in `config.py`.

Run `main.py` with Python 3.
Measure the pose of the camera

Select a camera, then press the `p` key to estimate its pose. An overlay showing the coordinate system will appear; press any key to dismiss it. The pose of the camera will be saved to a file (`calibration_data/extrinsics_<camera_mxid>.npz` by default).

Repeat for every camera.
OpenCV's `findChessboardCorners` is used to find the checkerboard, and `solvePnP` is used to compute the translation and rotation from the camera to the checkerboard.
The results can be read with NumPy's `load` function:

```python
import numpy as np

extrinsics = np.load("calibration_data/extrinsics_19443010F1CCF41200.npz")
print("World to cam: \n", extrinsics["world_to_cam"])
print("Cam to world: \n", extrinsics["cam_to_world"])
print("Rotation vector: \n", extrinsics["rot_vec"])
print("Translation vector: \n", extrinsics["trans_vec"])
```
To transform a point from the camera coordinate system to the world coordinate system (or vice versa), simply multiply the point by the appropriate transformation matrix:

```python
p_cam = np.array([x, y, z, 1])
p_world = extrinsics["cam_to_world"] @ p_cam
```
To get the position of the camera in the world coordinate system, transform the all-zeros vector to world space:

```python
camera_pos = extrinsics["cam_to_world"] @ np.array([0, 0, 0, 1])
```
Notice how the points have an extra coordinate. This is called a homogeneous component and enables translation as well as rotation with a single transformation matrix.
To reproject points written in the world coordinate system to the camera image, we need the intrinsic matrix in addition to the previously described extrinsic parameters. To get it, use:

```python
import depthai as dai

calibration = dai.Device().readCalibration()
intrinsic_mat = calibration.getCameraIntrinsics(dai.CameraBoardSocket.RGB, width, height)
```

where `width` and `height` represent the image width and height in pixels.
```python
points, _ = cv2.projectPoints(
    np.float64([[x1, y1, z1], [x2, y2, z2], [x3, y3, z3], [x4, y4, z4], ...]),
    extrinsics["rot_vec"], extrinsics["trans_vec"], intrinsic_mat, None
)
```
The resulting `points` array contains positions in pixels.