Camera feed and depth camera
These features only work on Linux with ROS installed. Additionally, you should have the following packages installed on your system.
cv-bridge # Can be installed via apt install ros-<version>-cv-bridge
image-transport # Can be installed via apt install ros-<version>-image-transport
Once these packages are installed, you can rebuild ambf to automatically enable support.
Now you can edit your world.yaml file that contains the cameras for which you want to enable camera feed and/or depth point cloud publishing.
NOTE: Cameras can be defined either in a world file or in regular ADF files (see world_stereo.yaml and stereo_cameras.yaml). The world filepath is defined in the launch file. If you are using a different launch file by providing the --launch_file argument, you are most likely also using a different world file as defined in that launch file, so always make sure that you set the flags below on the correct camera for it to publish its video and/or depth.
The default world file is located here: https://github.com/WPI-AIM/ambf/blob/ambf-1.0/ambf_models/descriptions/world/world.yaml
Inspecting its contents:
enclosure size: {length: 10.0, width: 10.0, height: 3.0}
lights: [light1]
cameras: [camera1] # Load a camera called camera1, which is defined later in this file
environment: "./checkered_floor.yaml"
namespace: /ambf/env/
max iterations: 50
gravity: {x: 0.0, y: 0.0, z: -9.81}
...
...
camera1:
  namespace: cameras/
  name: default_camera
  location: {x: 4.0, y: 0.0, z: 2.0}
  look at: {x: 0.0, y: 0.0, z: -0.5}
  up: {x: 0.0, y: 0.0, z: 1.0}
  clipping plane: {near: 0.1, far: 10.0}
  field view angle: 0.8
  monitor: 0
  # multipass: True
  # publish image: True
  # publish image interval: 1 # Publish every nth scene update
  # publish depth: True
  # publish depth interval: 10 # Publish every nth scene update
  # publish image resolution: {width: 1920, height: 1080}
By uncommenting/adding the following line, we can enable camera feed publishing:
publish image: True
# publish image interval: 1 # Publish every nth scene update. Default 1
Similarly, by uncommenting this field, we can enable depth point cloud computation and publishing:
publish depth: True
# publish depth interval: 10 # Publish every nth scene update. Default 10
The camera feed is published to the following topic:
/<namespace>/<camera_name>/ImageData/*
and the depth point cloud is published to the following topic:
/<namespace>/<camera_name>/DepthData/
You can also view either topic in RViz.
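For example, here is a minimal Python (ROS 1) sketch of a subscriber for the camera feed. It assumes the default world.yaml namespaces, so the full topic becomes /ambf/env/cameras/default_camera/ImageData, and that the feed arrives as a sensor_msgs/Image message; the topic and node names are illustrative, so verify the exact names with rostopic list and adjust for your setup.
#!/usr/bin/env python
# Minimal sketch (not part of AMBF itself): subscribe to the camera feed and
# convert each frame to an OpenCV image with cv_bridge. The topic name below
# assumes the default world.yaml namespaces (/ambf/env/ + cameras/ +
# default_camera); verify the exact name with `rostopic list`.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_cb(msg):
    # Convert the ROS Image message to a BGR OpenCV array
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    rospy.loginfo('Received a %dx%d frame', frame.shape[1], frame.shape[0])
    # `frame` can now be processed or displayed with OpenCV

if __name__ == '__main__':
    rospy.init_node('ambf_camera_feed_listener')
    rospy.Subscriber('/ambf/env/cameras/default_camera/ImageData', Image, image_cb)
    rospy.spin()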
The default resolution of the published images and depth point cloud is 640x480 pixels. To change it, set the following field by uncommenting/adding it to the camera description:
publish image resolution: {width: 1920, height: 1080} # In pixels
In this case, the published image from this camera will be 1920x1080 pixels.
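As a quick sanity check, a similar sketch can subscribe to the depth topic and print the point cloud dimensions, which should match the configured resolution. It assumes the depth is published as a sensor_msgs/PointCloud2 and uses the same assumed default namespaces as the example above.
#!/usr/bin/env python
# Minimal sketch (not part of AMBF itself): subscribe to the depth point cloud
# and print its dimensions, which should match the configured
# `publish image resolution` (640x480 by default). Assumes the depth is
# published as a sensor_msgs/PointCloud2; verify the topic with `rostopic list`.
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def depth_cb(cloud):
    rospy.loginfo('Point cloud dimensions: %d x %d', cloud.width, cloud.height)
    # Read a few XYZ points to confirm the data looks sensible
    for i, (x, y, z) in enumerate(pc2.read_points(cloud, field_names=('x', 'y', 'z'), skip_nans=True)):
        if i >= 3:
            break
        rospy.loginfo('point %d: (%.3f, %.3f, %.3f)', i, x, y, z)

if __name__ == '__main__':
    rospy.init_node('ambf_depth_listener')
    rospy.Subscriber('/ambf/env/cameras/default_camera/DepthData', PointCloud2, depth_cb)
    rospy.spin()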