diff --git a/source/Demos/Canadarm.rst b/source/Demos/Canadarm.rst
index 93e1ff2..69ca5a6 100644
--- a/source/Demos/Canadarm.rst
+++ b/source/Demos/Canadarm.rst
@@ -1,7 +1,10 @@
+###############################
 Canadarm Demo
-=============
+###############################
 
-TODO:
-Describe how to run the Canadarm demo.
-This may be simply a link to the demo repo that provides the instructions.
+Run
+=============
+
+.. code-block:: console
+
+   $ ros2 launch canadarm canadarm.launch.py
diff --git a/source/Demos/Mars-Rover.rst b/source/Demos/Mars-Rover.rst
index bcdc603..d7d79d2 100644
--- a/source/Demos/Mars-Rover.rst
+++ b/source/Demos/Mars-Rover.rst
@@ -1,7 +1,82 @@
+#############################
 Mars Rover Demo
+#############################
+
+The Mars Rover demo is part of the Space ROS Space Robots Demos. It uses the Space ROS Docker image (*openrobotics/spaceros:latest*) as its base image.
+
+Running the Demo
+================
+
+Launch the demo:
+
+.. code-block:: console
+
+   $ ros2 launch mars_rover mars_rover.launch.py
+
+In the top left corner, click the refresh button to show the camera feed.
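When running the launch command in a fresh terminal, it can help to guard it behind a quick environment check. The wrapper below is an illustrative sketch only — it is not part of the demo repository; only the ``mars_rover mars_rover.launch.py`` arguments come from the text above:

```shell
# Illustrative wrapper around the launch command above (hypothetical helper,
# not part of the Space ROS demos).
launch_demo() {
  # Fail early with a hint when the ros2 CLI is not on PATH, e.g. when the
  # Space ROS environment has not been sourced in this terminal.
  if ! command -v ros2 >/dev/null 2>&1; then
    echo "ros2 not found; source your Space ROS environment first" >&2
    return 1
  fi
  ros2 launch "$@"
}
```

Usage: ``launch_demo mars_rover mars_rover.launch.py``.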
+
+Perform Tasks
+=============
+
+Open a new terminal and attach to the currently running container:
+
+.. code-block:: console
+
+   $ docker exec -it <container_name> bash
+
+Make sure the packages are sourced:
+
+.. code-block:: console
+
+   $ source ~/spaceros/install/setup.bash
+   $ source ~/demos_ws/install/setup.bash
+
+Available Commands
+==================
+
+Drive the rover forward:
+
+.. code-block:: console
+
+   $ ros2 service call /move_forward std_srvs/srv/Empty
+
+Stop the rover:
+
+.. code-block:: console
+
+   $ ros2 service call /move_stop std_srvs/srv/Empty
+
+Turn left:
+
+.. code-block:: console
+
+   $ ros2 service call /turn_left std_srvs/srv/Empty
+
+Turn right:
+
+.. code-block:: console
+
+   $ ros2 service call /turn_right std_srvs/srv/Empty
+
+Open the tool arm:
+
+.. code-block:: console
+
+   $ ros2 service call /open_arm std_srvs/srv/Empty
+
+Close the tool arm:
+
+.. code-block:: console
+
+   $ ros2 service call /close_arm std_srvs/srv/Empty
+
+Open the mast (camera arm):
+
+.. code-block:: console
+
+   $ ros2 service call /mast_open std_srvs/srv/Empty
-TODO:
+
+Close the mast (camera arm):
-Describe how to run the Mars rover demo.
-This may be simply a link to the demo repo that provides the instructions
+
+.. code-block:: console
+
+   $ ros2 service call /mast_close std_srvs/srv/Empty
diff --git a/source/Getting-Started/Building-From-Source.rst b/source/Getting-Started/Building-From-Source.rst
index 6c17b58..17daf23 100644
--- a/source/Getting-Started/Building-From-Source.rst
+++ b/source/Getting-Started/Building-From-Source.rst
@@ -1,6 +1,125 @@
 Building Space ROS from Source
 ==============================
 
-TODO:
+To use Space ROS in your project, you can build it in one of two ways:
+
+- Directly cloning the `ROS 2 fork repository <https://github.com/space-ros/space-ros>`_ and importing packages from the repos file.
+- Setting up a Dockerized environment (recommended).
+
+Space ROS Fork
+--------------
+
+Space ROS Docker Image and Earthly Configuration
+------------------------------------------------
+
+The Earthfile configuration in this directory facilitates building Space ROS from source.
+The generated container image is based on Ubuntu 22.04 (Jammy) and can be used with `rocker <https://github.com/osrf/rocker>`_ to add X11 and GPU passthrough.
+
+Building the Docker Image
+-------------------------
+
+The `Earthly <https://earthly.dev/get-earthly>`_ utility is required to build this image.
+
+To build the image, run:
+
+.. code-block:: console
+
+   $ ./build.sh
+
+The build process will take about 20 to 30 minutes, depending on the host computer.
+
+Running the Space ROS Docker Image in a Container
+--------------------------------------------------
+
+After building the image, you can see the newly-built image by running:
+
+.. code-block:: console
+
+   $ docker image list
+
+The output will look something like this:
+
+.. code-block:: console
+
+   $ docker image list
+   REPOSITORY       TAG       IMAGE ID       CREATED      SIZE
+   osrf/space-ros   latest    109ad8fb7460   4 days ago   2.45GB
+   ubuntu           jammy     a8780b506fa4   5 days ago   77.8MB
+
+The new image is named **osrf/space-ros:latest**.
+
+A run.sh script is provided for convenience; it runs the Space ROS image in a container:
+
+.. code-block:: console
+
+   $ ./run.sh
+
+Upon startup, the container automatically runs the entrypoint.sh script, which sources the Space ROS environment file (setup.bash). You'll now be running inside the container and should see a prompt similar to this:
+
+.. code-block:: console
+
+   spaceros-user@d10d85c68f0e:~/spaceros$
+
+At this point, you can run the ``ros2`` command-line utility to make sure everything is working:
+
+.. code-block:: console
+
+   spaceros-user@d10d85c68f0e:~/spaceros$ ros2
+   usage: ros2 [-h] [--use-python-default-buffering] Call `ros2 -h` for more detailed usage. ...
+
+   ros2 is an extensible command-line tool for ROS 2.
+
+   optional arguments:
+     -h, --help            show this help message and exit
+     --use-python-default-buffering
+                           Do not force line buffering in stdout and instead use the python default
+                           buffering, which might be affected by PYTHONUNBUFFERED/-u and depends on
+                           whatever stdout is interactive or not
+
+   Commands:
+     action     Various action related sub-commands
+     component  Various component related sub-commands
+     daemon     Various daemon related sub-commands
+     doctor     Check ROS setup and other potential issues
+     interface  Show information about ROS interfaces
+     launch     Run a launch file
+     lifecycle  Various lifecycle related sub-commands
+     multicast  Various multicast related sub-commands
+     node       Various node related sub-commands
+     param      Various param related sub-commands
+     pkg        Various package related sub-commands
+     run        Run a package specific executable
+     service    Various service related sub-commands
+     topic      Various topic related sub-commands
+     trace      Trace ROS nodes to get information on their execution
+     wtf        Use `wtf` as alias to `doctor`
+
+   Call `ros2 -h` for more detailed usage.
+
+Connecting Another Terminal to a Running Docker Container
+----------------------------------------------------------
+
+Sometimes it may be convenient to attach additional terminals to a running Docker container.
+
+With the Space ROS Docker container running, open a second host terminal and run the following command to determine the container ID:
+
+.. code-block:: console
+
+   $ docker container list
+
+The output will look something like this:
+
+.. code-block:: console
+
+   CONTAINER ID   IMAGE                   COMMAND              CREATED          STATUS          PORTS   NAMES
+   d10d85c68f0e   openrobotics/spaceros   "/entrypoint.sh …"   28 minutes ago   Up 28 minutes           inspiring_moser
+
+The container ID in this case is *d10d85c68f0e*. So, run the following command in the host terminal:
+
+.. code-block:: console
+
+   $ docker exec -it d10d85c68f0e /bin/bash --init-file "install/setup.bash"
+
+You will then be at a prompt in the same running container.
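If you attach often, you can derive the ``docker exec`` command instead of copying the container ID by hand. The helper below is a sketch of this guide's own (not part of the Space ROS tooling), and it assumes the image name shown in the listing above:

```shell
# Sketch: build the "docker exec" command for the newest running container of
# a given image, rather than copying the container ID manually.
# Assumption: the image name matches the listing shown above.
image="openrobotics/spaceros"
# `docker ps -q --filter ancestor=IMAGE` lists IDs of running containers
# started from that image, newest first; we take the first one.
attach_cmd="docker exec -it \$(docker ps -q --filter ancestor=${image} | head -n 1) /bin/bash"
echo "${attach_cmd}"
```

Running the printed command (or the command substitution directly) attaches a shell to the most recently started Space ROS container.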
+
+In place of the container ID, you can also use the automatically-generated container name ("inspiring_moser" in this case).
-Describe how to build Space ROS from source code (presumably using Earthly).
diff --git a/source/Getting-Started/Docker-Images.rst b/source/Getting-Started/Docker-Images.rst
index e1822e1..676b5ef 100644
--- a/source/Getting-Started/Docker-Images.rst
+++ b/source/Getting-Started/Docker-Images.rst
@@ -1,9 +1,28 @@
+###############################
 The Space ROS Docker Images
-===========================
+###############################
 
-TODO: Describe how to use the Space ROS docker image from Docker Hub, and any other images published to Docker Hub.
 * Docker Hub
 * Base image, MoveIt2, demos, etc.
+
+The Space ROS project provides a layered set of Docker images, so that additional functionality can be built on top of a common base.
+
+Overview
+===========================
+
+The base layer is the ``spaceros`` Docker image; see [] for how to build it.
+
+Next steps
+===========================
+
+Related Content
+===========================
+
+* Earthly tutorial
diff --git a/source/How-To-Guides/Use-Simulation-Assets.rst b/source/How-To-Guides/Use-Simulation-Assets.rst
index d9a207f..9867d41 100644
--- a/source/How-To-Guides/Use-Simulation-Assets.rst
+++ b/source/How-To-Guides/Use-Simulation-Assets.rst
@@ -1,7 +1,7 @@
+###############################
 Using Space-Related Simulation Assets
-=====================================
+###############################
 
-TODO:
-How to use the provided simulation assets in a new project.
+How to use the provided simulation assets in a new project
+============================================================
diff --git a/source/Tutorials/Run-On-RTEMS.rst b/source/Tutorials/Run-On-RTEMS.rst
index 6ef782e..4f8e2ce 100644
--- a/source/Tutorials/Run-On-RTEMS.rst
+++ b/source/Tutorials/Run-On-RTEMS.rst
@@ -1,8 +1,141 @@
 Running Space ROS on RTEMS
 ==========================
 
-TODO:
 * Provide some background on RTEMS
-* Describe how to run Space ROS on RTEMS
+* How to run Space ROS on RTEMS
+
+**Goal:** Create a hard real-time embedded application that runs on a Xilinx Zynq SoC FPGA and communicates with Space ROS.
+
+This demonstration runs on `QEMU <https://www.qemu.org>`_, the open-source CPU and system emulator. This is for several reasons:
+
+* Development cycles in QEMU are much faster and less painful than on hardware.
+* Given the state of the semiconductor industry at the time of writing (late 2022), it is nearly impossible to obtain Zynq development boards.
+* QEMU-based work requires no upfront hardware costs or purchasing delays.
+
+There are a number of reasonable choices available when selecting a real-time OS (RTOS).
+We selected `RTEMS <https://www.rtems.org/>`_ because it is fully open-source and has a long history of successful deployments in space applications.
+
+Build
+-------------
+
+To simplify collecting and compiling dependencies on a wide range of systems, we have created a Docker container that contains everything.
+You will need to install Docker on your system first, for example using ``sudo apt install docker.io``.
+Then, run this `script <https://github.com/space-ros/docker/blob/zynq_rtems/zynq_rtems/build_dependencies.sh>`_:
+
+.. code-block:: console
+
+   cd /path/to/zynq_rtems
+   ./build_dependencies.sh
+
+This will build the `zynq_rtems Dockerfile <https://github.com/space-ros/docker/blob/zynq_rtems_zenoh_pico/zynq_rtems/Dockerfile>`_, which builds QEMU, a cross-compile toolchain for the ARMv8 processor inside the Zynq SoC, and RTEMS from source in the container.
+This will typically take at least 10 minutes, and can take much longer if either your network connection or compute resources are limited.
+
+Next, we will use this "container full of dependencies" to compile a sample application:
+
+.. code-block:: console
+
+   cd /path/to/zynq_rtems
+   ./compile_demos.sh
+
+The demo application and its cross-compiled RTEMS kernel will be accessible both "inside" and "outside" the container, for ease of source control, editing cycles, and debugging.
+The build products will land in ``zynq_rtems/hello_zenoh/build``.
+
+Run
+-------------
+
+The emulated system that will run inside QEMU needs a way to talk to a virtual network segment on the host machine.
+We'll create a TAP device for this.
+The following script will set this up, creating a virtual ``10.0.42.x`` subnet for a device named ``tap0``:
+
+.. code-block:: console
+
+   ./start_network_tap.sh
+
+We will need three terminals for this demo:
+
+* Zenoh router
+* Zenoh subscriber
+* Zenoh-Pico publisher (in RTEMS in QEMU)
+
+First, we will start a Zenoh router:
+
+.. code-block:: console
+
+   cd /path/to/zynq_rtems
+   cd hello_zenoh
+   ./run_zenoh_router
+
+This will print a bunch of startup information and then continue running silently, waiting for inbound Zenoh traffic. Leave this terminal running.
+
+In the second terminal, we'll run the Zenoh subscriber example:
+
+.. code-block:: console
+
+   cd /path/to/zynq_rtems
+   cd hello_zenoh
+   ./run_zenoh_subscriber
+
+In the third terminal, we will run the RTEMS-based application, which will communicate with the Zenoh router and, through it, with the Zenoh subscriber.
+The following script will run QEMU inside the container, with a volume-mount of the ``hello_zenoh`` demo application so that the build products from the previous step are available to the QEMU that was built inside the container.
+
+.. code-block:: console
+
+   cd /path/to/zynq_rtems
+   cd hello_zenoh
+   ./run_rtems.sh
+
+The terminal should print a bunch of information about the various emulated Zynq network interfaces and their routing information.
+After that, it should contact the ``zenohd`` instance running in the other terminal. It should print something like this:
+
+.. code-block:: console
+
+   Opening zenoh session...
+   Zenoh session opened.
+   Own ID: 0000000000000000F45E7E462568C23B
+   Routers IDs:
+     B2FE444C3B454E27BCB11DF83120D927
+   Peers IDs:
+   Stopping read and lease tasks...
+   sending a few messages...
+   publishing: Hello, world! 0
+   publishing: Hello, world! 1
+   publishing: Hello, world! 2
+   publishing: Hello, world! 3
+   publishing: Hello, world! 4
+   publishing: Hello, world! 5
+   publishing: Hello, world! 6
+   publishing: Hello, world! 7
+   publishing: Hello, world! 8
+   publishing: Hello, world! 9
+   Closing zenoh session...
+   Done. Goodbye.
+
+The second terminal, running the Zenoh example subscriber, should print something like this:
+
+.. code-block:: console
+
+   Declaring Subscriber on 'example'...
+   [2022-12-06T21:41:11Z DEBUG zenoh::net::routing::resource] Register resource example
+   [2022-12-06T21:41:11Z DEBUG zenoh::net::routing::pubsub] Register client subscription
+   [2022-12-06T21:41:11Z DEBUG zenoh::net::routing::pubsub] Register client subscription example
+   [2022-12-06T21:41:11Z DEBUG zenoh::net::routing::pubsub] Register subscription example for Face{0, 5F6D54C4366D42EDB367F17A5A2CACCD}
+   Enter 'q' to quit...
+   >> [Subscriber] Received PUT ('example': 'Hello, world! 0')
+   >> [Subscriber] Received PUT ('example': 'Hello, world! 1')
+   >> [Subscriber] Received PUT ('example': 'Hello, world! 2')
+   >> [Subscriber] Received PUT ('example': 'Hello, world! 3')
+   >> [Subscriber] Received PUT ('example': 'Hello, world! 4')
+   >> [Subscriber] Received PUT ('example': 'Hello, world! 5')
+   >> [Subscriber] Received PUT ('example': 'Hello, world! 6')
+   >> [Subscriber] Received PUT ('example': 'Hello, world! 7')
+   >> [Subscriber] Received PUT ('example': 'Hello, world! 8')
+   >> [Subscriber] Received PUT ('example': 'Hello, world! 9')
+
+After that output, at shutdown RTEMS will display the various threads that were running and their memory usage.
+
+This shows that the Zenoh-Pico client running in RTEMS successfully reached the Zenoh router running natively on the host. Success!
+
+Clean up
+-------------
+
+If you would like, you can now remove the network tap device that we created earlier:
+
+.. code-block:: console
+
+   zynq_rtems/stop_network_tap.sh
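For quick reference, the whole tutorial condenses into an ordered checklist. The sketch below only prints the steps rather than executing them, since the router, subscriber, and QEMU each need their own terminal; script names are taken verbatim from the text above, and ``/path/to/zynq_rtems`` stands in for your checkout location:

```shell
# Dry-run summary of the tutorial above: print each step, in order, relative
# to /path/to/zynq_rtems. The last three scripts run in separate terminals.
steps="./build_dependencies.sh
./compile_demos.sh
./start_network_tap.sh
hello_zenoh/run_zenoh_router
hello_zenoh/run_zenoh_subscriber
hello_zenoh/run_rtems.sh
./stop_network_tap.sh"

i=1
printf '%s\n' "$steps" | while IFS= read -r step; do
  echo "step $i: $step"
  i=$((i + 1))
done
```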