RTX Code Release (PI-4): T-Route RnR Enhancements to PI-3 Deliverables through v2.2 Hydrofabric integration #850

Open · wants to merge 2 commits into base: master
54 changes: 52 additions & 2 deletions LICENSE
@@ -1,7 +1,57 @@
This project contains both software components and data components, each under their respective licenses:

SOFTWARE LICENSE
-----------------

Copyright 2024 Raytheon Company

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

Licensed under: https://opensource.org/license/bsd-2-clause

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

All rights reserved. Based on Government sponsored work under contract GS-35F-204GA.

-----------------

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

-----------------

"Software code created by U.S. Government employees is not subject to copyright
in the United States (17 U.S.C. §105). The United States/Department of Commerce
reserve all rights to seek and obtain copyright protection in countries other
than the United States for Software authored in its entirety by the Department
of Commerce. To this end, the Department of Commerce hereby grants to Recipient
a royalty-free, nonexclusive license to use, copy, and create derivative works
of the Software outside of the United States."

DATA LICENSE
-----------------

Geospatial data files contained within this repository were derived from the Cloud Native Water Resource Modeling Hydrofabric dataset and are made available under the Open Database License (ODbL).

You are free to use, copy, distribute, transmit and adapt our data, as long as you credit Raytheon and its contributors.

Citation:
Johnson, J. M. (2022). National Hydrologic Geospatial Fabric (hydrofabric) for the Next Generation (NextGen) Hydrologic Modeling Framework,
HydroShare http://www.hydroshare.org/resource/129787b468aa4d55ace7b124ed27dbde

Any rights in individual contents of the dataset are licensed under the Database Contents License.

For the complete ODbL license text, visit: http://opendatacommons.org/licenses/odbl/1.0/
For the complete Database Contents License, visit: http://opendatacommons.org/licenses/dbcl/1.0/
102 changes: 102 additions & 0 deletions doc/api/api_docs.md
@@ -0,0 +1,102 @@
# T-Route FastAPI

This document explains the T-Route FastAPI implementation, which runs via Docker Compose with shared volumes.

## Why an API?

T-Route is used in many contexts for hydrological river routing:
- NGEN
- Scientific Python Projects
- Replace and Route (RnR)

In the latest PR for RnR (https://github.com/NOAA-OWP/hydrovis/pull/865), there is a requirement to run T-Route as a service. The service needs an easy way to dynamically create config files, restart flow from initial conditions, and run T-Route. To satisfy this requirement, a FastAPI endpoint was created in `/src/app`, along with code to dynamically create T-Route endpoints.

## Why use shared volumes?

Since T-Route runs in a Docker container, the directories on your machine must be connected to the directories within the container. The following folders are shared by default:
- `data/rfc_channel_forcings`
  - For storing RnR RFC channel domain forcing files (T-Route inputs)
- `data/rfc_geopackage_data`
  - For storing HYFeatures gpkg files
  - Indexed by the NHD COMID, also called hf_id. Ex: 2930769 is the hf_id for the CAGM7 RFC forecast point.
- `data/troute_restart`
  - For storing T-Route restart files
- `data/troute_output`
  - For outputting results from the T-Route container
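If a host-side source directory is missing when the container starts, the bind mount fails or Docker creates it root-owned. A minimal helper to pre-create the defaults listed above (a sketch; adjust the paths if your env file points elsewhere):

```python
from pathlib import Path

# Default host-side bind-mount sources (see the list above).
SHARED_DIRS = [
    "data/rfc_channel_forcings",
    "data/rfc_geopackage_data",
    "data/troute_restart",
    "data/troute_output",
]

def ensure_shared_dirs(root: str = ".") -> list:
    """Create any missing shared-volume directories under the repo root.

    Returns the list of directories that had to be created.
    """
    created = []
    for rel in SHARED_DIRS:
        path = Path(root) / rel
        if not path.exists():
            path.mkdir(parents=True)
            created.append(path)
    return created
```

Running this once from the repository root before `docker compose up` guarantees the mounts resolve to directories you own.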

## Quickstart
1. From the root directory, run:
```shell
docker compose --env-file docker/test_troute_api.env -f docker/compose.yaml up --build
```

This starts the T-Route container and runs the API on localhost:8004. To view the API spec and Swagger docs, visit localhost:8004/docs.
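Before submitting requests, you can wait for the service to come up. A sketch of a generic readiness poller; the `/health` path is the one the container healthcheck probes, and the default cadence here mirrors its 30 s interval and 3 retries:

```python
import time
from urllib.request import urlopen
from urllib.error import URLError

def wait_healthy(url, retries=3, interval=30.0, probe=None):
    """Poll `url` until it answers 200, up to `retries` attempts.

    `probe` is injectable for testing; by default it issues a real GET.
    """
    def default_probe(u):
        with urlopen(u, timeout=5) as resp:
            return resp.status == 200

    probe = probe or default_probe
    for attempt in range(retries):
        try:
            if probe(url):
                return True
        except (URLError, OSError):
            pass
        if attempt < retries - 1:
            time.sleep(interval)
    return False

# wait_healthy("http://localhost:8004/health")  # True once the API is up
```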

2. Submit a request
```shell
curl -X 'GET' \
'http://localhost:8004/api/v1/flow_routing/v4/?lid=CAGM7&feature_id=2930769&hy_id=1074884&initial_start=0&start_time=2024-08-24T00%3A00%3A00&num_forecast_days=5' \
-H 'accept: application/json'
```

This curl command calls the flow_routing v4 endpoint `api/v1/flow_routing/v4/` with the following parameters:
```
lid=CAGM7
feature_id=2930769
hy_id=1074884
initial_start=0
start_time=2024-08-24T00:00:00
num_forecast_days=5
```

which tell T-Route which location folder to look at, the feature ID of the gpkg to read, the HY feature ID where flow starts, the initial flow for the restart, the start time of the routing run, and the number of days to forecast.
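The same request can be composed programmatically. A standard-library-only sketch, assuming the default localhost:8004 mapping from the quickstart:

```python
from urllib.parse import urlencode

BASE_URL = "http://localhost:8004/api/v1/flow_routing/v4/"

def build_routing_url(lid, feature_id, hy_id, initial_start,
                      start_time, num_forecast_days):
    """Compose the flow_routing v4 GET URL from its query parameters."""
    params = {
        "lid": lid,
        "feature_id": feature_id,
        "hy_id": hy_id,
        "initial_start": initial_start,
        "start_time": start_time,
        "num_forecast_days": num_forecast_days,
    }
    return f"{BASE_URL}?{urlencode(params)}"

url = build_routing_url("CAGM7", 2930769, 1074884, 0,
                        "2024-08-24T00:00:00", 5)
# urllib.request.urlopen(url) would submit the request once the API is up.
```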

You can also supply these arguments through the Swagger UI:

![alt text](swagger_endpoints.png)

A successful routing run returns status 200:
```json
{
"message": "T-Route run successfully",
"lid": "CAGM7",
"feature_id": "2930769"
}
```

and a failed run returns an internal server error (status 500).
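Client code can branch on the status code. A hedged sketch that handles the JSON payload shown above:

```python
import json

def summarize_response(status, body):
    """Return a one-line summary of a flow_routing response."""
    if status == 200:
        payload = json.loads(body)
        return (f"OK: {payload['message']} "
                f"(lid={payload['lid']}, feature_id={payload['feature_id']})")
    if status == 500:
        return "ERROR: internal server error; check the container logs"
    return f"UNEXPECTED: status {status}"

# The success payload documented above:
example = ('{"message": "T-Route run successfully", '
           '"lid": "CAGM7", "feature_id": "2930769"}')
```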

## Building and pushing to a container registry

To ensure Replace and Route is using the correct version of T-Route, it is recommended that a Docker container be built and pushed to a registry (Docker Hub, GitHub Container Registry, etc.). To do this manually for the GitHub Container Registry, run the following commands in a terminal.

```shell
docker login --username ${GH_USERNAME} --password ${GH_TOKEN} ghcr.io
```
- This command logs the user into the GitHub Container Registry using their credentials

```shell
docker build -t ghcr.io/NOAA-OWP/t-route/t-route-api:${TAG} -f docker/Dockerfile.troute_api .
```
- This command builds the T-Route API container using a defined version `${TAG}`

```shell
docker push ghcr.io/NOAA-OWP/t-route/t-route-api:${TAG}
```
- This command pushes the built T-Route API container to the NOAA-OWP/t-route container registry


The following environment variables are used:
- `${GH_USERNAME}`
  - your GitHub username
- `${GH_TOKEN}`
  - your GitHub access token
- `${TAG}`
  - the version tag
  - ex: 0.0.2

If you want to build from a forked version, change the container registry path (`/NOAA-OWP/t-route/`) to your account's container registry.
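As a sanity check, the image reference is just `ghcr.io/<owner>/t-route/t-route-api:<tag>`. A tiny helper sketch, with the `owner` parameter swapped for your own account when building from a fork:

```python
def image_ref(tag, owner="NOAA-OWP"):
    """Build the fully qualified image reference for the T-Route API container."""
    return f"ghcr.io/{owner}/t-route/t-route-api:{tag}"
```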

## Testing

See `test/api/README.md` for testing information.
Binary file added doc/api/api_spec.png
Binary file added doc/api/swagger_endpoints.png
38 changes: 38 additions & 0 deletions doc/docker/dockerfile_notebook.md
@@ -0,0 +1,38 @@
# Dockerfile.Notebook

This document describes the Docker setup for running JupyterLab with mounted volumes for development and analysis.

## Container Overview

The container provides a JupyterLab environment with:
- Python environment for data analysis
- Web interface accessible via port 8000

This container is a great way to run examples and integration tests.

## Docker Configuration

### Dockerfile
The Dockerfile sets up:
- Base Python environment
- JupyterLab installation
- Volume mount points for data and code
- Port 8000 exposed for web interface
- Working directory configuration

### Getting Started

Build:
```bash
docker build -t troute-notebook -f docker/Dockerfile.notebook .
```

Run:
```bash
docker run -p 8000:8000 troute-notebook
```

Then, open the URL from the container output in your browser. It will look like:
```
http://127.0.0.1:8000/lab?token=<token>
```
24 changes: 24 additions & 0 deletions docker/Dockerfile.compose
@@ -0,0 +1,24 @@
FROM rockylinux:9.2 AS rocky-base
RUN yum install -y epel-release
RUN yum install -y netcdf netcdf-fortran netcdf-fortran-devel netcdf-openmpi

RUN yum install -y git cmake python python-devel pip

WORKDIR /t-route/
COPY . /t-route/

RUN ln -s /usr/lib64/gfortran/modules/netcdf.mod /usr/include/openmpi-x86_64/netcdf.mod

ENV PYTHONPATH=/t-route:$PYTHONPATH
RUN pip install uv==0.2.5
RUN uv venv

ENV VIRTUAL_ENV=/t-route/.venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

RUN uv pip install --no-cache-dir -r /t-route/requirements.txt

RUN ./compiler.sh no-e

WORKDIR "/t-route/src/"
RUN mkdir -p /t-route/data/troute_restart/
29 changes: 29 additions & 0 deletions docker/Dockerfile.notebook
@@ -0,0 +1,29 @@
FROM rockylinux:9.2 AS rocky-base
RUN yum install -y epel-release
RUN yum install -y netcdf netcdf-fortran netcdf-fortran-devel netcdf-openmpi

RUN yum install -y git cmake python python-devel pip

WORKDIR "/t-route/"

COPY . /t-route/

RUN ln -s /usr/lib64/gfortran/modules/netcdf.mod /usr/include/openmpi-x86_64/netcdf.mod

ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV

# Equivalent to source /opt/venv/bin/activate
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

RUN python -m pip install .
RUN python -m pip install .[jupyter]
RUN python -m pip install .[test]

RUN ./compiler.sh no-e

EXPOSE 8000

# NOTE: `RUN ulimit` only affects its own build step and does not persist to
# the running container; raise the open-files soft limit at runtime instead,
# e.g. `docker run --ulimit nofile=10000:10000 ...`
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--port=8000", "--no-browser", "--allow-root"]
37 changes: 37 additions & 0 deletions docker/Dockerfile.troute_api
@@ -0,0 +1,37 @@
FROM rockylinux:9.2 AS rocky-base

RUN yum install -y epel-release
RUN yum install -y netcdf netcdf-fortran netcdf-fortran-devel netcdf-openmpi

RUN yum install -y git cmake python python-devel pip

WORKDIR "/t-route/"

# Copy the contents of the parent directory (repository root) into the container
COPY . /t-route/

RUN ln -s /usr/lib64/gfortran/modules/netcdf.mod /usr/include/openmpi-x86_64/netcdf.mod

ENV PYTHONPATH=/t-route:$PYTHONPATH
RUN pip install uv==0.2.5
RUN uv venv

ENV VIRTUAL_ENV=/t-route/.venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

RUN uv pip install --no-cache-dir -r /t-route/requirements.txt

RUN ./compiler.sh no-e

WORKDIR "/t-route/src/"
RUN mkdir -p /t-route/data/troute_restart/

# Create volume mount points. Without ARG declarations the *_VOLUME_TARGET
# variables expand empty at build time; the defaults below match the values
# in docker/rnr_compose.env and can be overridden with --build-arg.
ARG OUTPUT_VOLUME_TARGET=/t-route/output
ARG DATA_VOLUME_TARGET=/t-route/data
ARG CORE_VOLUME_TARGET=/t-route/src/app/core
RUN mkdir -p ${OUTPUT_VOLUME_TARGET} ${DATA_VOLUME_TARGET} ${CORE_VOLUME_TARGET} /t-route/test

# Set the command to run the application
CMD sh -c ". /t-route/.venv/bin/activate && uvicorn app.main:app --host 0.0.0.0 --port ${PORT}"

# Add healthcheck
HEALTHCHECK --interval=30s --timeout=5s --start-period=5s --retries=3 \
CMD curl --fail -I http://localhost:${PORT}/health || exit 1
27 changes: 27 additions & 0 deletions docker/compose.yaml
@@ -0,0 +1,27 @@
services:
troute:
build:
context: ..
dockerfile: docker/Dockerfile.compose
ports:
- "${PORT}:${PORT}"
volumes:
- type: bind
source: ${OUTPUT_VOLUME_SOURCE}
target: ${OUTPUT_VOLUME_TARGET}
- type: bind
source: ${DATA_VOLUME_SOURCE}
target: ${DATA_VOLUME_TARGET}
- type: bind
source: ${CORE_VOLUME_SOURCE}
target: ${CORE_VOLUME_TARGET}
- type: bind
source: ${TEST_SOURCE}
target: ${TEST_TARGET}
command: sh -c ". /t-route/.venv/bin/activate && uvicorn app.main:app --host 0.0.0.0 --port ${PORT}"
healthcheck:
test: curl --fail -I http://localhost:${PORT}/health || exit 1
interval: 30s
timeout: 5s
retries: 3
start_period: 5s
25 changes: 25 additions & 0 deletions docker/rnr_compose.env
@@ -0,0 +1,25 @@
# Port mapping
#-------------
# The following port will be used for spinning up the API server

PORT=8000

# Volume bindings
# ---------------
# The following variables are used in the compose.yaml file to define the shared volume mount with T-Route

# For saving output from the container
OUTPUT_VOLUME_SOURCE=../data/troute_output
OUTPUT_VOLUME_TARGET=/t-route/output

# For mounting the data directory
DATA_VOLUME_SOURCE=../data
DATA_VOLUME_TARGET=/t-route/data

# For mounting all core files within T-Route (Used for sharing template config files)
CORE_VOLUME_SOURCE=../src/app/core
CORE_VOLUME_TARGET=/t-route/src/app/core

# For uploading test data scripts
TEST_SOURCE=../test
TEST_TARGET=/t-route/test
25 changes: 25 additions & 0 deletions docker/test_troute_api.env
@@ -0,0 +1,25 @@
# Port mapping
#-------------
# The following port will be used for spinning up the API server

PORT=8000

# Volume bindings
# ---------------
# The following variables are used in the compose.yaml file to define the shared volume mount with T-Route

# For saving output from the container
OUTPUT_VOLUME_SOURCE=../test/api/data/troute_output
OUTPUT_VOLUME_TARGET=/t-route/output

# For mounting the data directory
DATA_VOLUME_SOURCE=../test/api/data
DATA_VOLUME_TARGET=/t-route/data

# For mounting all core files within T-Route (Used for sharing template config files)
CORE_VOLUME_SOURCE=../src/app/core
CORE_VOLUME_TARGET=/t-route/src/app/core

# For uploading test data scripts
TEST_SOURCE=../test
TEST_TARGET=/t-route/test