Merge pull request #104 from masayag/add-main-dir
Add main directory for main branch content
masayag authored Sep 25, 2024
2 parents 199704f + e35c6e8 commit cdfc081
Showing 36 changed files with 549 additions and 3 deletions.
@@ -16,4 +16,6 @@ When [RHDH](https://developers.redhat.com/rhdh) is already installed and in use,
- For your convenience, a [reference implementation](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/postgresql/README.md) is provided.
- If you already have a PostgreSQL database installed, please refer to this [note](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/postgresql/README.md#note-the-default-settings-provided-in-postgresql-values-match-the-defaults-provided-in-the-orchestrator-values) regarding default settings.

In this approach, since the RHDH instance is not managed by the Orchestrator operator, its configuration is handled through the Backstage CR along with the associated resources, such as ConfigMaps and Secrets.

The installation steps are detailed [here](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/release-1.2/existing-rhdh.md).
3 changes: 3 additions & 0 deletions content/1.2/docs/installation/orchestrator.md
@@ -7,6 +7,9 @@ Installing the Orchestrator is facilitated through an operator available in the Red Hat Catalog as an OLM package. This operator is responsible for installing all of the Orchestrator components.
The Orchestrator is based on the [SonataFlow](https://sonataflow.org/serverlessworkflow/latest/index.html) and [Serverless Workflow](https://serverlessworkflow.io/) technologies for designing and managing workflows.
The Orchestrator plugins are deployed on a [Red Hat Developer Hub](https://developers.redhat.com/rhdh/overview) instance, which serves as the frontend.

When installing a Red Hat Developer Hub (RHDH) instance using the Orchestrator operator, the RHDH configuration is managed through the Orchestrator resource.

To utilize *Backstage* capabilities, the Orchestrator imports software templates designed to ease the development of new workflows and offers an opinionated method for managing their lifecycle by including CI/CD resources as part of the template.

{{< remoteMD "https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/release-1.2/README.md?raw=true" >}}
3 changes: 2 additions & 1 deletion content/snapshot/_index.md → content/main/_index.md
@@ -4,4 +4,5 @@ cascade:
- toc_root: true
type: docs
---
## The content for the snapshot version is coming soon.


12 changes: 12 additions & 0 deletions content/main/docs/_index.md
@@ -0,0 +1,12 @@
---
title: "Documentation"
date: 2024-02-20
cascade:
- toc_root: true
type: docs
---

# Orchestrator

Choose a section from the list below. For an introduction to the Orchestrator, check the [Quick Start](./quickstart/).

19 changes: 19 additions & 0 deletions content/main/docs/architecture/index.md
@@ -0,0 +1,19 @@
---
title: "Architecture"
weight: 1
---
The Orchestrator architecture comprises several integral components, each contributing to the seamless execution and management of workflows. Illustrated below is a breakdown of these components:

- [**Red Hat Developer Hub**](https://developers.redhat.com/rhdh/overview): Serving as the primary interface, Backstage fulfills multiple roles:
- [Orchestrator Plugins](https://github.com/janus-idp/backstage-plugins/tree/main/plugins/orchestrator): Both frontend and backend plugins are instrumental in presenting deployed workflows for execution and monitoring.
- [Notifications Plugin](https://backstage.io/docs/notifications): Employs notifications to inform users or groups about workflow events.
- [**SonataFlow Operator**](https://sonataflow.org/serverlessworkflow/main/cloud/operator/install-serverless-operator.html): This controller manages the SonataFlow custom resource (CR), where each CR denotes a deployed workflow.
- [**SonataFlow Runtime**](https://github.com/apache/incubator-kie-kogito-runtimes): As a deployed workflow, SonataFlow Runtime is currently managed as a Kubernetes (K8s) deployment by the operator. It operates as an HTTP server, catering to requests for executing workflow instances. Within the Orchestrator deployment, each SonataFlow CR corresponds to a single workflow. However, outside this scope, SonataFlow Runtime can handle multiple workflows. Interaction with SonataFlow Runtime for workflow execution is facilitated by the Orchestrator backend plugin.
- [**Data Index Service**](https://sonataflow.org/serverlessworkflow/latest/data-index/data-index-core-concepts.html): This serves as a repository for workflow definitions, instances, and their associated jobs. It exposes a GraphQL API, utilized by the Orchestrator backend plugin to retrieve workflow definitions and instances.
- [**Job Service**](https://sonataflow.org/serverlessworkflow/latest/job-services/core-concepts.html): Dedicated to orchestrating scheduled tasks for workflows.
- [**OpenShift Serverless**](https://docs.openshift.com/serverless/1.33/about/about-serverless.html): This operator furnishes serverless capabilities essential for workflow communication. It employs Knative eventing to interface with the Data Index service and leverages Knative functions to introduce more intricate logic to workflows.
- [**OpenShift AMQ Streams**](https://access.redhat.com/documentation/en-us/red_hat_amq_streams/2.6/html/amq_streams_on_openshift_overview/index) (Strimzi/Kafka): While not presently integrated into the deployment's current iteration, this operator is crucial for ensuring the reliability of the eventing system.
- [**KeyCloak**](https://www.keycloak.org/): Responsible for authentication and security services within applications. While not installed by the Orchestrator operator, it is essential for enhancing security measures.
- [**PostgreSQL Server**](https://www.postgresql.org/): Utilized for storing both SonataFlow information and Backstage data, PostgreSQL Server provides a robust and reliable database solution essential for data persistence within the Orchestrator ecosystem.

![Architecture Diagram](./architecture-diagram.png "Architecture Diagram")
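
As an illustration of the Data Index GraphQL API mentioned in the component list above, a query for workflow instances could look like the sketch below; the service URL is a placeholder, and the field names follow the upstream Data Index schema, so verify them against your deployed version.

```bash
# Illustrative query against the Data Index GraphQL endpoint; the service URL is a placeholder.
curl -s -X POST http://<data-index-service-url>/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ ProcessInstances { id processId processName state start } }"}'
```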
4 changes: 4 additions & 0 deletions content/main/docs/core-concepts/_index.md
@@ -0,0 +1,4 @@
---
title: "Core Concepts"
weight: 2
---
75 changes: 75 additions & 0 deletions content/main/docs/core-concepts/workflow-types/_index.md
@@ -0,0 +1,75 @@
---
title: "Workflow Types"
date: 2024-05-07
---

The Orchestrator features two primary workflow categories:
- *Infrastructure workflows*: focus on automating infrastructure-related tasks
- *Assessment workflows*: focus on evaluating and analyzing data to suggest suitable infrastructure workflow options for subsequent execution

### Infrastructure workflow
In the Orchestrator, an infrastructure workflow is a workflow that executes a sequence of operations based on user input (optional) and generates output (optional) without requiring further action.

To define this type, developers need to include the following annotation in the workflow definition file:

```yaml
annotations:
- "workflow-type/infrastructure"
```
The Orchestrator plugin utilizes this metadata to facilitate the processing and visualization of infrastructure workflow inputs and outputs within the user interface.
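For orientation, the sketch below shows how this annotation might sit inside a complete workflow definition file; the workflow id, name, and state are illustrative placeholders rather than content from the linked examples.
```yaml
# Minimal illustrative workflow definition (Serverless Workflow YAML).
# Only the "annotations" entry is what the Orchestrator looks for; everything else is a placeholder.
id: hello-infra
version: "1.0"
specVersion: "0.8"
name: Hello infrastructure workflow
annotations:
  - "workflow-type/infrastructure"
start: SayHello
states:
  - name: SayHello
    type: inject
    data:
      message: "Hello from an infrastructure workflow"
    end: true
```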
##### Examples:
- [Greeting](https://github.com/parodos-dev/serverless-workflows/blob/main/greeting/greeting.sw.yaml)
- [Ticket Escalation](https://github.com/parodos-dev/serverless-workflows/blob/main/escalation/ticketEscalation.sw.yaml)
- [Move2Kube](https://github.com/parodos-dev/serverless-workflows/blob/main/move2kube/m2k.sw.yml)
### Assessment workflow
In the Orchestrator, an assessment is akin to an infrastructure workflow that concludes with a recommended course of action.
Upon completion, the assessment yields a *workflowOptions* object, which presents a list of infrastructure workflows deemed suitable based on the evaluation of the user's inputs.
To define this type, developers must include the following annotation in the workflow definition file:
```yaml
annotations:
- "workflow-type/assessment"
```
The Orchestrator plugin utilizes this metadata to facilitate the processing and visualization of assessment workflow inputs and outputs within the user interface.
This includes generating links to initiate infrastructure workflows from the list of recommended options, enabling seamless execution and integration.
The *workflowOptions* object must contain six attributes: `currentVersion`, which holds an object with `id` and `name` properties, and five list attributes (`newOptions`, `otherOptions`, `upgradeOptions`, `migrationOptions`, and `continuationOptions`), each of which may be empty or contain objects with the same `id` and `name` properties. See the example in the code snippet below.

*It is the assessment workflow developer's responsibility to ensure that the provided workflow **id** in each workflowOptions attribute exists and is available in the environment.*

```json
{
  "workflowOptions": {
    "currentVersion": {
      "id": "_AN_INFRASTRUCTURE_WORKFLOW_ID_",
      "name": "_AN_INFRASTRUCTURE_WORKFLOW_NAME_"
    },
    "newOptions": [],
    "otherOptions": [],
    "upgradeOptions": [],
    "migrationOptions": [
      {
        "id": "_ANOTHER_INFRASTRUCTURE_WORKFLOW_ID_",
        "name": "_ANOTHER_INFRASTRUCTURE_WORKFLOW_NAME_"
      }
    ],
    "continuationOptions": []
  }
}
```
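
As a sketch (not taken from the linked examples), an assessment workflow could produce such an object by injecting it in its final state, so that it becomes the workflow output consumed by the Orchestrator plugin; in practice the options would typically be computed from the user's inputs rather than hard-coded.

```yaml
# Illustrative final state of an assessment workflow (Serverless Workflow YAML).
# The injected data becomes the workflow output read by the Orchestrator plugin.
- name: ProvideRecommendation
  type: inject
  data:
    workflowOptions:
      currentVersion:
        id: "_AN_INFRASTRUCTURE_WORKFLOW_ID_"
        name: "_AN_INFRASTRUCTURE_WORKFLOW_NAME_"
      newOptions: []
      otherOptions: []
      upgradeOptions: []
      migrationOptions: []
      continuationOptions: []
  end: true
```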

##### Examples:
- [MTA](https://github.com/parodos-dev/serverless-workflows/blob/main/mta/mta.sw.yaml)
- [Dummy Assessment](https://github.com/parodos-dev/serverless-workflow-examples/tree/main/assessment)

#### Note
If the aforementioned annotation is missing in the workflow definition file, the Orchestrator plugin will default to treating the workflow as an infrastructure workflow, without considering its output.

To avoid unexpected behavior, it is strongly advised to always include the annotation and explicitly specify the workflow type.

11 changes: 11 additions & 0 deletions content/main/docs/installation/_index.md
@@ -0,0 +1,11 @@
---
title: "Installation"
date: 2024-09-10
weight: 3
---

The deployment of the Orchestrator involves multiple independent components, each with its own installation process. On an OpenShift cluster, the Red Hat Catalog provides an operator that handles the installation for you. The installation process is modular: the CRD exposes flags that let you control which components to install. For vanilla Kubernetes, a Helm chart installs the Orchestrator components.
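
As a rough, hypothetical sketch of that modularity, an Orchestrator custom resource might toggle components roughly as follows; the API group and field names here are illustrative placeholders, so consult the operator's CRD for the actual schema.

```yaml
# Hypothetical Orchestrator custom resource; API version and field names are placeholders only.
apiVersion: orchestrator.example.com/v1alpha1
kind: Orchestrator
metadata:
  name: orchestrator-sample
spec:
  rhdhOperator:
    enabled: true          # set to false to reuse an existing RHDH instance
  serverlessOperator:
    enabled: true          # OpenShift Serverless (Knative)
  sonataFlowOperator:
    enabled: true          # serverless workflow engine
```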

The Orchestrator deployment encompasses the installation of the engine for serving serverless workflows and Backstage, integrated with orchestrator plugins for workflow invocation, monitoring, and control.

In addition to the Orchestrator deployment, we offer several *workflows* (linked below) that can be deployed using their respective installation methods.
21 changes: 21 additions & 0 deletions content/main/docs/installation/installation-on-existing-rhdh.md
@@ -0,0 +1,21 @@
---
date: 2024-03-13
title: "Orchestrator on existing RHDH instance"
---

When [RHDH](https://developers.redhat.com/rhdh) is already installed and in use, reinstalling it is unnecessary. Instead, integrating the Orchestrator into such an environment involves a few key steps:

1. Utilize the Orchestrator operator to install the requisite components, such as the OpenShift Serverless Logic Operator and the OpenShift Serverless Operator, while ensuring the RHDH installation is disabled.
2. Manually update the existing RHDH ConfigMap resources with the necessary configuration for the Orchestrator plugin.
3. Import the Orchestrator software templates into the Backstage catalog.

## Prerequisites
- RHDH is already deployed with a running Backstage instance.
- Software templates for workflows require a GitHub provider to be configured.
- Ensure that a [PostgreSQL](https://www.postgresql.org/) database is available and that you have credentials to manage the tablespace (optional).
- For your convenience, a [reference implementation](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/postgresql/README.md) is provided.
- If you already have a PostgreSQL database installed, please refer to this [note](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/postgresql/README.md#note-the-default-settings-provided-in-postgresql-values-match-the-defaults-provided-in-the-orchestrator-values) regarding default settings.

In this approach, since the RHDH instance is not managed by the Orchestrator operator, its configuration is handled through the Backstage CR along with the associated resources, such as ConfigMaps and Secrets.
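
For step 2 above, the sketch below gives a rough idea of how the Orchestrator plugins might be declared in the RHDH dynamic-plugins ConfigMap; the ConfigMap name and plugin package names are placeholders, and the exact values are listed in the installation steps linked below.

```yaml
# Hypothetical excerpt of the RHDH dynamic-plugins ConfigMap; names and packages are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dynamic-plugins-rhdh
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: "<orchestrator-backend-dynamic-plugin-package>"
        disabled: false
      - package: "<orchestrator-frontend-dynamic-plugin-package>"
        disabled: false
```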

The installation steps are detailed [here](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/main/existing-rhdh.md).
45 changes: 45 additions & 0 deletions content/main/docs/installation/orchestrator-k8s.md
@@ -0,0 +1,45 @@
---
title: "Orchestrator on Kubernetes"
date: 2024-04-09
---

The following guide is for installing on a Kubernetes cluster. It is well tested and works in CI with a [kind](https://kind.sigs.k8s.io/) installation.

Here is a kind configuration that is easy to work with (the API server port is static, so the kubeconfig is always the same):
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "127.0.0.1"
  apiServerPort: 16443
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  - |
    kind: KubeletConfiguration
    localStorageCapacityIsolation: true
  extraPortMappings:
  - containerPort: 80
    hostPort: 9090
    protocol: TCP
  - containerPort: 443
    hostPort: 9443
    protocol: TCP
- role: worker
```
Save this file as `kind-config.yaml`, and now run:
```bash
kind create cluster --config kind-config.yaml
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
kubectl patch daemonsets -n projectcontour envoy -p '{"spec":{"template":{"spec":{"nodeSelector":{"ingress-ready":"true"},"tolerations":[{"key":"node-role.kubernetes.io/control-plane","operator":"Equal","effect":"NoSchedule"},{"key":"node-role.kubernetes.io/master","operator":"Equal","effect":"NoSchedule"}]}}}}'
```

The cluster should now be up and running with the [Contour ingress controller](https://projectcontour.io) installed. Traffic to localhost:9090 is routed to Backstage through the ingress that the Helm chart creates on port 80.
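
Once the Orchestrator chart has been installed (see the instructions that follow) and its ingress created, the setup can be sanity-checked with commands such as:

```bash
# Confirm the ingress exists and that Backstage responds through the kind port mapping.
kubectl get ingress -A
curl -I http://localhost:9090
```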

{{< remoteMD "https://raw.githubusercontent.com/parodos-dev/orchestrator-helm-chart/main/charts/orchestrator-k8s/README.md" >}}
15 changes: 15 additions & 0 deletions content/main/docs/installation/orchestrator.md
@@ -0,0 +1,15 @@
---
title: "Orchestrator on OpenShift"
date: 2024-09-10
---

Installing the Orchestrator is facilitated through an operator available in the Red Hat Catalog as an OLM package. This operator is responsible for installing all of the Orchestrator components.
The Orchestrator is based on the [SonataFlow](https://sonataflow.org/serverlessworkflow/latest/index.html) and [Serverless Workflow](https://serverlessworkflow.io/) technologies for designing and managing workflows.
The Orchestrator plugins are deployed on a [Red Hat Developer Hub](https://developers.redhat.com/rhdh/overview) instance, which serves as the frontend.
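
For reference, installing an operator from the catalog through OLM typically comes down to a Subscription resource like the sketch below (creating it through the OpenShift console works just as well); the package name and channel are placeholders to verify against the catalog entry.

```yaml
# Illustrative OLM Subscription; the package name and channel are placeholders.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: orchestrator-operator
  namespace: openshift-operators
spec:
  channel: stable                  # placeholder channel
  name: orchestrator-operator      # placeholder package name from the Red Hat Catalog
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```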

When installing a Red Hat Developer Hub (RHDH) instance using the Orchestrator operator, the RHDH configuration is managed through the Orchestrator resource.

To utilize *Backstage* capabilities, the Orchestrator imports software templates designed to ease the development of new workflows and offers an opinionated method for managing their lifecycle by including CI/CD resources as part of the template.

{{< remoteMD "https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/main/README.md?raw=true" >}}
6 changes: 6 additions & 0 deletions content/main/docs/installation/workflows/_index.md
@@ -0,0 +1,6 @@
---
title: "Workflows"
date: 2024-03-03
---

In addition to deploying the Orchestrator, we provide several preconfigured workflows that serve either as ready-to-use solutions or as starting points for customizing workflows according to the user's requirements. These workflows can be installed either through a Helm chart or with Kustomize.
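
As an illustration of the Helm path, installing one of these workflows boils down to commands like the sketch below; the repository URL, chart name, and namespace are placeholders, and the actual values are given on the deployment pages that follow.

```bash
# Illustrative commands; repository URL, chart name, and namespace are placeholders.
helm repo add orchestrator-workflows https://parodos.dev/serverless-workflows-config
helm repo update
helm install greeting orchestrator-workflows/greeting -n sonataflow-infra
```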
@@ -0,0 +1,6 @@
---
title: Deploy From Helm Repository
date: "2024-02-20"
---

{{< remoteMD "https://github.com/parodos-dev/serverless-workflows-config/blob/gh-pages/docs/README.md?raw=true" >}}
@@ -0,0 +1,6 @@
---
title: Deploy From Kustomize
date: "2024-02-20"
---

{{< remoteMD "https://github.com/parodos-dev/serverless-workflows-config/blob/main/kustomize/README.md?raw=true" >}}
5 changes: 5 additions & 0 deletions content/main/docs/plugins/_index.md
@@ -0,0 +1,5 @@
---
title: "Plugins"
date: 2024-03-28
weight: 6
---