In this section, we will explore how to manage the placement of applications using the integration between Advanced Cluster Management (ACM) and Argo CD. This integration ensures that when a new OpenShift cluster is imported into ACM, it becomes available to the Argo CD controller. By leveraging one of the ApplicationSet generators, specifically the Cluster Decision Resource generator, ACM can control the target location for application deployments.
To streamline the use of the Cluster Decision Resource generator, ACM offers an API known as the Placement API. This API allows you to define the desired placement behavior by configuring a `Placement` object.
In this case, when we need to make a change (i.e., remove the APP from the Cloud clusters and deploy it in the Edge clusters), we will need to modify the cluster metadata (with the `ClusterClaim` objects, as we will see). In this demo we are doing it manually, directly on the OpenShift clusters, but this could also be done using ACM as a central point, similar to what we saw in the first section.
To utilize the Placement API and create the necessary `Placement` object, follow these steps:

- Access your OpenShift console in the Hub cluster.
- Click the `+` button to add resources.
- Paste the following content:
```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "3"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
  name: demo-placement
  namespace: openshift-gitops
spec:
  clusterSets:
    - cloud
    - edge
  numberOfClusters: 1
  predicates:
    - requiredClusterSelector:
        claimSelector:
          matchExpressions:
            - key: demo.status
              operator: In
              values:
                - good
                - average
  prioritizerPolicy:
    configurations:
      - scoreCoordinate:
          builtIn: ResourceAllocatableCPU
        weight: 2
      - scoreCoordinate:
          builtIn: ResourceAllocatableMemory
        weight: 2
```
There are three main components in this descriptor:

- `clusterSets`: Defines the "groups" of clusters configured in ACM that are part of the placement scheduling.
- `predicates`: Specifies filters to exclude clusters from those groups, based on metadata generated by a `ClusterClaim` object.
- `prioritizerPolicy`: Determines where to deploy if the `numberOfClusters` is less than the number of available clusters.
Based on the configuration above, we will select a single cluster (`numberOfClusters: 1`) from the clusters in the "cloud" and "edge" `clusterSets` that have the metadata `demo.status` set to "good" or "average". If more than one cluster meets these criteria, the one with more allocatable CPU and memory will be chosen.
In this scenario, we simulate that an external system tags the OpenShift clusters using a `ClusterClaim`, indicating whether the cluster is in a good state, has minor issues but is operational, or is in a bad state. During the demo, we will manually change these tags by modifying the `ClusterClaim` object to mimic the actions of this external system.
NOTE

This is just an example. You can explore more configuration options, such as selecting clusters from the `clusterSets` by labeling them, or creating subgroups of clusters and specifying the number of APP deployments per group using the `decisionStrategy` section, as sketched below.
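As a rough sketch of that last idea (assuming the v1beta1 `decisionStrategy` fields; the group name and the `demo/canary` label are hypothetical), such a `Placement` could look like this:

```yaml
# Sketch only: splits the selected clusters into decision groups.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: demo-placement-groups
  namespace: openshift-gitops
spec:
  clusterSets:
    - edge
  decisionStrategy:
    groupStrategy:
      # Clusters matching this selector land in the "canary" group first...
      decisionGroups:
        - groupName: canary
          groupClusterSelector:
            labelSelector:
              matchLabels:
                demo/canary: "true"
      # ...and the remaining clusters are split into groups of two.
      clustersPerDecisionGroup: 2
```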
Two actions are required: first, assigning the imported clusters to the appropriate `clusterSet`, and then assigning the desired metadata to each cluster using the `ClusterClaim` object.

Let's start by assigning clusters to the `ClusterSets`:
- Access the ACM console by selecting "All Clusters" in the top left on the Hub cluster.
- Go to Infrastructure > Clusters and click on the "Cluster Sets" tab.
- Select the "cloud" `clusterSet` and assign the `local-cluster` to it, then assign the `edge-1` cluster to the "edge" `clusterSet`.
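If you prefer to avoid the console, cluster set membership is just a label on the `ManagedCluster` object in the Hub. A minimal sketch for `local-cluster` would look like this:

```yaml
# Assigning local-cluster to the "cloud" cluster set via its label.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: local-cluster
  labels:
    cluster.open-cluster-management.io/clusterset: cloud
```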
Now, since we will start by deploying on the Cloud cluster, we will assign the value "good" to the `local-cluster` and "bad" to `edge-1` by creating the corresponding `ClusterClaim` object in each cluster.

First, in the `local-cluster`:
- Access your OpenShift console in the Hub cluster.
- Click the `+` button to add resources.
- Paste the following content:
```yaml
apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: ClusterClaim
metadata:
  name: demo.status
spec:
  value: good
```
Then, in the `edge-1` cluster:
- Access your OpenShift console in the edge-1 cluster.
- Click the `+` button to add resources.
- Paste the following content:
```yaml
apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: ClusterClaim
metadata:
  name: demo.status
spec:
  value: bad
```
NOTE

We are creating these objects now. From this point on, you don't create them again; you will modify the already existing manifests instead.
Now everything should be ready. With this configuration, only the `local-cluster` will be selected for deployment. You can check that this is true by opening the corresponding `PlacementDecision` object linked to the `Placement` that we created:
- Access your OpenShift console in the Hub cluster.
- Go to "Home > API Explorer" and look for "PlacementDecision" objects.
- Review all instances and open `demo-placement-decision-1`.
- Open the YAML view and, at the end, in the `status` section, you will see the allowed clusters for this `Placement` object, in this case only `local-cluster`.
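At this point the `status` section should look roughly like this (illustrative snippet, not the full object):

```yaml
status:
  decisions:
    - clusterName: local-cluster
      reason: ""
```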
Ok, now that we have prepared the initial state of our clusters, we can deploy the application:
- Access your OpenShift console in the Hub cluster.
- Click the `+` button to add resources.
- Paste the following content:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  labels:
    app.kubernetes.io/managed-by: demo-placement-global
  name: demo-placement-global
  namespace: openshift-gitops
spec:
  generators:
    - matrix:
        generators:
          - git:
              directories:
                - path: apps/welcome/*
              repoURL: 'https://github.com/luisarizmendi/openshift-edge-demos.git'
              revision: main
          - clusterDecisionResource:
              configMapRef: acm-placement
              labelSelector:
                matchLabels:
                  cluster.open-cluster-management.io/placement: demo-placement
              requeueAfterSeconds: 180
  goTemplate: true
  goTemplateOptions:
    - missingkey=error
  template:
    metadata:
      labels:
        app.kubernetes.io/managed-by: demo-placement-global
      name: '{{.path.basename}}-{{.name}}'
    spec:
      destination:
        namespace: '{{.path.basename}}'
        server: '{{.server}}'
      project: demo-placement
      source:
        helm:
          valueFiles:
            - values.yaml
            - environment/values-{{.name}}.yaml
        path: '{{.path.path}}'
        repoURL: 'https://github.com/luisarizmendi/openshift-edge-demos.git'
        targetRevision: main
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```
The `ApplicationSet` object above uses two generators, similar to the previous demo section. The first is a "Git generator" that generates `Application` manifests based on directories in a Git repository. The second is the `clusterDecisionResource` generator, which, as previously mentioned, relies on an intermediate object (in this case, a `PlacementDecision`) that determines the target clusters.

It also selects the `Placement` created earlier by adding a matching label requirement, `cluster.open-cluster-management.io/placement: demo-placement`.
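The `configMapRef: acm-placement` points to a small ConfigMap in the `openshift-gitops` namespace that tells the generator how to read `PlacementDecision` objects. It may already exist in your environment; if not, per the standard ACM/Argo CD integration it looks like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: acm-placement
  namespace: openshift-gitops
data:
  # Duck-typed resource the generator watches, and how to extract clusters.
  apiVersion: cluster.open-cluster-management.io/v1beta1
  kind: placementdecisions
  statusListKey: decisions
  matchKey: clusterName
```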
You will see how the "hello" APP is deployed on the Cloud OpenShift cluster (`local-cluster`).
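To make the templating concrete, assuming the repository contains an `apps/welcome/hello` directory (a hypothetical path for this demo), the generated `Application` for `local-cluster` would look roughly like this:

```yaml
# Illustrative render of the ApplicationSet template for local-cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hello-local-cluster               # '{{.path.basename}}-{{.name}}'
  namespace: openshift-gitops
spec:
  destination:
    namespace: hello                      # '{{.path.basename}}'
    server: 'https://kubernetes.default.svc'  # '{{.server}}', typically the in-cluster URL for the Hub
  project: demo-placement
  source:
    helm:
      valueFiles:
        - values.yaml
        - environment/values-local-cluster.yaml
    path: apps/welcome/hello
    repoURL: 'https://github.com/luisarizmendi/openshift-edge-demos.git'
    targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```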
Imagine that we, or any external system, decide that the deployment of the "Hello" APP must be done in the Edge clusters, not in the Cloud.
To modify the placement behavior, we can patch the corresponding `ClusterClaim` objects with the new values. In this demo, we will do it manually by opening the objects in the OpenShift console.
Let's start with the `edge-1` cluster:

- Access your OpenShift console in the edge-1 cluster.
- Go to "Home > API Explorer" and look for "ClusterClaim" objects.
- Review all instances and open `demo.status`.
- Open the YAML view and change the value from "bad" to "good".
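After the edit, the `edge-1` manifest should read:

```yaml
apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: ClusterClaim
metadata:
  name: demo.status
spec:
  value: good
```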
Now, two different things may happen:

You go to the Argo CD console and see how, after some seconds, the "hello" APP is also deployed on the `edge-1` cluster and, if you wait a little longer, how the APP is deleted from the Cloud cluster. Why, if we still haven't modified the `ClusterClaim` on the Hub cluster? Remember that in the `Placement` manifest we configured a `prioritizerPolicy` that selects the cluster with the most allocatable CPU and memory, and we set the maximum number of APP deployments to one with the `numberOfClusters` key.
NOTE

This will only happen if your edge cluster has more available resources than the Hub cluster, which is typical if you deployed two clusters with the same resources and installed ACM and Argo CD in one of them.
If that didn't happen (i.e., the `edge-1` cluster has less allocatable CPU and memory than the Hub cluster), you can still remove the "hello" APP from the Hub cluster so it's installed in the `edge-1` cluster:
- Access your OpenShift console in the Hub cluster.
- Go to "Home > API Explorer" and look for "ClusterClaim" objects.
- Review all instances and open `demo.status`.
- Open the YAML view and change the value from "good" to "bad".
After a few seconds, the "hello" APP will be deleted from the Cloud cluster and deployed on `edge-1`.
Once you have finished moving the app around your clusters, you can delete the `Application` and `ApplicationSet` objects:
- Access your OpenShift console in the Hub cluster.
- Click the `+` button to add resources.
- Paste the following content:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: cleanup-demo-placement-global-
  namespace: openshift-gitops
spec:
  template:
    spec:
      serviceAccountName: openshift-gitops-argocd-application-controller
      containers:
        - name: delete-apps
          image: openshift/origin-cli:latest
          command: ["oc"]
          args: ["delete", "applications", "-n", "openshift-gitops", "-l", "app.kubernetes.io/managed-by=demo-placement-global"]
        - name: delete-appsets
          image: openshift/origin-cli:latest
          command: ["oc"]
          args: ["delete", "applicationsets", "-n", "openshift-gitops", "-l", "app.kubernetes.io/managed-by=demo-placement-global"]
      restartPolicy: Never
```
You can experiment with dynamic assignment. For example, you can deploy a CPU- or memory-demanding APP alongside your "hello" APP and observe how ACM reassigns the "hello" APP to another cluster.
You can also try to move this approach into a GitOps model, where the `ClusterClaim` manifests that control which clusters are available for APP placement are located in a Git repository, similar to what was proposed in the "Going beyond" part of the first demo section.
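As a sketch of that idea (the repository URL, path, and destination server below are hypothetical), each managed cluster could get an Argo CD `Application` that syncs its own `ClusterClaim` manifests from Git:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edge-1-cluster-claims
  namespace: openshift-gitops
spec:
  destination:
    # The ClusterClaim lives on the managed cluster itself.
    server: 'https://api.edge-1.example.com:6443'
    namespace: default   # ClusterClaim is cluster-scoped; the namespace is ignored
  project: default
  source:
    path: clusters/edge-1/claims
    repoURL: 'https://github.com/example/cluster-config.git'
    targetRevision: main
  syncPolicy:
    automated:
      selfHeal: true
```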
If you want to delve deeper, you can explore the "Extensible scheduling" capability of the Placement API. You can fork the example add-on for extensible scheduling repository and modify the code to assign scores based on criteria more relevant to an edge use case than CPU and memory availability.
For instance, referring back to the example that illustrates the "Challenge" we are trying to address, you could develop code that checks an external service monitoring the latency between the clients and the OpenShift clusters, so that APP placement can be based on that information. For example, you could deploy the APP in edge clusters only if the latency between the clients and the Cloud cluster is "high", or decide in which edge cluster to deploy by selecting the one with the least latency to the final clients.
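To give an idea of how such a custom score would be consumed, the Placement API lets a `prioritizerPolicy` reference an `AddOnPlacementScore` published by your add-on; the resource and score names below are hypothetical:

```yaml
# Sketch: prefer the cluster with the best client-latency score
# published by the (hypothetical) forked add-on.
prioritizerPolicy:
  mode: Exact
  configurations:
    - scoreCoordinate:
        type: AddOn
        addOn:
          resourceName: client-latency   # hypothetical AddOnPlacementScore name
          scoreName: latencyms           # hypothetical score item
      weight: 3
```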