diff --git a/docs/guides/kafka/README.md b/docs/guides/kafka/README.md index d03a91fd73..3d5425b77e 100644 --- a/docs/guides/kafka/README.md +++ b/docs/guides/kafka/README.md @@ -22,13 +22,13 @@ aliases: |----------------------------------------------------------------| | Clustering - Combined (shared controller and broker nodes) | | Clustering - Topology (dedicated controllers and broker nodes) | - | Kafka Connect Cluster | -| Connectors | +| Kafka Connect Cluster | +| Connectors | | Custom Docker Image | | Authentication & Authorization | | Persistent Volume | | Custom Volume | -| TLS: using ( [Cert Manager](https://cert-manager.io/docs/) ) | +| TLS: using ( [Cert Manager](https://cert-manager.io/docs/) ) | | Reconfigurable Health Checker | | Externally manageable Auth Secret | | Monitoring with Prometheus & Grafana | @@ -39,10 +39,11 @@ KubeDB supports The following Kafka versions. Supported version are applicable f - `3.3.2` - `3.4.1` - `3.5.1` +- `3.5.2` - `3.6.0` - `3.6.1` -> The listed KafkaVersions are tested and provided as a part of the installation process (ie. catalog chart), but you are open to create your own [KafkaVersion](/docs/guides/kafka/concepts/catalog.md) object with your custom Kafka image. +> The listed KafkaVersions are tested and provided as a part of the installation process (ie. catalog chart), but you are open to create your own [KafkaVersion](/docs/guides/kafka/concepts/kafkaversion.md) object with your custom Kafka image. ## Lifecycle of Kafka Object diff --git a/docs/guides/kafka/cli/cli.md b/docs/guides/kafka/cli/cli.md index 5160c1e07b..12ade5abc0 100644 --- a/docs/guides/kafka/cli/cli.md +++ b/docs/guides/kafka/cli/cli.md @@ -47,7 +47,7 @@ cat kafka.yaml | kubectl create -f - ```bash $ kubectl get kafka NAME TYPE VERSION STATUS AGE -kafka kubedb.com/v1alpha2 3.4.0 Ready 36m +kafka kubedb.com/v1alpha2 3.6.1 Ready 36m ``` You can also use short-form (`kf`) for kafka CR. 
@@ -55,7 +55,7 @@ You can also use short-form (`kf`) for kafka CR. ```bash $ kubectl get kf NAME TYPE VERSION STATUS AGE -kafka kubedb.com/v1alpha2 3.4.0 Ready 36m +kafka kubedb.com/v1alpha2 3.6.1 Ready 36m ``` To get YAML of an object, use `--output=yaml` or `-oyaml` flag. Use `-n` flag for referring namespace. @@ -67,7 +67,7 @@ kind: Kafka metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"authSecret":{"name":"kafka-admin-cred"},"enableSSL":true,"healthChecker":{"failureThreshold":3,"periodSeconds":20,"timeoutSeconds":10},"keystoreCredSecret":{"name":"kafka-keystore-cred"},"storageType":"Durable","terminationPolicy":"DoNotTerminate","tls":{"certificates":[{"alias":"server","secretName":"kafka-server-cert"},{"alias":"client","secretName":"kafka-client-cert"}],"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"topology":{"broker":{"replicas":3,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"suffix":"broker"},"controller":{"replicas":3,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"suffix":"controller"}},"version":"3.4.0"}} + 
{"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"authSecret":{"name":"kafka-admin-cred"},"enableSSL":true,"healthChecker":{"failureThreshold":3,"periodSeconds":20,"timeoutSeconds":10},"keystoreCredSecret":{"name":"kafka-keystore-cred"},"storageType":"Durable","terminationPolicy":"DoNotTerminate","tls":{"certificates":[{"alias":"server","secretName":"kafka-server-cert"},{"alias":"client","secretName":"kafka-client-cert"}],"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"topology":{"broker":{"replicas":3,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"suffix":"broker"},"controller":{"replicas":3,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"suffix":"controller"}},"version":"3.6.1"}} creationTimestamp: "2023-03-29T07:01:29Z" finalizers: - kubedb.com @@ -136,7 +136,7 @@ spec: storage: 1Gi storageClassName: standard suffix: controller - version: 3.4.0 + version: 3.6.1 status: conditions: - lastTransitionTime: "2023-03-29T07:01:29Z" @@ -181,7 +181,7 @@ $ kubectl get kf kafka -n demo -ojson "kind": "Kafka", "metadata": { "annotations": { - "kubectl.kubernetes.io/last-applied-configuration": 
"{\"apiVersion\":\"kubedb.com/v1alpha2\",\"kind\":\"Kafka\",\"metadata\":{\"annotations\":{},\"name\":\"kafka\",\"namespace\":\"demo\"},\"spec\":{\"authSecret\":{\"name\":\"kafka-admin-cred\"},\"enableSSL\":true,\"healthChecker\":{\"failureThreshold\":3,\"periodSeconds\":20,\"timeoutSeconds\":10},\"keystoreCredSecret\":{\"name\":\"kafka-keystore-cred\"},\"storageType\":\"Durable\",\"terminationPolicy\":\"DoNotTerminate\",\"tls\":{\"certificates\":[{\"alias\":\"server\",\"secretName\":\"kafka-server-cert\"},{\"alias\":\"client\",\"secretName\":\"kafka-client-cert\"}],\"issuerRef\":{\"apiGroup\":\"cert-manager.io\",\"kind\":\"Issuer\",\"name\":\"kafka-ca-issuer\"}},\"topology\":{\"broker\":{\"replicas\":3,\"resources\":{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"1Gi\"}},\"storage\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"1Gi\"}},\"storageClassName\":\"standard\"},\"suffix\":\"broker\"},\"controller\":{\"replicas\":3,\"resources\":{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"1Gi\"}},\"storage\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"1Gi\"}},\"storageClassName\":\"standard\"},\"suffix\":\"controller\"}},\"version\":\"3.4.0\"}}\n" + "kubectl.kubernetes.io/last-applied-configuration": 
"{\"apiVersion\":\"kubedb.com/v1alpha2\",\"kind\":\"Kafka\",\"metadata\":{\"annotations\":{},\"name\":\"kafka\",\"namespace\":\"demo\"},\"spec\":{\"authSecret\":{\"name\":\"kafka-admin-cred\"},\"enableSSL\":true,\"healthChecker\":{\"failureThreshold\":3,\"periodSeconds\":20,\"timeoutSeconds\":10},\"keystoreCredSecret\":{\"name\":\"kafka-keystore-cred\"},\"storageType\":\"Durable\",\"terminationPolicy\":\"DoNotTerminate\",\"tls\":{\"certificates\":[{\"alias\":\"server\",\"secretName\":\"kafka-server-cert\"},{\"alias\":\"client\",\"secretName\":\"kafka-client-cert\"}],\"issuerRef\":{\"apiGroup\":\"cert-manager.io\",\"kind\":\"Issuer\",\"name\":\"kafka-ca-issuer\"}},\"topology\":{\"broker\":{\"replicas\":3,\"resources\":{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"1Gi\"}},\"storage\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"1Gi\"}},\"storageClassName\":\"standard\"},\"suffix\":\"broker\"},\"controller\":{\"replicas\":3,\"resources\":{\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"1Gi\"}},\"storage\":{\"accessModes\":[\"ReadWriteOnce\"],\"resources\":{\"requests\":{\"storage\":\"1Gi\"}},\"storageClassName\":\"standard\"},\"suffix\":\"controller\"}},\"version\":\"3.6.1\"}}\n" }, "creationTimestamp": "2023-03-29T07:01:29Z", "finalizers": [ @@ -282,7 +282,7 @@ $ kubectl get kf kafka -n demo -ojson "suffix": "controller" } }, - "version": "3.4.0" + "version": "3.6.1" }, "status": { "conditions": [ @@ -342,15 +342,15 @@ demo pod/kafka-broker-1 1/1 Running 0 45m 10.24 demo pod/kafka-broker-2 1/1 Running 0 45m 10.244.0.57 kind-control-plane demo pod/kafka-controller-0 1/1 Running 0 45m 10.244.0.51 kind-control-plane demo pod/kafka-controller-1 1/1 Running 0 45m 10.244.0.55 kind-control-plane -demo pod/kafka-controller-2 1/1 Running 3 (45m ago) 45m 10.244.0.58 kind-control-plane +demo pod/kafka-controller-2 1/1 Running 0 45m 10.244.0.58 kind-control-plane NAMESPACE NAME TYPE 
CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR demo service/kafka-broker ClusterIP None 9092/TCP,29092/TCP 46m app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,kubedb.com/role=broker demo service/kafka-controller ClusterIP None 9093/TCP 46m app.kubernetes.io/instance=kafka,app.kubernetes.io/managed-by=kubedb.com,app.kubernetes.io/name=kafkas.kubedb.com,kubedb.com/role=controller NAMESPACE NAME READY AGE CONTAINERS IMAGES -demo statefulset.apps/kafka-broker 3/3 45m kafka docker.io/kubedb/kafka-kraft:3.4.0@sha256:f059db2929e3cfe388f50e82e168a9ce94b012e413e056eda2838df48632048a -demo statefulset.apps/kafka-controller 3/3 45m kafka docker.io/kubedb/kafka-kraft:3.4.0@sha256:f059db2929e3cfe388f50e82e168a9ce94b012e413e056eda2838df48632048a +demo statefulset.apps/kafka-broker 3/3 45m kafka ghcr.io/appscode-images/kafka-kraft:3.6.1@sha256:e251d3c0ceee0db8400b689e42587985034852a8a6c81b5973c2844e902e6d11 +demo statefulset.apps/kafka-controller 3/3 45m kafka ghcr.io/appscode-images/kafka-kraft:3.6.1@sha256:e251d3c0ceee0db8400b689e42587985034852a8a6c81b5973c2844e902e6d11 NAMESPACE NAME TYPE VERSION AGE demo appbinding.appcatalog.appscode.com/kafka kubedb.com/kafka 3.4.0 45m diff --git a/docs/guides/kafka/clustering/combined-cluster/index.md b/docs/guides/kafka/clustering/combined-cluster/index.md index 2f55032cb3..f12316702e 100644 --- a/docs/guides/kafka/clustering/combined-cluster/index.md +++ b/docs/guides/kafka/clustering/combined-cluster/index.md @@ -37,7 +37,7 @@ demo Active 9s ## Create Standalone Kafka Cluster -Here, we are going to create a standalone (ie. `replicas: 1`) Kafka cluster in Kraft mode. For this demo, we are going to provision kafka version `3.3.2`. To learn more about Kafka CR, visit [here](/docs/guides/kafka/concepts/kafka.md). visit [here](/docs/guides/kafka/concepts/catalog.md) to learn more about KafkaVersion CR. +Here, we are going to create a standalone (i.e. 
`replicas: 1`) Kafka cluster in Kraft mode. For this demo, we are going to provision kafka version `3.6.1`. To learn more about Kafka CR, visit [here](/docs/guides/kafka/concepts/kafka.md). visit [here](/docs/guides/kafka/concepts/kafkaversion.md) to learn more about KafkaVersion CR. ```yaml apiVersion: kubedb.com/v1alpha2 @@ -47,7 +47,7 @@ metadata: namespace: demo spec: replicas: 1 - version: 3.3.2 + version: 3.6.1 storage: accessModes: - ReadWriteOnce @@ -71,12 +71,12 @@ Watch the bootstrap progress: ```bash $ kubectl get kf -n demo -w NAME TYPE VERSION STATUS AGE -kafka-standalone kubedb.com/v1alpha2 3.3.2 Provisioning 8s -kafka-standalone kubedb.com/v1alpha2 3.3.2 Provisioning 14s -kafka-standalone kubedb.com/v1alpha2 3.3.2 Provisioning 35s -kafka-standalone kubedb.com/v1alpha2 3.3.2 Provisioning 35s -kafka-standalone kubedb.com/v1alpha2 3.3.2 Provisioning 36s -kafka-standalone kubedb.com/v1alpha2 3.3.2 Ready 41s +kafka-standalone kubedb.com/v1alpha2 3.6.1 Provisioning 8s +kafka-standalone kubedb.com/v1alpha2 3.6.1 Provisioning 14s +kafka-standalone kubedb.com/v1alpha2 3.6.1 Provisioning 35s +kafka-standalone kubedb.com/v1alpha2 3.6.1 Provisioning 35s +kafka-standalone kubedb.com/v1alpha2 3.6.1 Provisioning 36s +kafka-standalone kubedb.com/v1alpha2 3.6.1 Ready 41s ``` Hence, the cluster is ready to use. 
@@ -94,7 +94,7 @@ NAME READY AGE statefulset.apps/kafka-standalone 1/1 8m56s NAME TYPE VERSION AGE -appbinding.appcatalog.appscode.com/kafka-standalone kubedb.com/kafka 3.3.2 8m56s +appbinding.appcatalog.appscode.com/kafka-standalone kubedb.com/kafka 3.6.1 8m56s NAME TYPE DATA AGE secret/kafka-standalone-admin-cred kubernetes.io/basic-auth 2 8m59s @@ -116,7 +116,7 @@ metadata: namespace: demo spec: replicas: 3 - version: 3.3.2 + version: 3.6.1 storage: accessModes: - ReadWriteOnce @@ -139,12 +139,12 @@ Watch the bootstrap progress: ```bash $ kubectl get kf -n demo -w -kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 9s -kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 14s -kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 18s -kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 2m6s -kafka-multinode kubedb.com/v1alpha2 3.3.2 Provisioning 2m8s -kafka-multinode kubedb.com/v1alpha2 3.3.2 Ready 2m14s +kafka-multinode kubedb.com/v1alpha2 3.6.1 Provisioning 9s +kafka-multinode kubedb.com/v1alpha2 3.6.1 Provisioning 14s +kafka-multinode kubedb.com/v1alpha2 3.6.1 Provisioning 18s +kafka-multinode kubedb.com/v1alpha2 3.6.1 Provisioning 2m6s +kafka-multinode kubedb.com/v1alpha2 3.6.1 Provisioning 2m8s +kafka-multinode kubedb.com/v1alpha2 3.6.1 Ready 2m14s ``` Hence, the cluster is ready to use. @@ -164,7 +164,7 @@ NAME READY AGE statefulset.apps/kafka-multinode 3/3 6m2s NAME TYPE VERSION AGE -appbinding.appcatalog.appscode.com/kafka-multinode kubedb.com/kafka 3.3.2 6m2s +appbinding.appcatalog.appscode.com/kafka-multinode kubedb.com/kafka 3.6.1 6m2s NAME TYPE DATA AGE secret/kafka-multinode-admin-cred kubernetes.io/basic-auth 2 6m7s @@ -310,6 +310,6 @@ $ kubectl delete namespace demo - Deploy [dedicated topology cluster](/docs/guides/kafka/clustering/topology-cluster/index.md) for Apache Kafka - Monitor your Kafka cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). 
- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). -- Detail concepts of [KafkaVersion object](/docs/guides/kafka/concepts/catalog.md). +- Detail concepts of [KafkaVersion object](/docs/guides/kafka/concepts/kafkaversion.md). - Learn to use KubeDB managed Kafka objects using [CLIs](/docs/guides/kafka/cli/cli.md). - Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). \ No newline at end of file diff --git a/docs/guides/kafka/clustering/topology-cluster/index.md b/docs/guides/kafka/clustering/topology-cluster/index.md index 0d1cddfc13..1ed4a2fd9c 100644 --- a/docs/guides/kafka/clustering/topology-cluster/index.md +++ b/docs/guides/kafka/clustering/topology-cluster/index.md @@ -1,11 +1,11 @@ --- title: Kafka Topology Cluster menu: -docs_{{ .version }}: -identifier: kf-topology-cluster -name: Topology Cluster -parent: kf-clustering -weight: 20 + docs_{{ .version }}: + identifier: kf-topology-cluster + name: Topology Cluster + parent: kf-clustering + weight: 20 menu_name: docs_{{ .version }} section_menu_id: guides --- @@ -80,7 +80,7 @@ issuer.cert-manager.io/kafka-ca-issuer created ### Provision TLS secure Kafka -For this demo, we are going to provision kafka version `3.3.2` with 3 controllers and 3 brokers. To learn more about Kafka CR, visit [here](/docs/guides/kafka/concepts/kafka.md). visit [here](/docs/guides/kafka/concepts/catalog.md) to learn more about KafkaVersion CR. +For this demo, we are going to provision kafka version `3.6.1` with 3 controllers and 3 brokers. To learn more about Kafka CR, visit [here](/docs/guides/kafka/concepts/kafka.md). visit [here](/docs/guides/kafka/concepts/kafkaversion.md) to learn more about KafkaVersion CR. 
```yaml apiVersion: kubedb.com/v1alpha2 @@ -89,7 +89,7 @@ metadata: name: kafka-prod namespace: demo spec: - version: 3.3.2 + version: 3.6.1 enableSSL: true tls: issuerRef: @@ -131,10 +131,10 @@ Watch the bootstrap progress: ```bash $ kubectl get kf -n demo -w NAME TYPE VERSION STATUS AGE -kafka-prod kubedb.com/v1alpha2 3.3.2 Provisioning 6s -kafka-prod kubedb.com/v1alpha2 3.3.2 Provisioning 14s -kafka-prod kubedb.com/v1alpha2 3.3.2 Provisioning 50s -kafka-prod kubedb.com/v1alpha2 3.3.2 Ready 68s +kafka-prod kubedb.com/v1alpha2 3.6.1 Provisioning 6s +kafka-prod kubedb.com/v1alpha2 3.6.1 Provisioning 14s +kafka-prod kubedb.com/v1alpha2 3.6.1 Provisioning 50s +kafka-prod kubedb.com/v1alpha2 3.6.1 Ready 68s ``` Hence, the cluster is ready to use. @@ -159,7 +159,7 @@ statefulset.apps/kafka-prod-broker 3/3 4m10s statefulset.apps/kafka-prod-controller 3/3 4m8s NAME TYPE VERSION AGE -appbinding.appcatalog.appscode.com/kafka-prod kubedb.com/kafka 3.3.2 4m8s +appbinding.appcatalog.appscode.com/kafka-prod kubedb.com/kafka 3.6.1 4m8s NAME TYPE DATA AGE secret/kafka-prod-admin-cred kubernetes.io/basic-auth 2 4m14s @@ -313,6 +313,6 @@ $ kubectl delete namespace demo - Deploy [dedicated topology cluster](/docs/guides/kafka/clustering/topology-cluster/index.md) for Apache Kafka - Monitor your Kafka cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). - Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). -- Detail concepts of [KafkaVersion object](/docs/guides/kafka/concepts/catalog.md). +- Detail concepts of [KafkaVersion object](/docs/guides/kafka/concepts/kafkaversion.md). - Learn to use KubeDB managed Kafka objects using [CLIs](/docs/guides/kafka/cli/cli.md). - Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). 
\ No newline at end of file diff --git a/docs/guides/kafka/concepts/appbinding.md b/docs/guides/kafka/concepts/appbinding.md index 41344d365e..21373990e3 100644 --- a/docs/guides/kafka/concepts/appbinding.md +++ b/docs/guides/kafka/concepts/appbinding.md @@ -5,7 +5,7 @@ menu: identifier: kf-appbinding-concepts name: AppBinding parent: kf-concepts-kafka - weight: 21 + weight: 30 menu_name: docs_{{ .version }} section_menu_id: guides --- @@ -34,7 +34,7 @@ kind: AppBinding metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"enableSSL":true,"monitor":{"agent":"prometheus.io/operator","prometheus":{"exporter":{"port":9091},"serviceMonitor":{"interval":"10s","labels":{"release":"prometheus"}}}},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","tls":{"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"version":"3.4.0"}} + {"apiVersion":"kubedb.com/v1alpha2","kind":"Kafka","metadata":{"annotations":{},"name":"kafka","namespace":"demo"},"spec":{"enableSSL":true,"monitor":{"agent":"prometheus.io/operator","prometheus":{"exporter":{"port":9091},"serviceMonitor":{"interval":"10s","labels":{"release":"prometheus"}}}},"replicas":3,"storage":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"standard"},"storageType":"Durable","terminationPolicy":"WipeOut","tls":{"issuerRef":{"apiGroup":"cert-manager.io","kind":"Issuer","name":"kafka-ca-issuer"}},"version":"3.6.1"}} creationTimestamp: "2023-03-27T08:04:43Z" generation: 1 labels: @@ -70,7 +70,7 @@ spec: tlsSecret: name: kafka-client-cert type: kubedb.com/kafka - version: 3.4.0 + version: 3.6.1 ``` Here, we are going to describe the sections of an `AppBinding` crd. 
diff --git a/docs/guides/kafka/concepts/connectcluster.md b/docs/guides/kafka/concepts/connectcluster.md new file mode 100644 index 0000000000..d24b5d5a27 --- /dev/null +++ b/docs/guides/kafka/concepts/connectcluster.md @@ -0,0 +1,356 @@ +--- +title: ConnectCluster CRD +menu: + docs_{{ .version }}: + identifier: kf-connectcluster-concepts + name: ConnectCluster + parent: kf-concepts-kafka + weight: 15 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# ConnectCluster + +## What is ConnectCluster + +`ConnectCluster` is a Kubernetes `Custom Resource Definition` (CRD). It provides declarative configuration for [Kafka Connect](https://kafka.apache.org/) clusters in a Kubernetes native way. You only need to describe the desired configuration in a `ConnectCluster` object, and the KubeDB operator will create Kubernetes objects in the desired state for you. + +## ConnectCluster Spec + +As with all other Kubernetes objects, a ConnectCluster needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example ConnectCluster object.
+ +```yaml +apiVersion: kafka.kubedb.com/v1alpha1 +kind: ConnectCluster +metadata: + name: connectcluster + namespace: demo +spec: + version: 3.6.1 + healthChecker: + failureThreshold: 3 + periodSeconds: 20 + timeoutSeconds: 10 + disableSecurity: false + authSecret: + name: connectcluster-auth + enableSSL: true + keystoreCredSecret: + name: connectcluster-keystore-cred + tls: + issuerRef: + apiGroup: cert-manager.io + kind: Issuer + name: connectcluster-ca-issuer + certificates: + - alias: server + secretName: connectcluster-server-cert + - alias: client + secretName: connectcluster-client-cert + configSecret: + name: custom-connectcluster-config + replicas: 3 + connectorPlugins: + - gcs-0.13.0 + - mongodb-1.11.0 + - mysql-2.4.2.final + - postgres-2.4.2.final + - s3-2.15.0 + - jdbc-2.6.1.final + kafkaRef: + name: kafka + namespace: demo + podTemplate: + metadata: + annotations: + passMe: ToDatabasePod + labels: + thisLabel: willGoToPod + controller: + annotations: + passMe: ToStatefulSet + labels: + thisLabel: willGoToSts + monitor: + agent: prometheus.io/operator + prometheus: + exporter: + port: 56790 + serviceMonitor: + labels: + release: prometheus + interval: 10s + terminationPolicy: WipeOut +``` + +### spec.version + +`spec.version` is a required field specifying the name of the [KafkaVersion](/docs/guides/kafka/concepts/kafkaversion.md) CR where the docker images are specified. Currently, when you install KubeDB, it creates the following `KafkaVersion` resources, + +- `3.3.2` +- `3.4.1` +- `3.5.1` +- `3.5.2` +- `3.6.0` +- `3.6.1` + +### spec.replicas + +`spec.replicas` specifies the number of worker nodes in the ConnectCluster. + +KubeDB uses `PodDisruptionBudget` to ensure that a majority of these replicas are available during [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions) so that quorum is maintained.
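As a sketch tying the fields above together (the names `connectcluster`, `kafka`, and namespace `demo` are the hypothetical values from the example object), scaling the worker pool only means changing `spec.replicas`; the operator's `PodDisruptionBudget` then keeps a majority of the workers available:

```yaml
# Hypothetical fragment: a 5-worker ConnectCluster.
# With replicas: 5, a majority (3) is kept available
# during voluntary disruptions via the PodDisruptionBudget.
apiVersion: kafka.kubedb.com/v1alpha1
kind: ConnectCluster
metadata:
  name: connectcluster
  namespace: demo
spec:
  version: 3.6.1
  replicas: 5
  kafkaRef:           # AppBinding of the backing Kafka cluster
    name: kafka
    namespace: demo
```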
+ +### spec.disableSecurity + +`spec.disableSecurity` is an optional field that specifies whether to disable all kinds of security features, like basic authentication and TLS. The default value of this field is `false`. + +### spec.connectorPlugins + +`spec.connectorPlugins` is an optional field that specifies the list of connector plugins to be installed in the ConnectCluster worker nodes. The field takes a list of strings where each string represents the name of the KafkaConnectorVersion CR. To learn more about KafkaConnectorVersion CR, visit [here](/docs/guides/kafka/concepts/kafkaconnectorversion.md). +```yaml +connectorPlugins: + - <kafkaconnectorversion-name> + - <kafkaconnectorversion-name> +``` + +### spec.kafkaRef + +`spec.kafkaRef` is a required field that specifies the name and namespace of the appbinding for the `Kafka` object that the `ConnectCluster` object is associated with. +```yaml +kafkaRef: + name: <appbinding-name> + namespace: <appbinding-namespace> +``` + +### spec.configSecret + +`spec.configSecret` is an optional field that specifies the name of the secret containing the custom configuration for the ConnectCluster. The secret should contain a key `config.properties` which contains the custom configuration for the ConnectCluster. The default value of this field is `nil`. +```yaml +configSecret: + name: <secret-name> +``` + +### spec.authSecret + +`spec.authSecret` is an optional field that points to a Secret used to hold credentials for the `ConnectCluster` username and password. If not set, the KubeDB operator creates a new Secret `{connectcluster-object-name}-connect-cred` for storing the username and password for each ConnectCluster object. + +We can use this field in 3 modes. + +1. Using an external secret. In this case, you need to create an auth secret first with the required fields, then specify the secret name when creating the ConnectCluster object using `spec.authSecret.name` & set `spec.authSecret.externallyManaged` to true. +```yaml +authSecret: + name: <secret-name> + externallyManaged: true +``` + +2. Specifying the secret name only.
In this case, you need to specify the secret name when creating the ConnectCluster object using `spec.authSecret.name`. `externallyManaged` is by default false. +```yaml +authSecret: + name: <secret-name> +``` + +3. Let KubeDB do everything for you. In this case, no work for you. + +AuthSecret contains a `username` key and a `password` key which contain the username and password respectively for the ConnectCluster user. + +Example: + +```bash +$ kubectl create secret generic kcc-auth -n demo \ +--from-literal=username=jhon-doe \ +--from-literal=password=6q8u_2jMOW-OOZXk +secret "kcc-auth" created +``` + +```yaml +apiVersion: v1 +data: + password: NnE4dV8yak1PVy1PT1pYaw== + username: amhvbi1kb2U= +kind: Secret +metadata: + name: kcc-auth + namespace: demo +type: Opaque +``` + +Secrets provided by users are not managed by KubeDB, and therefore, won't be modified or garbage collected by the KubeDB operator (version 0.13.0 and higher). + +### spec.enableSSL + +`spec.enableSSL` is an `optional` field that specifies whether to enable TLS on the HTTP layer. The default value of this field is `false`. + +```yaml +spec: + enableSSL: true +``` + +### spec.keystoreCredSecret + +`spec.keystoreCredSecret` is an `optional` field that specifies the name of the secret containing the keystore credentials for the ConnectCluster. The secret should contain three keys `ssl.keystore.password`, `ssl.key.password` and `ssl.truststore.password`. The default value of this field is `nil`. + +```yaml +spec: + keystoreCredSecret: + name: <secret-name> +``` + +### spec.tls + +`spec.tls` specifies the TLS/SSL configurations. The KubeDB operator supports TLS management by using the [cert-manager](https://cert-manager.io/). Currently, the operator only supports `PKCS#8` encoded certificates.
+ +```yaml +spec: + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: Issuer + name: kcc-issuer + certificates: + - alias: server + privateKey: + encoding: PKCS8 + secretName: kcc-server-cert + subject: + organizations: + - kubedb + - alias: client + privateKey: + encoding: PKCS8 + secretName: kcc-client-cert + subject: + organizations: + - kubedb +``` + +The `spec.tls` contains the following fields: + +- `tls.issuerRef` - is an `optional` field that references the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for ConnectCluster. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates the necessary certificate (valid: 365 days) secrets using that CA. + - `apiGroup` - is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`. + - `kind` - is the type of resource that is being referenced. The supported values are `Issuer` and `ClusterIssuer`. + - `name` - is the name of the resource ( `Issuer` or `ClusterIssuer` ) that is being referenced. + +- `tls.certificates` - is an `optional` field that specifies a list of certificate configurations used to configure the certificates. It has the following fields: + - `alias` - represents the identifier of the certificate. It has the following possible values: + - `server` - is used for the server certificate configuration. + - `client` - is used for the client certificate configuration. + + - `secretName` - ( `string` | `"-alias-cert"` ) - specifies the k8s secret name that holds the certificates. + + - `subject` - specifies an `X.509` distinguished name (DN). It has the following configurable fields: + - `organizations` ( `[]string` | `nil` ) - is a list of organization names. + - `organizationalUnits` ( `[]string` | `nil` ) - is a list of organization unit names.
+ - `countries` ( `[]string` | `nil` ) - is a list of country names (i.e. country codes). + - `localities` ( `[]string` | `nil` ) - is a list of locality names. + - `provinces` ( `[]string` | `nil` ) - is a list of province names. + - `streetAddresses` ( `[]string` | `nil` ) - is a list of street addresses. + - `postalCodes` ( `[]string` | `nil` ) - is a list of postal codes. + - `serialNumber` ( `string` | `""` ) - is a serial number. + + For more details, visit [here](https://golang.org/pkg/crypto/x509/pkix/#Name). + + - `duration` ( `string` | `""` ) - is the period during which the certificate is valid. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as `"300m"`, `"1.5h"` or `"20h45m"`. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". + - `renewBefore` ( `string` | `""` ) - specifies how long before expiry the certificate should be renewed. + - `dnsNames` ( `[]string` | `nil` ) - is a list of subject alt names. + - `ipAddresses` ( `[]string` | `nil` ) - is a list of IP addresses. + - `uris` ( `[]string` | `nil` ) - is a list of URI Subject Alternative Names. + - `emailAddresses` ( `[]string` | `nil` ) - is a list of email Subject Alternative Names. + + + +### spec.monitor + +ConnectCluster managed by KubeDB can be monitored with the Prometheus operator out-of-the-box. To learn more, +- [Monitor Apache Kafka with Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md) + +### spec.podTemplate + +KubeDB allows providing a template for the pods through `spec.podTemplate`. The KubeDB operator will pass the information provided in `spec.podTemplate` to the StatefulSet created for ConnectCluster.
+ +KubeDB accepts the following fields to set in `spec.podTemplate`: + +- metadata: + - annotations (pod's annotation) + - labels (pod's labels) +- controller: + - annotations (statefulset's annotation) + - labels (statefulset's labels) +- spec: + - volumes + - initContainers + - containers + - imagePullSecrets + - nodeSelector + - affinity + - serviceAccountName + - schedulerName + - tolerations + - priorityClassName + - priority + - securityContext + - livenessProbe + - readinessProbe + - lifecycle + +You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/39bf8b2/api/v2/types.go#L44-L279). Uses of some fields of `spec.podTemplate` are described below, + +#### spec.podTemplate.spec.nodeSelector + +`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector). + +#### spec.podTemplate.spec.resources + +`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/). + +### spec.serviceTemplates + +You can also provide templates for the services created by the KubeDB operator for the Kafka cluster through `spec.serviceTemplates`. This will allow you to set the type and other properties of the services. + +KubeDB allows the following fields to set in `spec.serviceTemplates`: +- `alias` represents the identifier of the service. It has the following possible value: + - `stats` is used for the exporter service identification.
+ +- metadata: + - labels + - annotations +- spec: + - type + - ports + - clusterIP + - externalIPs + - loadBalancerIP + - loadBalancerSourceRanges + - externalTrafficPolicy + - healthCheckNodePort + - sessionAffinityConfig + +See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail. + +### spec.terminationPolicy + +`spec.terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `ConnectCluster` crd or which resources KubeDB should keep or delete when you delete the `ConnectCluster` crd. KubeDB provides the following three termination policies: + +- Delete +- DoNotTerminate +- WipeOut + +When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` feature. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the database as long as the `spec.terminationPolicy` is set to `DoNotTerminate`. + +## spec.healthChecker +It defines the attributes for the health checker. +- `spec.healthChecker.periodSeconds` specifies how often to perform the health check. +- `spec.healthChecker.timeoutSeconds` specifies the number of seconds after which the probe times out. +- `spec.healthChecker.failureThreshold` specifies the minimum consecutive failures for the healthChecker to be considered failed. +- `spec.healthChecker.disableWriteCheck` specifies whether to disable the writeCheck or not. + +Know details about KubeDB Health checking from this [blog post](https://appscode.com/blog/post/kubedb-health-checker/). + +## Next Steps + +- Learn how to use KubeDB to run an Apache Kafka Connect cluster [here](/docs/guides/kafka/README.md). +- Monitor your ConnectCluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
+- Detail concepts of [KafkaConnectorVersion object](/docs/guides/kafka/concepts/kafkaconnectorversion.md).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/guides/kafka/cli/cli.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/concepts/kafka.md b/docs/guides/kafka/concepts/kafka.md
index beb578b833..d8b2e4ccc3 100644
--- a/docs/guides/kafka/concepts/kafka.md
+++ b/docs/guides/kafka/concepts/kafka.md
@@ -98,21 +98,24 @@ spec:
     agent: prometheus.io/operator
     prometheus:
       exporter:
-        port: 9091
+        port: 56790
       serviceMonitor:
         labels:
           release: prometheus
         interval: 10s
-  version: 3.4.0
+  version: 3.6.1
 ```

 ### spec.version

-`spec.version` is a required field specifying the name of the [KafkaVersion](/docs/guides/kafka/concepts/catalog.md) crd where the docker images are specified. Currently, when you install KubeDB, it creates the following `Kafka` resources,
+`spec.version` is a required field specifying the name of the [KafkaVersion](/docs/guides/kafka/concepts/kafkaversion.md) CR where the docker images are specified. Currently, when you install KubeDB, it creates the following `KafkaVersion` resources:

-- `3.3.0`
 - `3.3.2`
-- `3.4.0`
+- `3.4.1`
+- `3.5.1`
+- `3.5.2`
+- `3.6.0`
+- `3.6.1`

 ### spec.replicas

@@ -240,17 +243,15 @@ spec:

 The `spec.tls` contains the following fields:

-- `tls.issuerRef` - is an `optional` field that references to the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for Elasticsearch. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates necessary certificate (valid: 365 days) secrets using that CA.
+- `tls.issuerRef` - is an `optional` field that references the `Issuer` or `ClusterIssuer` custom resource object of [cert-manager](https://cert-manager.io/docs/concepts/issuer/). It is used to generate the necessary certificate secrets for Kafka. If the `issuerRef` is not specified, the operator creates a self-signed CA and also creates necessary certificate (valid: 365 days) secrets using that CA.
   - `apiGroup` - is the group name of the resource that is being referenced. Currently, the only supported value is `cert-manager.io`.
   - `kind` - is the type of resource that is being referenced. The supported values are `Issuer` and `ClusterIssuer`.
   - `name` - is the name of the resource ( `Issuer` or `ClusterIssuer` ) that is being referenced.

 - `tls.certificates` - is an `optional` field that specifies a list of certificate configurations used to configure the certificates. It has the following fields:
   - `alias` - represents the identifier of the certificate. It has the following possible value:
-    - `transport` - is used for the transport layer certificate configuration.
-    - `http` - is used for the HTTP layer certificate configuration.
-    - `admin` - is used for the admin certificate configuration. Available for the `SearchGuard` and the `OpenDistro` auth-plugins.
-    - `metrics-exporter` - is used for the metrics-exporter sidecar certificate configuration.
+    - `server` - is used for the server certificate configuration.
+    - `client` - is used for the client certificate configuration.

   - `secretName` - ( `string` | `"-alias-cert"` ) - specifies the k8s secret name that holds the certificates.

@@ -310,10 +311,9 @@ KubeDB accept following fields to set in `spec.podTemplate:`
   - annotations (statefulset's annotation)
   - labels (statefulset's labels)
 - spec:
-  - args
-  - env
   - resources
   - initContainers
+  - containers
   - imagePullSecrets
   - nodeSelector
   - affinity
@@ -327,18 +327,10 @@ KubeDB accept following fields to set in `spec.podTemplate:`
   - readinessProbe
   - lifecycle

-You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/ea366935d5bad69d7643906c7556923271592513/api/v1/types.go#L42-L259). Uses of some field of `spec.podTemplate` is described below,
+You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/39bf8b2/api/v2/types.go#L44-L279). Uses of some of the fields of `spec.podTemplate` are described below.

 NB. If `spec.topology` is set, then `spec.podTemplate` needs to be empty. Instead use `spec.topology..podTemplate`

-#### spec.podTemplate.spec.args
-
-`spec.podTemplate.spec.args` is an optional field. This can be used to provide additional arguments to database installation.
-
-#### spec.podTemplate.spec.env
-
-`spec.podTemplate.spec.env` is an optional field that specifies the environment variables to pass to the Kafka docker image.
-
 #### spec.podTemplate.spec.nodeSelector

 `spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) .
@@ -390,10 +382,10 @@ Know details about KubeDB Health checking from this [blog post](https://appscode

 ## Next Steps

-- Learn how to use KubeDB to run a Apache Kafka cluster [here](/docs/guides/kafka/README.md).
+- Learn how to use KubeDB to run an Apache Kafka cluster [here](/docs/guides/kafka/README.md).
 - Deploy [dedicated topology cluster](/docs/guides/kafka/clustering/topology-cluster/index.md) for Apache Kafka
 - Deploy [combined cluster](/docs/guides/kafka/clustering/combined-cluster/index.md) for Apache Kafka
 - Monitor your Kafka cluster with KubeDB using [`out-of-the-box` Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
-- Detail concepts of [KafkaVersion object](/docs/guides/kafka/concepts/catalog.md).
+- Detail concepts of [KafkaVersion object](/docs/guides/kafka/concepts/kafkaversion.md).
 - Learn to use KubeDB managed Kafka objects using [CLIs](/docs/guides/kafka/cli/cli.md).
 - Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/concepts/kafkaconnectorversion.md b/docs/guides/kafka/concepts/kafkaconnectorversion.md
new file mode 100644
index 0000000000..b8431ab471
--- /dev/null
+++ b/docs/guides/kafka/concepts/kafkaconnectorversion.md
@@ -0,0 +1,90 @@
+---
+title: KafkaConnectorVersion CRD
+menu:
+  docs_{{ .version }}:
+    identifier: kf-kafkaconnectorversion-concepts
+    name: KafkaConnectorVersion
+    parent: kf-concepts-kafka
+    weight: 25
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# KafkaConnectorVersion
+
+## What is KafkaConnectorVersion
+
+`KafkaConnectorVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for installing Connector plugins to the ConnectCluster worker nodes with KubeDB in a Kubernetes native way.
+
+When you install KubeDB, a `KafkaConnectorVersion` custom resource will be created automatically for every supported Kafka Connector version. You have to specify a list of `KafkaConnectorVersion` CR names in the `spec.connectorPlugins` field of the [ConnectCluster](/docs/guides/kafka/concepts/connectcluster.md) CR. Then, KubeDB will use the docker images specified in the `KafkaConnectorVersion` CR to install your connector plugins.
+
+Using a separate CR for specifying the respective docker images allows us to modify the images and policies independently of the KubeDB operator. This will also allow the users to use a custom image for the connector plugins.
+
+## KafkaConnectorVersion Spec
+
+As with all other Kubernetes objects, a KafkaConnectorVersion needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section.
+
+```yaml
+apiVersion: catalog.kubedb.com/v1alpha1
+kind: KafkaConnectorVersion
+metadata:
+  annotations:
+    meta.helm.sh/release-name: kubedb
+    meta.helm.sh/release-namespace: kubedb
+  creationTimestamp: "2024-05-02T06:38:17Z"
+  generation: 1
+  labels:
+    app.kubernetes.io/instance: kubedb
+    app.kubernetes.io/managed-by: Helm
+    app.kubernetes.io/name: kubedb-catalog
+    app.kubernetes.io/version: v2024.4.27
+    helm.sh/chart: kubedb-catalog-v2024.4.27
+  name: mongodb-1.11.0
+  resourceVersion: "2873"
+  uid: a5808f31-9d27-4979-8a7d-f3357dbba6ba
+spec:
+  connectorPlugin:
+    image: ghcr.io/appscode-images/kafka-connector-mongodb:1.11.0
+  securityContext:
+    runAsUser: 1001
+  type: MongoDB
+  version: 1.11.0
+```
+
+### metadata.name
+
+`metadata.name` is a required field that specifies the name of the `KafkaConnectorVersion` CR. You have to specify this name in the `spec.connectorPlugins` field of the ConnectCluster CR.
+
+We follow this convention for naming KafkaConnectorVersion CR:
+
+- Name format: `{Plugin-Type}-{version}`
+
+### spec.version
+
+`spec.version` is a required field that specifies the original version of the Connector plugin that has been used to build the docker image specified in the `spec.connectorPlugin.image` field.
+
+### spec.deprecated
+
+`spec.deprecated` is an optional field that specifies whether the docker images specified here are supported by the current KubeDB operator.
+
+The default value of this field is `false`. If `spec.deprecated` is set to `true`, the KubeDB operator will skip processing this CRD object and will add an event to the CRD object specifying that the DB version is deprecated.
+
+### spec.connectorPlugin.image
+
+`spec.connectorPlugin.image` is a required field that specifies the docker image which will be used to install the connector plugin by the KubeDB operator.
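+For illustration, a ConnectCluster might reference the `mongodb-1.11.0` plugin shown above through `spec.connectorPlugins`, like below. This is a minimal sketch only; the `apiVersion`/`kind` and the surrounding field values (referenced Kafka, replica count, and so on) are illustrative assumptions, not taken from this page:
+
+```yaml
+apiVersion: kafka.kubedb.com/v1alpha1
+kind: ConnectCluster
+metadata:
+  name: connectcluster-demo     # illustrative name
+  namespace: demo
+spec:
+  version: 3.6.1
+  replicas: 2
+  connectorPlugins:
+    - mongodb-1.11.0            # must match metadata.name of a KafkaConnectorVersion CR
+  kafkaRef:                     # the Kafka cluster this ConnectCluster works against
+    name: kafka
+    namespace: demo
+  terminationPolicy: WipeOut
+```
+
+Each entry in `spec.connectorPlugins` has to match the `metadata.name` of an existing `KafkaConnectorVersion` CR.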
+
+The command below shows how additional (custom) pod security policy names can be allowed when installing or upgrading the KubeDB operator:
+
+```bash
+helm upgrade -i kubedb oci://ghcr.io/appscode-charts/kubedb \
+  --namespace kubedb --create-namespace \
+  --set additionalPodSecurityPolicies[0]=custom-db-policy \
+  --set additionalPodSecurityPolicies[1]=custom-snapshotter-policy \
+  --set-file global.license=/path/to/the/license.txt \
+  --wait --burst-limit=10000 --debug
+```
+
+## Next Steps
+
+- Learn about ConnectCluster CRD [here](/docs/guides/kafka/concepts/connectcluster.md).
+- Deploy your first ConnectCluster with KubeDB by following the guide [here](/docs/guides/kafka/quickstart/overview/connectcluster/index.md).
diff --git a/docs/guides/kafka/concepts/catalog.md b/docs/guides/kafka/concepts/kafkaversion.md
similarity index 70%
rename from docs/guides/kafka/concepts/catalog.md
rename to docs/guides/kafka/concepts/kafkaversion.md
index 353ed094ee..2bd2e61b96 100644
--- a/docs/guides/kafka/concepts/catalog.md
+++ b/docs/guides/kafka/concepts/kafkaversion.md
@@ -5,7 +5,7 @@ menu:
     identifier: kf-catalog-concepts
     name: KafkaVersion
     parent: kf-concepts-kafka
-    weight: 15
+    weight: 20
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
@@ -18,9 +18,9 @@ section_menu_id: guides

 `KafkaVersion` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration to specify the docker images to be used for [Kafka](https://kafka.apache.org) database deployed with KubeDB in a Kubernetes native way.

-When you install KubeDB, a `KafkaVersion` custom resource will be created automatically for every supported Kafka versions. You have to specify the name of `KafkaVersion` crd in `spec.version` field of [Kafka](/docs/guides/kafka/concepts/kafka.md) crd. Then, KubeDB will use the docker images specified in the `KafkaVersion` crd to create your expected database.
+When you install KubeDB, a `KafkaVersion` custom resource will be created automatically for every supported Kafka version. You have to specify the name of the `KafkaVersion` CR in the `spec.version` field of the [Kafka](/docs/guides/kafka/concepts/kafka.md) CR. Then, KubeDB will use the docker images specified in the `KafkaVersion` CR to create your expected database.

-Using a separate crd for specifying respective docker images, and pod security policy names allow us to modify the images, and policies independent of KubeDB operator.This will also allow the users to use a custom image for the database.
+Using a separate CRD for specifying the respective docker images and pod security policy names allows us to modify the images and policies independently of the KubeDB operator. This will also allow the users to use a custom image for the database.

 ## KafkaVersion Spec

@@ -31,40 +31,42 @@ apiVersion: catalog.kubedb.com/v1alpha1
 kind: KafkaVersion
 metadata:
   annotations:
-    meta.helm.sh/release-name: kubedb-catalog
+    meta.helm.sh/release-name: kubedb
     meta.helm.sh/release-namespace: kubedb
-  creationTimestamp: "2023-03-23T10:15:24Z"
-  generation: 2
+  creationTimestamp: "2024-05-02T06:38:17Z"
+  generation: 1
   labels:
-    app.kubernetes.io/instance: kubedb-catalog
+    app.kubernetes.io/instance: kubedb
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/name: kubedb-catalog
-    app.kubernetes.io/version: v2023.02.28
-    helm.sh/chart: kubedb-catalog-v2023.02.28
-  name: 3.4.0
-  resourceVersion: "472767"
-  uid: 36a167a3-5218-4e32-b96d-d6b5b0c86125
+    app.kubernetes.io/version: v2024.4.27
+    helm.sh/chart: kubedb-catalog-v2024.4.27
+  name: 3.6.1
+  resourceVersion: "2881"
+  uid: 778fb80c-b37a-4ac6-bfaa-fec83e5f49c7
 spec:
   connectCluster:
     image: ghcr.io/appscode-images/kafka-connect-cluster:3.6.1
+  cruiseControl:
+    image: ghcr.io/appscode-images/kafka-cruise-control:3.6.1
   db:
-    image: kubedb/kafka-kraft:3.4.0
+    image: ghcr.io/appscode-images/kafka-kraft:3.6.1
   podSecurityPolicies:
     databasePolicyName: kafka-db
-  version: 3.4.0
-  cruiseControl:
-    image: ghcr.io/kubedb/cruise-control:3.4.0
+  securityContext:
+    runAsUser: 1001
+  version: 3.6.1
 ```

 ### metadata.name

-`metadata.name` is a required field that specifies the name of the `KafkaVersion` crd. You have to specify this name in `spec.version` field of [Kafka](/docs/guides/kafka/concepts/kafka.md) crd.
+`metadata.name` is a required field that specifies the name of the `KafkaVersion` CR. You have to specify this name in the `spec.version` field of the [Kafka](/docs/guides/kafka/concepts/kafka.md) CR.

-We follow this convention for naming KafkaVersion crd:
+We follow this convention for naming KafkaVersion CR:

 - Name format: `{Original Kafka image version}-{modification tag}`

-We use official Apache Kafka release tar files to build docker images for supporting Kafka versions and re-tag the image with v1, v2 etc. modification tag when there's any. An image with higher modification tag will have more features than the images with lower modification tag. Hence, it is recommended to use KafkaVersion crd with the highest modification tag to enjoy the latest features.
+We use official Apache Kafka release tar files to build docker images for the supported Kafka versions and re-tag the image with v1, v2, etc. modification tags when there is any. An image with a higher modification tag will have more features than the images with lower modification tags. Hence, it is recommended to use the KafkaVersion CR with the highest modification tag to enjoy the latest features.

 ### spec.version

@@ -80,6 +82,14 @@ The default value of this field is `false`. If `spec.deprecated` is set to `true

 `spec.db.image` is a required field that specifies the docker image which will be used to create StatefulSet by KubeDB operator to create expected Kafka database.

+### spec.cruiseControl.image
+
+`spec.cruiseControl.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create the Deployment for the expected Kafka Cruise Control.
+
+### spec.connectCluster.image
+
+`spec.connectCluster.image` is a required field that specifies the docker image which will be used by the KubeDB operator to create the StatefulSet for the expected Kafka Connect Cluster.
+
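+For example, a `Kafka` CR picks up this catalog entry by setting `spec.version` to the `metadata.name` shown above. This is a minimal sketch; the replica, storage, and termination-policy values below are illustrative, only the version reference is the point:
+
+```yaml
+apiVersion: kubedb.com/v1alpha2
+kind: Kafka
+metadata:
+  name: kafka
+  namespace: demo
+spec:
+  version: 3.6.1            # must match metadata.name of a KafkaVersion CR
+  replicas: 3
+  storageType: Durable
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: standard
+  terminationPolicy: WipeOut
+```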