diff --git a/docs/guides/kafka/README.md b/docs/guides/kafka/README.md
index 3d5425b77e..0a37c5f161 100644
--- a/docs/guides/kafka/README.md
+++ b/docs/guides/kafka/README.md
@@ -17,21 +17,26 @@ aliases:
## Supported Kafka Features
-
-| Features |
-|----------------------------------------------------------------|
-| Clustering - Combined (shared controller and broker nodes) |
-| Clustering - Topology (dedicated controllers and broker nodes) |
-| Kafka Connect Cluster |
-| Connectors |
-| Custom Docker Image |
-| Authentication & Authorization |
-| Persistent Volume |
-| Custom Volume |
-| TLS: using ( [Cert Manager](https://cert-manager.io/docs/) ) |
-| Reconfigurable Health Checker |
-| Externally manageable Auth Secret |
-| Monitoring with Prometheus & Grafana |
+| Features |
+|------------------------------------------------------------------------------------|
+| Clustering - Combined (shared controller and broker nodes) |
+| Clustering - Topology (dedicated controllers and broker nodes) |
+| Kafka Connect Cluster |
+| Connectors |
+| Custom Configuration |
+| Automated Version Update |
+| Automated Vertical Scaling                                                           |
+| Automated Horizontal Scaling |
+| Automated Volume Expansion |
+| Custom Docker Image |
+| Authentication & Authorization |
+| Persistent Volume |
+| Custom Volume |
+| TLS: Add, Remove, Update, Rotate ( [Cert Manager](https://cert-manager.io/docs/) ) |
+| Reconfigurable Health Checker |
+| Externally manageable Auth Secret |
+| Monitoring with Prometheus & Grafana |
+| Autoscaling (vertically, volume) |
## Supported Kafka Versions
@@ -55,8 +60,14 @@ ref : https://cacoo.com/diagrams/4PxSEzhFdNJRIbIb/0281B
+## Lifecycle of ConnectCluster Object
+
## User Guide
-- [Quickstart Kafka](/docs/guides/kafka/quickstart/overview/index.md) with KubeDB Operator.
+- [Quickstart Kafka](/docs/guides/kafka/quickstart/overview/kafka/index.md) with KubeDB Operator.
- Kafka Clustering supported by KubeDB
- [Combined Clustering](/docs/guides/kafka/clustering/combined-cluster/index.md)
- [Topology Clustering](/docs/guides/kafka/clustering/topology-cluster/index.md)
diff --git a/docs/guides/kafka/concepts/connector.md b/docs/guides/kafka/concepts/connector.md
new file mode 100644
index 0000000000..93df7829cc
--- /dev/null
+++ b/docs/guides/kafka/concepts/connector.md
@@ -0,0 +1,74 @@
+---
+title: Connector CRD
+menu:
+ docs_{{ .version }}:
+ identifier: kf-connector-concepts
+ name: Connector
+ parent: kf-concepts-kafka
+ weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Connector
+
+## What is Connector
+
+`Connector` is a Kubernetes `Custom Resource Definition` (CRD). It provides a declarative configuration for a [Kafka Connect](https://kafka.apache.org/) connector in a Kubernetes native way. You only need to describe the desired configuration in a `Connector` object, and the KubeDB operator will create Kubernetes objects in the desired state for you.
+
+## Connector Spec
+
+As with all other Kubernetes objects, a Connector needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example Connector object.
+
+```yaml
+apiVersion: kafka.kubedb.com/v1alpha1
+kind: Connector
+metadata:
+ name: mongodb-source-connector
+ namespace: demo
+spec:
+ configSecret:
+ name: mongodb-source-config
+ connectClusterRef:
+ name: connectcluster-quickstart
+ namespace: demo
+ terminationPolicy: WipeOut
+```
+
+### spec.configSecret
+
+`spec.configSecret` is a required field that specifies the name of the secret containing the configuration for the Connector. The secret should contain a key `config.properties` that holds the connector configuration.
+
+```yaml
+spec:
+ configSecret:
+ name:
+```
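+
+A minimal sketch of such a secret, assuming the `mongodb-source` configuration used later in this guide (the secret name and properties are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: mongodb-source-config
+  namespace: demo
+stringData:
+  # the key name must be config.properties
+  config.properties: |
+    connector.class=com.mongodb.kafka.connect.MongoSourceConnector
+    tasks.max=1
+```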
+
+### spec.connectClusterRef
+
+`spec.connectClusterRef` is a required field that specifies the name and namespace of the `ConnectCluster` object that the `Connector` object is associated with. This is an `AppBinding` reference to the `ConnectCluster` object.
+
+```yaml
+spec:
+ connectClusterRef:
+ name:
+ namespace:
+```
+
+### spec.terminationPolicy
+
+`spec.terminationPolicy` gives flexibility whether to `nullify` (reject) the delete operation of the `Connector` CR or to control which resources KubeDB should keep or delete when you delete the `Connector` CR. KubeDB provides the following three termination policies:
+
+- Delete
+- DoNotTerminate
+- WipeOut
+
+When `terminationPolicy` is `DoNotTerminate`, KubeDB takes advantage of the `ValidationWebhook` feature in Kubernetes 1.9.0 or later clusters to implement the `DoNotTerminate` behavior. If the admission webhook is enabled, `DoNotTerminate` prevents users from deleting the resource as long as `spec.terminationPolicy` is set to `DoNotTerminate`.
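+
+For example, if a delete request is rejected because the policy is `DoNotTerminate`, you can first switch the policy and then delete (a sketch using `kubectl patch`; the connector name is the one from the example above):
+
+```bash
+$ kubectl patch -n demo connector mongodb-source-connector \
+    -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge"
+$ kubectl delete -n demo connector mongodb-source-connector
+```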
+
+## Next Steps
+
+- Learn how to use KubeDB to run an Apache Kafka Connect cluster [here](/docs/guides/kafka/quickstart/overview/connectcluster/index.md).
+- Detail concepts of [KafkaConnectorVersion object](/docs/guides/kafka/concepts/kafkaconnectorversion.md).
+- Learn to use KubeDB managed Kafka objects using [CLIs](/docs/guides/kafka/cli/cli.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/kafka/concepts/kafkaconnectorversion.md b/docs/guides/kafka/concepts/kafkaconnectorversion.md
index b8431ab471..fa5463a8f9 100644
--- a/docs/guides/kafka/concepts/kafkaconnectorversion.md
+++ b/docs/guides/kafka/concepts/kafkaconnectorversion.md
@@ -5,7 +5,7 @@ menu:
identifier: kf-kafkaconnectorversion-concepts
name: KafkaConnectorVersion
parent: kf-concepts-kafka
- weight: 25
+ weight: 30
menu_name: docs_{{ .version }}
section_menu_id: guides
---
diff --git a/docs/guides/kafka/concepts/kafkaversion.md b/docs/guides/kafka/concepts/kafkaversion.md
index 2bd2e61b96..e18f1aa9bb 100644
--- a/docs/guides/kafka/concepts/kafkaversion.md
+++ b/docs/guides/kafka/concepts/kafkaversion.md
@@ -5,7 +5,7 @@ menu:
identifier: kf-catalog-concepts
name: KafkaVersion
parent: kf-concepts-kafka
- weight: 20
+ weight: 25
menu_name: docs_{{ .version }}
section_menu_id: guides
---
diff --git a/docs/guides/kafka/monitoring/overview.md b/docs/guides/kafka/monitoring/overview.md
index 4f7c537a22..742f6cde7d 100644
--- a/docs/guides/kafka/monitoring/overview.md
+++ b/docs/guides/kafka/monitoring/overview.md
@@ -31,16 +31,16 @@ When a user creates a Kafka crd with `spec.monitor` section configured, KubeDB o
In order to enable monitoring for a database, you have to configure `spec.monitor` section. KubeDB provides following options to configure `spec.monitor` section:
-| Field | Type | Uses |
-| -------------------------------------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
+| Field | Type | Uses |
+|----------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------|
| `spec.monitor.agent` | `Required` | Type of the monitoring agent that will be used to monitor this database. It can be `prometheus.io/builtin` or `prometheus.io/operator`. |
-| `spec.monitor.prometheus.exporter.port` | `Optional` | Port number where the exporter side car will serve metrics. |
-| `spec.monitor.prometheus.exporter.args` | `Optional` | Arguments to pass to the exporter sidecar. |
-| `spec.monitor.prometheus.exporter.env` | `Optional` | List of environment variables to set in the exporter sidecar container. |
-| `spec.monitor.prometheus.exporter.resources` | `Optional` | Resources required by exporter sidecar container. |
-| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with. |
-| `spec.monitor.prometheus.serviceMonitor.labels` | `Optional` | Labels for `ServiceMonitor` crd. |
-| `spec.monitor.prometheus.serviceMonitor.interval` | `Optional` | Interval at which metrics should be scraped. |
+| `spec.monitor.prometheus.exporter.port` | `Optional` | Port number where the exporter side car will serve metrics. |
+| `spec.monitor.prometheus.exporter.args` | `Optional` | Arguments to pass to the exporter sidecar. |
+| `spec.monitor.prometheus.exporter.env` | `Optional` | List of environment variables to set in the exporter sidecar container. |
+| `spec.monitor.prometheus.exporter.resources` | `Optional` | Resources required by exporter sidecar container. |
+| `spec.monitor.prometheus.exporter.securityContext` | `Optional` | Security options the exporter should run with. |
+| `spec.monitor.prometheus.serviceMonitor.labels` | `Optional` | Labels for `ServiceMonitor` crd. |
+| `spec.monitor.prometheus.serviceMonitor.interval` | `Optional` | Interval at which metrics should be scraped. |
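+
+For example, a minimal `spec.monitor` section using the Prometheus operator could look like this (a sketch; the `release: prometheus` label is an assumption about how your Prometheus installation selects `ServiceMonitor` objects):
+
+```yaml
+spec:
+  monitor:
+    agent: prometheus.io/operator
+    prometheus:
+      serviceMonitor:
+        labels:
+          release: prometheus
+        interval: 10s
+```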
## Sample Configuration
@@ -60,7 +60,7 @@ spec:
name: kafka-ca-issuer
kind: Issuer
replicas: 3
- version: 3.4.0
+ version: 3.6.1
storage:
accessModes:
- ReadWriteOnce
diff --git a/docs/guides/kafka/quickstart/overview/_index.md b/docs/guides/kafka/quickstart/overview/_index.md
index 0f9e72559a..34fbb35929 100644
--- a/docs/guides/kafka/quickstart/overview/_index.md
+++ b/docs/guides/kafka/quickstart/overview/_index.md
@@ -5,6 +5,6 @@ menu:
identifier: kf-overview-kafka
name: Overview
parent: kf-quickstart-kafka
- weight: 15
+ weight: 10
menu_name: docs_{{ .version }}
---
diff --git a/docs/guides/kafka/quickstart/overview/connectcluster/index.md b/docs/guides/kafka/quickstart/overview/connectcluster/index.md
index 285c6e3c2d..96d7af82a8 100644
--- a/docs/guides/kafka/quickstart/overview/connectcluster/index.md
+++ b/docs/guides/kafka/quickstart/overview/connectcluster/index.md
@@ -5,7 +5,7 @@ menu:
identifier: kf-kafka-overview-connectcluster
name: ConnectCluster
parent: kf-overview-kafka
- weight: 10
+ weight: 15
menu_name: docs_{{ .version }}
section_menu_id: guides
---
@@ -39,13 +39,15 @@ demo Active 9s
> Note: YAML files used in this tutorial are stored in [guides/kafka/quickstart/overview/connectcluster/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/kafka/quickstart/overview/connectcluster/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
-> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/quickstart/overview/index.md#tips-for-testing).
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka Connect Cluster. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/quickstart/overview/connectcluster/index.md#tips-for-testing).
## Find Available ConnectCluster Versions
When you install the KubeDB operator, it registers a CRD named [KafkaVersion](/docs/guides/kafka/concepts/kafkaversion.md). ConnectCluster uses the KafkaVersion CR to define its version specification. The installation process comes with a set of tested KafkaVersion objects. Let's check available KafkaVersions by,
```bash
+$ kubectl get kfversion
+
NAME VERSION DB_IMAGE DEPRECATED AGE
3.3.2 3.3.2 ghcr.io/appscode-images/kafka-kraft:3.3.2 24m
3.4.1 3.4.1 ghcr.io/appscode-images/kafka-kraft:3.4.1 24m
@@ -62,9 +64,11 @@ In this tutorial, we will use `3.6.1` KafkaVersion CR to create a Kafka Connect
## Find Available KafkaConnector Versions
-When you install the KubeDB operator, it registers a CRD named [KafkaVersion](/docs/guides/kafka/concepts/kafkaversion.md). KafkaConnector Version use to load connector-plugins to run ConnectCluster worker node(ex. mongodb-source/sink). The installation process comes with a set of tested KafkaConnectorVersion objects. Let's check available KafkaConnectorVersions by,
+When you install the KubeDB operator, it registers a CRD named [KafkaConnectorVersion](/docs/guides/kafka/concepts/kafkaconnectorversion.md). KafkaConnectorVersion is used to load connector plugins to run on ConnectCluster worker nodes (e.g. mongodb-source/sink). The installation process comes with a set of tested KafkaConnectorVersion objects. Let's check available KafkaConnectorVersions by,
```bash
+$ kubectl get kcversion
+
NAME VERSION CONNECTOR_IMAGE DEPRECATED AGE
gcs-0.13.0 0.13.0 ghcr.io/appscode-images/kafka-connector-gcs:0.13.0 10m
jdbc-2.6.1.final 2.6.1 ghcr.io/appscode-images/kafka-connector-jdbc:2.6.1.final 10m
@@ -74,8 +78,20 @@ postgres-2.4.2.final 2.4.2 ghcr.io/appscode-images/kafka-connector-postgre
s3-2.15.0 2.15.0 ghcr.io/appscode-images/kafka-connector-s3:2.15.0 10m
```
+
Notice the `DEPRECATED` column. Here, `true` means that this KafkaConnectorVersion is deprecated for the current KubeDB version. KubeDB will not work with a deprecated KafkaConnectorVersion. You can also use the short form `kcversion` to check available KafkaConnectorVersions.
+### Details of ConnectorPlugins
+
+| Connector Plugin | Type | Version | Connector Class |
+|----------------------|--------|-------------|------------------------------------------------------------|
+| mongodb-1.11.0 | Source | 1.11.0 | com.mongodb.kafka.connect.MongoSourceConnector |
+| mongodb-1.11.0 | Sink | 1.11.0 | com.mongodb.kafka.connect.MongoSinkConnector |
+| mysql-2.4.2.final | Source | 2.4.2.Final | io.debezium.connector.mysql.MySqlConnector |
+| postgres-2.4.2.final | Source | 2.4.2.Final | io.debezium.connector.postgresql.PostgresConnector |
+| jdbc-2.6.1.final | Sink | 2.6.1.Final | io.debezium.connector.jdbc.JdbcSinkConnector |
+| s3-2.15.0 | Sink | 2.15.0 | io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector |
+| gcs-0.13.0 | Sink | 0.13.0 | io.aiven.kafka.connect.gcs.GcsSinkConnector |
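+
+These plugin names are what you reference from `spec.connectorPlugins` of a ConnectCluster. A minimal sketch (plugin names taken from the table above):
+
+```yaml
+spec:
+  connectorPlugins:
+    - mongodb-1.11.0
+    - postgres-2.4.2.final
+```
+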
## Create a Kafka Connect Cluster
@@ -105,13 +121,29 @@ spec:
Here,
-- `spec.version` - is the name of the KafkaVersion CR. Here, a Kafka of version `3.6.1` will be created.
+- `spec.version` - is the name of the KafkaVersion CR. Here, a ConnectCluster of version `3.6.1` will be created.
- `spec.replicas` - specifies the number of ConnectCluster workers.
- `spec.connectorPlugins` - is the name of the KafkaConnectorVersion CR. Here, mongodb, mysql, postgres, and jdbc connector-plugins will be loaded to the ConnectCluster worker nodes.
- `spec.kafkaRef` specifies the Kafka instance that the ConnectCluster will connect to. Here, the ConnectCluster will connect to the Kafka instance named `kafka-quickstart` in the `demo` namespace.
-- `spec.terminationPolicy` specifies what KubeDB should do when a user try to delete Kafka CR. Termination policy `Delete` will delete the database pods, secret when the Kafka CR is deleted.
+- `spec.terminationPolicy` specifies what KubeDB should do when a user tries to delete the ConnectCluster CR. Termination policy `WipeOut` will delete the pods and secrets when the ConnectCluster CR is deleted.
+
+## N.B:
+1. If replicas are set to 1, the ConnectCluster will run in standalone mode; you can't scale replicas up after provisioning the cluster.
+2. If replicas are set to more than 1, the ConnectCluster will run in distributed mode.
+3. If you want to run the ConnectCluster in distributed mode with 1 replica, you must set the `CONNECT_CLUSTER_MODE` environment variable to `distributed` in the pod template, as shown below:
+```yaml
+spec:
+ podTemplate:
+ spec:
+ containers:
+ - name: connect-cluster
+ env:
+ - name: CONNECT_CLUSTER_MODE
+ value: distributed
+```
-Let's create the Kafka CR that is shown above:
+Before creating the ConnectCluster, you have to deploy a `Kafka` cluster first. To deploy a Kafka cluster, follow the [Kafka Quickstart](/docs/guides/kafka/quickstart/overview/kafka/index.md) guide. Let's assume a Kafka cluster named `kafka-quickstart` is already deployed using KubeDB.
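+
+You can check that the Kafka instance is ready before proceeding:
+
+```bash
+$ kubectl get kafka -n demo kafka-quickstart
+```
+
+Wait until its `STATUS` column shows `Ready`.
+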
+Let's create the ConnectCluster CR that is shown above:
```bash
$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/kafka/quickstart/overview/connectcluster/yamls/connectcluster.yaml
@@ -131,7 +163,7 @@ connectcluster-quickstart kafka.kubedb.com/v1alpha1 3.6.1 Ready
```
-Describe the connectcluster object to observe the progress if something goes wrong or the status is not changing for a long period of time:
+Describe the `ConnectCluster` object to observe the progress if something goes wrong or the status is not changing for a long period of time:
```bash
$ kubectl describe connectcluster -n demo connectcluster-quickstart
@@ -334,6 +366,78 @@ secret/connectcluster-quickstart-connect-cred kubernetes.io/basic-auth 2
- `{ConnectCluster-Name}-{alias}-cert` - the certificate secrets which hold `tls.crt`, `tls.key`, and `ca.crt` for configuring the ConnectCluster instance if TLS is enabled.
- `{ConnectCluster-Name}-config` - the default configuration secret created by the operator.
+### Create Connectors
+
+To create a connector, you can use the Kafka Connect REST API. However, the KubeDB operator implements a `Connector` CRD so that the specification of a connector can be defined declaratively. Create a `Connector` CR to create a connector. See the [Connector](/docs/guides/kafka/concepts/connector.md) concept page for details.
+
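+For comparison, creating the same connector by hand would mean calling the Kafka Connect REST API directly (a hypothetical sketch; the service address, port `8083`, and credentials depend on your cluster):
+
+```bash
+$ curl -X POST -u "<user>:<password>" \
+    -H "Content-Type: application/json" \
+    http://connectcluster-quickstart.demo.svc:8083/connectors \
+    -d '{"name": "mongodb-source-connector", "config": {"connector.class": "com.mongodb.kafka.connect.MongoSourceConnector", "tasks.max": "1"}}'
+```
+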
+At first, we will create a `config.properties` file containing the required configuration settings. We are using the `mongodb-source` connector here. You can use any other connector as per your requirements.
+
+```bash
+$ cat config.properties
+
+ connector.class=com.mongodb.kafka.connect.MongoSourceConnector
+ tasks.max=1
+ connection.uri=mongodb://root:Bjjx1fY*l2BeDuZj@mg-rep.demo.svc:27017/
+ topic.prefix=mongo
+ database=mongodb
+ collection=source
+ poll.max.batch.size=1000
+ poll.await.time.ms=5000
+ heartbeat.interval.ms=3000
+ offset.partition.name=mongo-source
+ startup.mode=copy_existing
+ publish.full.document.only=true
+ key.ignore=true
+ value.converter=org.apache.kafka.connect.json.JsonConverter
+ value.converter.schemas.enable=false
+```
+
+Now, we will create a secret containing the `config.properties` file.
+
+```bash
+$ kubectl create secret generic mongodb-source-config --from-file=./config.properties -n demo
+```
+
+Now, we will use this secret to create a `Connector` CR.
+
+```yaml
+apiVersion: kafka.kubedb.com/v1alpha1
+kind: Connector
+metadata:
+ name: mongodb-source-connector
+ namespace: demo
+spec:
+ configSecret:
+ name: mongodb-source-config
+ connectClusterRef:
+ name: connectcluster-quickstart
+ namespace: demo
+ terminationPolicy: WipeOut
+```
+
+Here,
+
+- `spec.configSecret` - is the name of the secret containing the connector configuration.
+- `spec.connectClusterRef` - is the name of the ConnectCluster instance that the connector will run on. This is an `AppBinding` reference to the ConnectCluster instance.
+- `spec.terminationPolicy` - specifies what KubeDB should do when a user tries to delete the `Connector` CR. Termination policy `WipeOut` will delete the connector from the ConnectCluster when the `Connector` CR is deleted. If you want to keep the connector running after deleting the `Connector` CR, you can set the termination policy to `Delete`.
+
+Now, create the `Connector` CR that is shown above:
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/kafka/quickstart/overview/connectcluster/yamls/mongodb-source-connector.yaml
+connector.kafka.kubedb.com/mongodb-source-connector created
+```
+
+```bash
+$ kubectl get connector -n demo -w
+
+NAME TYPE CONNECTCLUSTER STATUS AGE
+mongodb-source-connector kafka.kubedb.com/v1alpha1 connectcluster-quickstart Pending 0s
+mongodb-source-connector kafka.kubedb.com/v1alpha1 connectcluster-quickstart Pending 0s
+.
+.
+mongodb-source-connector kafka.kubedb.com/v1alpha1 connectcluster-quickstart Running 1s
+```
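+
+Once the connector is `Running`, you can also inspect it through the Connect REST API from inside a worker pod (a sketch; the pod name, user, and password are placeholders for your setup):
+
+```bash
+$ kubectl exec -it -n demo connectcluster-quickstart-0 -- \
+    curl -u "<user>:<password>" http://localhost:8083/connectors/mongodb-source-connector/status
+```
+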
## Cleaning up
@@ -354,7 +458,7 @@ namespace "demo" deleted
If you are just testing some basic functionalities, you might want to avoid additional hassles due to some safety features that are great for the production environment. You can follow these tips to avoid them.
-1 **Use `terminationPolicy: Delete`**. It is nice to be able to resume the cluster from the previous one. So, we preserve auth `Secrets`. If you don't want to resume the cluster, you can just use `spec.terminationPolicy: WipeOut`. It will clean up every resource that was created with the ConnectCluster CR. For more details, please visit [here](/docs/guides/kafka/concepts/kafka.md#specterminationpolicy).
+1. **Use `terminationPolicy: Delete`**. It is nice to be able to resume the cluster from the previous one. So, we preserve auth `Secrets`. If you don't want to resume the cluster, you can just use `spec.terminationPolicy: WipeOut`. It will clean up every resource that was created with the ConnectCluster CR. For more details, please visit [here](/docs/guides/kafka/concepts/connectcluster.md#specterminationpolicy).
## Next Steps
diff --git a/docs/guides/kafka/quickstart/overview/connectcluster/yamls/mongodb-source-connector.yaml b/docs/guides/kafka/quickstart/overview/connectcluster/yamls/mongodb-source-connector.yaml
new file mode 100644
index 0000000000..37933b4fee
--- /dev/null
+++ b/docs/guides/kafka/quickstart/overview/connectcluster/yamls/mongodb-source-connector.yaml
@@ -0,0 +1,12 @@
+apiVersion: kafka.kubedb.com/v1alpha1
+kind: Connector
+metadata:
+ name: mongodb-source-connector
+ namespace: demo
+spec:
+ configSecret:
+ name: mongodb-source-config
+ connectClusterRef:
+ name: connectcluster-quickstart
+ namespace: demo
+ terminationPolicy: WipeOut
\ No newline at end of file
diff --git a/docs/guides/kafka/quickstart/overview/kafka/index.md b/docs/guides/kafka/quickstart/overview/kafka/index.md
index b0e0a325ca..5ce972871e 100644
--- a/docs/guides/kafka/quickstart/overview/kafka/index.md
+++ b/docs/guides/kafka/quickstart/overview/kafka/index.md
@@ -39,7 +39,7 @@ demo Active 9s
> Note: YAML files used in this tutorial are stored in [guides/kafka/quickstart/overview/yamls](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/guides/kafka/quickstart/overview/yamls) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
-> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/quickstart/overview/index.md#tips-for-testing).
+> We have designed this tutorial to demonstrate a production setup of KubeDB managed Apache Kafka. If you just want to try out KubeDB, you can bypass some safety features following the tips [here](/docs/guides/kafka/quickstart/overview/kafka/index.md#tips-for-testing).
## Find Available StorageClass
@@ -58,6 +58,8 @@ Here, we have `standard` StorageClass in our cluster from [Local Path Provisione
When you install the KubeDB operator, it registers a CRD named [KafkaVersion](/docs/guides/kafka/concepts/kafkaversion.md). The installation process comes with a set of tested KafkaVersion objects. Let's check available KafkaVersions by,
```bash
+$ kubectl get kfversion
+
NAME VERSION DB_IMAGE DEPRECATED AGE
3.3.2 3.3.2 ghcr.io/appscode-images/kafka-kraft:3.3.2 26h
3.4.1 3.4.1 ghcr.io/appscode-images/kafka-kraft:3.4.1 26h
@@ -407,7 +409,7 @@ If you are just testing some basic functionalities, you might want to avoid addi
## Next Steps
-- [Quickstart Kafka](/docs/guides/kafka/quickstart/overview/index.md) with KubeDB Operator.
+- [Quickstart Kafka](/docs/guides/kafka/quickstart/overview/kafka/index.md) with KubeDB Operator.
- Kafka Clustering supported by KubeDB
- [Combined Clustering](/docs/guides/kafka/clustering/combined-cluster/index.md)
- [Topology Clustering](/docs/guides/kafka/clustering/topology-cluster/index.md)