Deploy on Kubernetes, with Kafka support

Learn how to deploy the Kafkorama Gateway on Kubernetes with built-in support for Apache Kafka, using a local Minikube setup.

This tutorial shows how to deploy the Kafkorama Gateway with Kafka support, together with an Apache Kafka broker, on a Kubernetes cluster.

Prerequisites

Before deploying Kafkorama Gateway, ensure that you have installed Minikube, a tool for quickly setting up local Kubernetes clusters.

Start Minikube as follows:

minikube start

Check the Kubernetes dashboard as follows:

minikube dashboard
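
To confirm that the local cluster is up before proceeding, you can also check its status:

minikube status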

Create Namespace

To organize the Kubernetes resources for Kafkorama, create a dedicated namespace named kafkorama.

First, create a manifest file kafkorama-namespace.yaml with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: kafkorama

Then, apply the manifest using:

kubectl apply -f kafkorama-namespace.yaml
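
You can verify that the namespace was created with:

kubectl get namespace kafkorama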

Deploy

Deploy a local Kafka

To enable communication between the Kafkorama Gateway and Apache Kafka, we'll deploy a minimal, single-node Kafka cluster using the following Kubernetes manifest:

apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  namespace: kafkorama
spec:
  ports:
    - port: 9092
      name: kafka-port
  selector:
    app: kafka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kafkorama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: apache/kafka:latest
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_NODE_ID
              value: "1"
            - name: KAFKA_PROCESS_ROLES
              value: "broker,controller"
            - name: KAFKA_LISTENERS
              value: "PLAINTEXT://:9092,CONTROLLER://:9093"
            - name: KAFKA_ADVERTISED_LISTENERS
              value: "PLAINTEXT://kafka-service.kafkorama.svc.cluster.local:9092"
            - name: KAFKA_CONTROLLER_LISTENER_NAMES
              value: "CONTROLLER"
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT"
            - name: KAFKA_CONTROLLER_QUORUM_VOTERS
              value: "1@127.0.0.1:9093"
            - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
              value: "1"
            - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR
              value: "1"
            - name: KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS
              value: "0"
            - name: KAFKA_NUM_PARTITIONS
              value: "3"
            - name: ALLOW_PLAINTEXT_LISTENER
              value: "yes"
            - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
              value: "true"
          volumeMounts:
            - name: data
              mountPath: /kafka
      volumes:
        - name: data
          emptyDir: {}

Save this manifest as kafka.yaml and run:

kubectl apply -f kafka.yaml
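
Before moving on, you can wait for the Kafka StatefulSet to become ready:

kubectl rollout status statefulset/kafka -n kafkorama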

Deploy a Kafkorama Gateway cluster

The following Kubernetes manifest deploys a single-node Kafkorama Gateway cluster. It also exposes the Gateway to clients and connects it to a local Kafka broker. In later sections, we’ll demonstrate how to scale the cluster manually or enable Kubernetes autoscaling.

---
apiVersion: v1
kind: Service
metadata:
  namespace: kafkorama
  name: kafkorama-cs
  labels:
    app: kafkorama
spec:
  type: LoadBalancer
  ports:
    - name: client-port
      port: 8888
      protocol: TCP
      targetPort: 8800
  selector:
    app: kafkorama
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafkorama
  namespace: kafkorama
spec:
  selector:
    matchLabels:
      app: kafkorama
  replicas: 1 # Desired number of cluster nodes
  template:
    metadata:
      labels:
        app: kafkorama
    spec:
      containers:
        - name: kafkorama-cluster
          imagePullPolicy: Always
          image: kafkorama/kafkorama-gateway:6.0.23
          env:
            - name: KAFKORAMA_GATEWAY_EXTRA_OPTS
              value: "-DMemory=128MB \
                -DLogLevel=INFO \
                -DX.ConnectionOffload=true \
                -Dbootstrap.servers=kafka-service.kafkorama.svc.cluster.local:9092 \
                -Dtopics=vehicles"
            - name: KAFKORAMA_GATEWAY_JAVA_GC_LOG_OPTS
              value: "-XX:+PrintCommandLineFlags -XX:+PrintGC -XX:+PrintGCDetails -XX:+DisableExplicitGC -Dsun.rmi.dgc.client.gcInterval=0x7ffffffffffffff0 -Dsun.rmi.dgc.server.gcInterval=0x7ffffffffffffff0 -verbose:gc"
          resources:
            requests:
              memory: "256Mi"
              cpu: "0.5"
          ports:
            - name: client-port
              containerPort: 8800
            - name: prometheus-port
              containerPort: 9988
          readinessProbe:
            tcpSocket:
              port: 8800
            initialDelaySeconds: 20
            failureThreshold: 5
            periodSeconds: 5
          livenessProbe:
            tcpSocket:
              port: 8800
            initialDelaySeconds: 10
            failureThreshold: 5
            periodSeconds: 5

This manifest includes:

  • A Service to expose Kafkorama Gateway to external clients.
  • A Deployment that runs one instance of Kafkorama Gateway.

We configure Kafkorama Gateway using the KAFKORAMA_GATEWAY_EXTRA_OPTS environment variable, which allows you to override default parameters from the Configuration Guide. In this example, we:

  • Set the Memory limit
  • Configure the log level to INFO
  • Enable connection offloading
  • Connect to the local Kafka broker and consume a specified Kafka topic

To deploy the cluster, save this manifest as kafkorama-cluster.yaml and run:

kubectl apply -f kafkorama-cluster.yaml
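
You can wait for the Gateway Deployment to finish rolling out with:

kubectl rollout status deployment/kafkorama -n kafkorama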

Namespace switch

Since the deployment uses the kafkorama namespace, switch to it with the following command:

kubectl config set-context --current --namespace=kafkorama

To switch back to the default namespace, run:

kubectl config set-context --current --namespace=default

Verify installation

Check that the kafkorama and kafka pods are running:

kubectl get pods 

The output should look similar to the following:

NAME                             READY   STATUS    RESTARTS   AGE
kafka-0                          1/1     Running   0          89s
kafkorama-6447f9c7cb-c9s8g       1/1     Running   0          66s

To view the logs of the Kafkorama pod, run:

kubectl logs kafkorama-6447f9c7cb-c9s8g

Test installation

To expose the LoadBalancer Service defined in the manifest above, run the minikube tunnel command in a separate terminal:

minikube tunnel

Then, check if the service is up and accessible:

kubectl get svc

The output should look similar to:

NAME            TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
kafka-service   ClusterIP      10.43.244.197   <none>         9092/TCP         2m8s
kafkorama-cs    LoadBalancer   10.43.237.196   127.0.0.1      8888:31735/TCP   105s

Locate the EXTERNAL-IP and PORT values for the kafkorama-cs service. In this example, the service is available at http://127.0.0.1:8888.

Open this URL in your browser. You should see a welcome page that includes a demo application under the Debug Console menu, which allows you to publish and consume real-time messages via the Kafkorama cluster.
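
You can also publish a test message directly to the vehicles Kafka topic. The sketch below uses the console producer shipped with the apache/kafka image; the /opt/kafka/bin path is an assumption and may differ between image versions:

kubectl exec -it kafka-0 -- \
  /opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server localhost:9092 --topic vehicles

Type a message and press Enter; it should appear for clients subscribed to the corresponding subject in the Debug Console.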

Scaling

Manual scaling up

In the example above, we deployed a cluster with a single Kafkorama Gateway instance. You can add more instances to the cluster by increasing the number of replicas. For example, to scale the cluster up to three members, run:

kubectl scale deployment kafkorama --replicas=3
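
You can watch the additional pods start with:

kubectl get pods -l app=kafkorama --watch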

Manual scaling down

If the load on your system decreases, you can reduce the number of cluster members by decreasing the number of replicas. For example, to scale the cluster down to two members, run:

kubectl scale deployment kafkorama --replicas=2

Autoscaling

Manual scaling is practical when the load on your system changes gradually. Otherwise, you can use the Kubernetes autoscaling feature.

Kubernetes can monitor system load, typically CPU usage, and automatically adjust the size of your Kafkorama cluster by modifying the replicas field.

For example, to scale the cluster up to a maximum of 5 members when CPU usage exceeds 50%, or down to a minimum of 3 members when CPU usage falls below 50%, run:

kubectl autoscale deployment kafkorama \
  --cpu-percent=50 --min=3 --max=5

Alternatively, you can use a YAML manifest:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  namespace: kafkorama
  name: kafkorama-autoscale # you can use any name here
spec:
  maxReplicas: 5
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafkorama 
  targetCPUUtilizationPercentage: 50

Save this to a file named kafkorama-autoscale.yaml and apply it with:

kubectl apply -f kafkorama-autoscale.yaml
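
Note that the HorizontalPodAutoscaler relies on resource metrics. On Minikube, you can enable the metrics-server addon if it is not already enabled:

minikube addons enable metrics-server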

You can view autoscaler status with:

kubectl get hpa
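
For more detail, including recent scaling events, you can describe the autoscaler:

kubectl describe hpa kafkorama-autoscale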

Note: When testing autoscaling, keep in mind that Kubernetes collects CPU usage data periodically, so scaling may not happen instantly; this delay is normal.

Uninstall

To remove the Kubernetes resources created for this deployment, run:

kubectl delete -f kafkorama-namespace.yaml
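
Deleting the namespace also removes every resource created inside it, including the Kafka broker, the Kafkorama Gateway cluster, the services, and the autoscaler. You can confirm that the namespace is gone (deletion may take a few moments) with:

kubectl get namespace kafkorama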

Then, switch back to the default namespace:

kubectl config set-context --current --namespace=default

Build realtime apps

Start by exploring the key concepts of Kafkorama.

Next, choose the appropriate SDK for your programming language or platform to build real-time apps that communicate with your Kafkorama Gateway cluster.

You can also use Kafka's APIs or tools to publish real-time messages to Kafka — these messages will be delivered to connected Kafkorama clients. Similarly, you can consume messages from Kafka that originate from clients connected to Kafkorama.
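
For example, while the deployment from this tutorial is still running, you can consume the vehicles topic from inside the Kafka pod with the console consumer bundled with the apache/kafka image; as before, the /opt/kafka/bin path is an assumption that may vary between image versions:

kubectl exec -it kafka-0 -n kafkorama -- \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 --topic vehicles --from-beginning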

Finally, to manage Kafka clusters, entitlement, and streaming APIs through a web interface, you can deploy Kafkorama Portal. It provides centralized control over your real-time infrastructure and simplifies operations and access control.

© 2025 MigratoryData. All rights reserved.