KEDA Autoscaling for Kafka Consumers

Roshan Khatri
KEDA, which stands for Kubernetes-based Event-Driven Autoscaler, is an open-source project that automatically scales Kubernetes deployments based on the number of incoming events or messages from a message queue or streaming platform. It can be used with any Kubernetes cluster and supports a wide range of event sources, such as Azure Event Hubs, Apache Kafka, and RabbitMQ. KEDA is designed to be highly customizable and flexible, with support for a variety of scaling triggers, including CPU usage, memory usage, custom metrics, and external event sources.
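As a rough illustration, a minimal KEDA ScaledObject that scales a Kafka consumer deployment on consumer-group lag might look like the sketch below; the deployment name, bootstrap address, consumer group, and topic are placeholders, not values from this blog.

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler            # hypothetical name
spec:
  scaleTargetRef:
    name: kafka-consumer                 # deployment to scale (placeholder)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: my-cluster-kafka-bootstrap:9092   # placeholder address
        consumerGroup: car-consumer-group                    # placeholder group
        topic: car-events                                    # placeholder topic
        lagThreshold: "10"               # scale out when consumer lag exceeds this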

Kafka Producer and Consumer

Roshan Khatri
In the earlier blog article we created a Strimzi Kafka cluster on Kubernetes. In this part of the blog we write a Kafka producer and consumer. All code used in this blog has been shared on GitHub. Producers send messages to Kafka topics and consumers read messages from those topics. For simplicity we have used Python for writing the Kafka producer and consumer. The producer uses car_id as the partition key.
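As a rough sketch of the partition-key idea (not the exact code from the linked GitHub repository), a producer built on the kafka-python library could key messages on car_id like this; the topic name, bootstrap address, and payload fields are assumptions:

# Minimal producer sketch using the kafka-python library.
# Topic name, bootstrap server, and payload fields are placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="my-cluster-kafka-bootstrap:9092",  # placeholder address
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"car_id": "car-42", "speed": 80}  # hypothetical payload
# Keying on car_id means every event for the same car lands on the same partition,
# so per-car ordering is preserved.
producer.send("car-events", key=event["car_id"], value=event)
producer.flush()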

Strimzi Kafka

Roshan Khatri
Strimzi is an open-source project that provides a set of resources for running Apache Kafka on Kubernetes. It offers features such as automatic creation and management of Apache Kafka clusters, a convenient way to run Kafka on a cloud-native platform, and improved resilience and scalability of Kafka clusters. Kubernetes excels at stateless apps; stateful apps, by contrast, require persistent storage so that data survives application or container restarts, which is what makes running Kafka on Kubernetes harder.
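For illustration only, a minimal Strimzi Kafka custom resource might look roughly like the following; the cluster name, replica counts, and storage choices are placeholders rather than the manifest from the earlier article.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster                     # placeholder cluster name
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral                  # fine for experiments; use persistent-claim for real data
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}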

Commands and Arguments

Roshan Khatri
Containers are meant to run a specific process; once the task is complete, the container exits. CMD in a Dockerfile defines the default command that runs inside the container when it starts. In a pod spec, command overrides the ENTRYPOINT defined in the Dockerfile, and args overrides the CMD field. Included in a pod or deployment spec, command and args look like this:

spec:
  containers:
    - name: ubuntu-sleeper
      image: ubuntu-sleeper
      command: ["sleep"]
      args: ["infinite"]

Getting help with YAMLs

Roshan Khatri
The built-in kubectl explain command is a helpful tool when you are unsure about the YAML for a Kubernetes object. Start with kubectl explain pod; navigating one level deeper into the pod looks like kubectl explain pod.spec, and one more level deeper like kubectl explain pod.spec.affinity, which prints the documentation and available fields at that path. Another super useful way to list every field nested below affinity is the --recursive flag, as in kubectl explain pod.spec.affinity --recursive.

Kubernetes Resource Requirements

Roshan Khatri
Resource requirements constrain the CPU and memory that containers can use on a Kubernetes cluster. They are added as requests and limits on the containers: requests are the resources a container asks the scheduler to reserve for it, while limits are the hard ceiling the cluster will allow the container to consume. A LimitRange can be used to set default requests and limits in a namespace, and a ResourceQuota limits the total resources at the namespace level.
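As a hedged illustration of the request and limit fields (the pod name, image, and numbers are arbitrary placeholders), a container spec might carry:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo              # placeholder name
spec:
  containers:
    - name: app
      image: nginx                 # placeholder image
      resources:
        requests:
          cpu: "250m"              # scheduler reserves a quarter of a CPU
          memory: "128Mi"
        limits:
          cpu: "500m"              # hard ceiling enforced at runtime
          memory: "256Mi"          # exceeding this gets the container OOM-killed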

Kubernetes Basics

Roshan Khatri
Writing Kubernetes manifests by hand is tedious and not always practical; one of the best ways to get a manifest is to have kubectl generate it for you. Some commands, like kubectl expose deployment, only work when the deployment is already present on the cluster, so it is wiser to stick with kubectl create service to generate service-related manifests. Other available generators can be discovered via kubectl create -h, which lists examples and help related to manifest generation.

MySQL Kubernetes Deployment

Roshan Khatri
MySQL is one of the most widely used backend servers for data persistence. It can be deployed on Kubernetes as a Deployment or a StatefulSet, and each approach has pros and cons of its own. A Deployment is usually the way to go for development use cases. Scaling up and down is not generally supported or recommended, since MySQL does not support horizontal scaling without complicated mechanisms. StatefulSets should be the way to go for production use cases.
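A minimal sketch of a single-replica MySQL StatefulSet is shown below; the names, image tag, Secret, and storage size are assumptions for illustration, not the manifest from this post.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql                          # placeholder name
spec:
  serviceName: mysql                   # assumes a headless Service named "mysql"
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret   # hypothetical Secret holding the root password
                  key: root-password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:                # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi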