Last month, I was tempted to develop my own QR code scanner as a ReactJS web application that runs in HTML5-supported browsers. So far, I had not been able to find any suitable components or samples in a public repository for me to use.
So let me share what I have done in this article. For those of you who would like to get the code, feel free to fork it at https://github.com/Cyder-SG/reactjs-qrcode-scanner.
Basically, there are two parts to the solution:
The following illustrates the solution flow, which I explain…
I had a K8S cluster that was initially created through the Rancher GUI. Creating a new custom K8S cluster with the Rancher GUI was pleasantly easy. Recently, however, I hit an issue with the Rancher GUI itself that forced me to create a new Rancher node with an empty database. That is something no one wants to experience; trust me, it is a real nightmare.
Well, I managed to build a new Rancher node with the GUI, but now it was empty.
What about my existing K8S clusters? Here are a few issues that I had:
We have Kafka services deployed as a StatefulSet in our K8S cluster, and we need to expose them to external clients acting as consumers and producers of messages.
Our Kafka services are powered by three (3) brokers and three (3) zookeepers.
We would like to expose our Kafka brokers to external clients using Load Balancer.
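As a sketch of that approach (the Service and label names here are assumptions, not necessarily the ones in our manifests), a per-broker LoadBalancer Service could look like this:

```yaml
# Hypothetical LoadBalancer Service for one broker (kafka-0).
# One such Service per broker lets external clients reach each
# broker directly, which Kafka requires to talk to partition leaders.
apiVersion: v1
kind: Service
metadata:
  name: kafka-0-external
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: kafka-0   # targets exactly one pod
  ports:
    - name: external
      port: 9094
      targetPort: 9094
```

The `statefulset.kubernetes.io/pod-name` label is set automatically by the StatefulSet controller, so selecting on it is the standard way to address a single broker pod.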
Kafka consists of Records, Topics, Consumers, Producers, Brokers, Logs, Partitions, and Clusters. A Kafka Topic is a stream of records. A topic has a Log which is the topic’s storage on disk. A Topic Log is broken up into partitions and segments. The Kafka Producer…
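To make the topic/partition relationship concrete, here is a hedged sketch using the stock Kafka CLI tools (the topic name and broker address are assumptions):

```shell
# Create a topic whose log is split into 3 partitions,
# each replicated across all 3 brokers.
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic demo-topic \
  --partitions 3 \
  --replication-factor 3

# Inspect which broker leads each partition.
kafka-topics.sh --describe \
  --bootstrap-server localhost:9092 \
  --topic demo-topic
```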
This is the sixth article in our Running Kafka in Kubernetes publication. Refer to https://medium.com/kafka-in-kubernetes/accessing-kafka-broker-87aa7928a6e9 to see the previous article.
Failover is important in building a distributed system. Our Kafka workload contains 3 replicas and our Kubernetes runs on 2 nodes. In this article, you will see how we perform failover testing.
Let’s set the Kafka replica count to 2.
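Changing a StatefulSet's replica count is a single kubectl command; a sketch, assuming the StatefulSet and its pod label are named `kafka` and `app=kafka`:

```shell
# Scale the Kafka StatefulSet to 2 replicas to simulate
# losing a broker, then watch the pods reschedule.
kubectl scale statefulset kafka --replicas=2
kubectl get pods -l app=kafka -w
```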
This is the fifth article in our Running Kafka in Kubernetes publication. Refer to https://medium.com/kafka-in-kubernetes/deploying-kafka-broker-cluster-5ba2790fdb5b to see the previous article.
Without testing, we can never be sure our Kafka setup is working. In this article, you will see how we produce and consume Kafka messages both from within the cluster and from outside it.
This section is about accessing Kafka topics from a Pod inside the same K8S cluster.
To access the Kafka broker from inside the cluster, we created a Kafka-CLI Docker image.
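As an illustration of the in-cluster test (the pod, service, and topic names are assumptions), one might exec into the CLI pod and use the stock console tools:

```shell
# Open a shell in the utility pod that carries the Kafka CLI tools.
kubectl exec -it kafka-cli -- /bin/bash

# Produce a few messages (type them in, then Ctrl-C to exit).
kafka-console-producer.sh \
  --bootstrap-server kafka-headless:9092 \
  --topic test-topic

# Consume them back from the beginning of the topic log.
kafka-console-consumer.sh \
  --bootstrap-server kafka-headless:9092 \
  --topic test-topic --from-beginning
```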
This is the fourth article in our Running Kafka in Kubernetes publication. Refer to https://medium.com/kafka-in-kubernetes/deploying-zookeeper-cluster-3acdcc7ed340 to see the previous article.
Deploying Kafka in Kubernetes is a challenge, especially when we need to run multiple brokers and separate the connections of internal and external applications.
As of when this article is published, there is no official Kafka Docker image published by the Apache Kafka community. There is one published by Confluent.io, but for the purpose of this exercise and demo, let's not use it.
We created our own Docker image. You can refer to the Dockerfile here
The main logic of starting the Kafka broker…
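The heart of that internal/external separation is usually the listener configuration; here is a hedged sketch of the relevant `server.properties` entries (the ports and hostnames are assumptions, not our exact values):

```properties
# Two named listeners: one for traffic inside the cluster,
# one for external clients coming through the load balancer.
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://kafka-0.kafka-headless:9092,EXTERNAL://broker0.example.com:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

The advertised addresses matter because Kafka hands them back to clients on the first connection; internal clients must receive in-cluster DNS names while external clients must receive the load balancer addresses.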
This is the third article in our Running Kafka in Kubernetes publication. Refer to https://medium.com/kafka-in-kubernetes/automating-storage-provisioning-41034e570928 to see the previous article.
Kafka uses Zookeeper to manage service discovery for the Kafka brokers that form the cluster. Zookeeper sends topology changes to Kafka, so each node in the cluster knows when a new broker joins, a broker dies, a topic is removed, a topic is added, and so on. In short, Zookeeper is the manager of Kafka's operation.
Zookeeper is deployed using YAML. You can refer to https://raw.githubusercontent.com/fernandocyder/k8s-practice/master/04.kafka-expose-service/01.zookeeper.yaml to see how we deploy Zookeeper.
There are a few key points that make…
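For orientation, here is a minimal sketch of one piece such a Zookeeper deployment typically needs (the names are assumptions; the linked YAML is authoritative):

```yaml
# A headless Service gives each Zookeeper pod a stable DNS name,
# which the ensemble members use to find each other.
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-headless
spec:
  clusterIP: None
  selector:
    app: zookeeper
  ports:
    - name: client
      port: 2181
    - name: peer
      port: 2888
    - name: leader-election
      port: 3888
```

Ports 2888 and 3888 are the standard Zookeeper peer and leader-election ports; without the stable per-pod DNS names the ensemble members could not address one another.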
This is the second article in our Running Kafka in Kubernetes publication. Refer to https://medium.com/kafka-in-kubernetes/provision-the-k8s-cluster-5b220a8ff3d3 to see the previous article.
To automate storage provisioning in Kubernetes, we deploy our application as a StatefulSet paired with a Volume Claim Template. The backbone of a Volume Claim Template is the Storage Class.
A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators.
In https://kubernetes.io/docs/concepts/storage/storage-classes/, we see a few storage class provisioners. The only issue here is that…
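As a hedged sketch of how the pieces fit together (the class name, provisioner, and sizes are assumptions), a StorageClass plus a matching `volumeClaimTemplates` entry might look like this:

```yaml
# StorageClass backed by an (assumed) NFS provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: example.com/nfs   # hypothetical provisioner name
reclaimPolicy: Retain
---
# Fragment from a StatefulSet spec: one PVC per replica, created on demand.
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: nfs-storage
      resources:
        requests:
          storage: 10Gi
```

Each replica of the StatefulSet gets its own PersistentVolumeClaim stamped out from the template, which is what makes the storage provisioning automatic.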
This article is the first one in the Running Kafka in Kubernetes publication. In this article, you will see how to set up a new Rancher cluster with 2 nodes and configure an NFS mount on each of those nodes.
We use EC2 in Oregon region for all our servers.
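For reference, the NFS client setup on each node boils down to a few commands; a sketch assuming Ubuntu EC2 nodes and a hypothetical NFS server address:

```shell
# Install the NFS client tools (Ubuntu/Debian).
sudo apt-get update && sudo apt-get install -y nfs-common

# Mount the (assumed) NFS export on a local path.
sudo mkdir -p /mnt/nfs
sudo mount -t nfs 10.0.0.10:/export /mnt/nfs

# Persist the mount across reboots.
echo "10.0.0.10:/export /mnt/nfs nfs defaults 0 0" | sudo tee -a /etc/fstab
```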