A year back, I got started with Docker containers. Two to three months down the road, I realized that I had learnt a revolutionary technology, for some of the following reasons:
- Quick Learning: Docker containers let me spin up an exploratory learning environment, on demand, for any tool, framework, or programming language I wanted to learn.
- On-demand Self-service Environments: Docker containers were used to set up on-demand, self-service dev/test environments. This was a huge productivity booster for developers and test engineers.
- Automated Deployments: With Docker containers, Jenkins, an artifact repository, and the like, I saw automated deployments being created on dev/testing/UAT servers.
That said, there were some areas where I felt the need for a tool to do the following:
- Manage App Clusters: Scale apps across multiple Docker containers to meet application requirements. For instance, say there is a data pipeline composed of Flume, Kafka, and Spark containers. To process larger datasets, the pipeline needs to scale, which could be achieved by running multiple containers for each of Flume, Kafka, and Spark. In other words, these apps need to run as clusters, with, say, a Flume cluster passing data to a Kafka cluster.
- Container Orchestration: Start and stop the multiple containers running an app to meet on-demand service requirements; for instance, starting and stopping a Jenkins cluster to run CI jobs as required.
- Component Repackaging: Sometimes I felt like repackaging existing apps and starting them together to test different app configurations.
Requirements such as these can easily be fulfilled by Kubernetes.
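To make the scaling point concrete, here is a minimal sketch of how such a cluster could be declared in Kubernetes. It uses a ReplicationController (discussed below) to keep three Kafka containers running; the names, labels, and image are illustrative assumptions, not taken from any specific setup.

```yaml
# Sketch only: keeps three Kafka pods running as a small cluster.
# The name, labels, image, and port are hypothetical examples.
apiVersion: v1
kind: ReplicationController
metadata:
  name: kafka-rc
spec:
  replicas: 3          # desired number of Kafka containers
  selector:
    app: kafka         # manage all pods carrying this label
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: kafka:latest   # placeholder image name
        ports:
        - containerPort: 9092
```

Scaling the pipeline to a larger dataset then becomes a matter of changing `replicas`, rather than manually starting more containers.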
As I deep-dived into the world of Kubernetes, I realized that it is one of the coolest tools I have come across in the recent past, and one that is sure to be a feather in the cap for DevOps professionals working with Docker containers.
The following are some of the key building blocks of Kubernetes that simplified the way I set up multiple containers together, maintained a specific number of replicas at any point in time, and exposed those containers as a service.
- Pods: A pod is a set of one or more co-located, co-managed containers that share the same network namespace and volumes. Each pod has an IP address associated with it, which can be used to access the app running within that pod.
- Services: Services provide a higher-level abstraction over pods. If one or more pods have to rely on other pods, that dependency is expressed via the service abstraction. Imagine a Kafka cluster exposed via a Kubernetes service.
- Replication Controllers: A replication controller maintains the desired number of pod replicas at any point in time. That essentially means that if one or more pods are terminated for any reason, the controller starts replacement pods to restore the desired count.
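These building blocks can be tied together in a short sketch: a Service that exposes whatever pods carry a given label, so clients talk to one stable address instead of individual pod IPs. The names, label, and port below are illustrative assumptions.

```yaml
# Sketch only: exposes all pods labelled app: kafka behind one
# stable virtual IP. Names and ports are hypothetical examples.
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  selector:
    app: kafka       # route traffic to pods with this label
  ports:
  - port: 9092       # port the service listens on
    targetPort: 9092 # port on the pods
```

Because the service matches pods by label, replicas started or replaced by a replication controller are picked up automatically, with no client-side reconfiguration.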
With the emerging trend of cloud-native apps, where containers and microservices form the key components, Kubernetes has been identified as a critical component for taking cloud-native app management to an altogether different level. As a matter of fact, CNCF.io has recognized Kubernetes as the first project serving cloud-native app requirements. And with Docker becoming the most popular containerization technology at this point in time, the combination of the two is only going to make both stronger wherever a cloud-native setup requires a container runtime and a container orchestration tool to work in tandem.