Kubernetes

What is Kubernetes (K8s)?

Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform used for automating the deployment, scaling, and management of containerized applications. Containers are lightweight, portable, and consistent environments that encapsulate an application and its dependencies. Kubernetes provides a robust framework for managing these containers in a clustered environment.

Key features of Kubernetes include:

  1. Container Orchestration: Kubernetes automates the deployment, scaling, and management of containerized applications, allowing them to run reliably and at scale.
  2. Cluster Management: It organizes containers into clusters, which are groups of nodes that work together. A cluster consists of a control plane (historically called the master node) that manages the cluster and worker nodes where containers run.
  3. Scaling: Kubernetes enables automatic scaling of applications based on demand. It can scale applications up or down by adjusting the number of running instances (replicas) to meet varying workloads.
  4. Load Balancing: It provides built-in load balancing to distribute network traffic across multiple instances of an application, ensuring efficient use of resources and improving application availability.
  5. Self-Healing: Kubernetes can detect and replace failed containers or nodes, ensuring that applications remain available and responsive.
  6. Declarative Configuration: Users can define the desired state of their applications and infrastructure using YAML or JSON configuration files. Kubernetes then works to make the actual state match the desired state.
  7. Service Discovery and Networking: Kubernetes manages communication between containers within the cluster and facilitates service discovery by assigning a unique DNS name to each service.
  8. Rolling Updates and Rollbacks: It supports rolling updates, allowing seamless deployment of new versions of applications without downtime. If issues arise, Kubernetes enables easy rollback to a previous version (see the short command sketch after this list).
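
For example, scaling, rolling updates, and rollbacks can all be driven from the command line. This is a minimal sketch that assumes a Deployment named my-app already exists and that its container is also named my-app (both names are illustrative):

kubectl scale deployment my-app --replicas=5            # scale out to 5 replicas
kubectl set image deployment/my-app my-app=my-app:2.0   # rolling update to a new image
kubectl rollout status deployment/my-app                # watch the rollout progress
kubectl rollout undo deployment/my-app                  # roll back if the new version misbehaves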

Kubernetes has become a standard for container orchestration in the industry and is widely adopted for deploying and managing containerized applications in various environments, including on-premises data centers, public clouds, and hybrid clouds.

Prerequisites for Kubernetes (K8s)

Before you start with Kubernetes (K8s), it’s essential to ensure that you have certain prerequisites in place. Here are the common prerequisites for setting up and working with Kubernetes:

Container Runtime:

Docker: Kubernetes relies on a container runtime to run containers. Docker is the most familiar tool for building images and running containers locally, and it works well with Minikube; note that production clusters typically use containerd or CRI-O, since Kubernetes 1.24 removed the built-in Docker shim. Ensure that a container runtime is installed and running on your system.
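
A quick way to confirm the runtime is available (assuming Docker here):

docker version                # client and daemon versions should both be reported
docker run --rm hello-world   # runs a test container and removes it afterwards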

Kubernetes CLI (kubectl):

Install kubectl, the command-line tool for interacting with a Kubernetes cluster. Installation packages and instructions for each operating system are available in the official Kubernetes documentation at kubernetes.io.
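
Once installed, verify that kubectl is on your PATH; the second command will only succeed after you have a cluster configured:

kubectl version --client   # prints the client version only
kubectl cluster-info       # reports the control plane endpoint once a cluster is configured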

Kubernetes Cluster:

Set up a Kubernetes cluster. This can be a local cluster using Minikube, a cluster in the cloud (such as Google Kubernetes Engine, Azure Kubernetes Service, or Amazon EKS), or an on-premises cluster.
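
For local experimentation, Minikube is the quickest option; a minimal sketch:

minikube start      # creates a single-node local cluster
kubectl get nodes   # the minikube node should report STATUS Ready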

Hypervisor (for Minikube):

If you are using Minikube, it requires a hypervisor to run the virtual machine. Common choices include VirtualBox, Hyper-V, or KVM depending on your operating system.
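
Minikube lets you pick the driver explicitly; choose the one that matches your operating system and installed hypervisor, for example:

minikube start --driver=virtualbox   # or --driver=hyperv / --driver=kvm2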

API Access:

Obtain access to a Kubernetes cluster, typically in the form of a kubeconfig file with credentials for the API server. If you are using a cloud provider, follow their documentation to create a new cluster and download its kubeconfig. If you are setting up your own cluster, tools like kubeadm, kops, or other distributions can help.
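
As an illustration, a kubeadm-based setup boils down to initializing the control plane and joining the workers; the pod CIDR shown is a common choice, not a requirement, and the join values are printed by kubeadm init:

# on the control plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# on each worker node, using the values printed by kubeadm init
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>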

Network Configuration:

Ensure that the nodes in your cluster can communicate with each other. Kubernetes relies on a CNI network plugin (often implemented as a network overlay) to enable communication between pods across nodes. Common choices include Calico, Flannel, and Weave Net.
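
A network plugin is usually installed by applying its manifest to the cluster; the URL below is a placeholder, so use the one published in your chosen plugin's documentation:

kubectl apply -f <cni-plugin-manifest-url>   # e.g. the Calico or Flannel manifest from the project's docs
kubectl get pods -n kube-system              # the plugin's pods should reach the Running state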

Firewall Configuration:

If you are running your own cluster, ensure that the necessary ports are open for Kubernetes components to communicate. The default port is 6443 for the Kubernetes API server.
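
On a Linux control plane node using firewalld, opening the common control plane ports looks roughly like this (adjust to your distribution and firewall tool):

sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client and peer communication
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
sudo firewall-cmd --reload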

Linux Kernel Modules (for Linux):

On Linux systems, some kernel modules may be required for networking and containerization. Ensure that necessary modules like br_netfilter and overlay are loaded.
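
The commands below, adapted from the typical kubeadm installation steps, load the modules and persist the settings across reboots:

sudo modprobe overlay
sudo modprobe br_netfilter
# load the modules automatically on boot
echo -e "overlay\nbr_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
# allow iptables to see bridged traffic
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system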

Resource Requirements:

Ensure that your system meets the hardware and resource requirements for running a Kubernetes cluster. This includes having enough CPU, memory, and storage resources available.
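
A quick way to check what a Linux node has available (as a rough guide, a kubeadm control plane node needs at least 2 CPUs and 2 GB of RAM):

nproc     # number of CPU cores
free -h   # available memory
df -h /   # free disk space on the root filesystem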

DNS Resolution:

Ensure that DNS resolution is working correctly in your environment. Kubernetes relies on DNS for service discovery within the cluster.
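
Once the cluster is up, you can verify in-cluster DNS with a throwaway pod; the pod name and image here are just examples:

kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default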

Time Synchronization:

Ensure that the system clocks on all nodes in the cluster are synchronized. Time skew between nodes can cause issues in a Kubernetes cluster.
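
On systemd-based Linux nodes you can check and enable time synchronization like this (chrony or systemd-timesyncd are common choices):

timedatectl status              # "System clock synchronized: yes" is what you want
sudo timedatectl set-ntp true   # enable NTP-based synchronization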

Before you begin, carefully review the documentation of the specific tools and technologies you are using, as there may be additional requirements or considerations based on your chosen setup.

Deploying a small application on Kubernetes

Deploying a small application on Kubernetes typically involves several steps, including creating Kubernetes manifests, deploying the application, and verifying its status. Below is a simple example of deploying a small web application using Kubernetes. For the sake of simplicity, we’ll deploy a basic “Hello World” web server using a Docker container.

Step 1: Create a Docker Image

Assuming you have a simple web application code in a directory, create a Dockerfile in that directory with the following content:

# Dockerfile
FROM nginx:alpine
COPY index.html /usr/share/nginx/html

Create an index.html file in the same directory with your desired content.
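
For example, a minimal index.html could look like this:

<!-- index.html -->
<!DOCTYPE html>
<html>
  <head><title>My Small App</title></head>
  <body><h1>Hello World from Kubernetes!</h1></body>
</html>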

Build the Docker image:

docker build -t my-small-app:1.0 .
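
You can optionally test the image locally before involving Kubernetes:

docker run --rm -p 8080:80 my-small-app:1.0   # then browse to http://localhost:8080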

Step 2: Push Docker Image (Optional)

If you’re deploying to a remote Kubernetes cluster, you may need to push the Docker image to a container registry. If you’re using Docker Hub, you can push the image:

docker tag my-small-app:1.0 your-docker-username/my-small-app:1.0
docker push your-docker-username/my-small-app:1.0
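
If you are deploying to a local Minikube cluster instead of using a registry, you can make the locally built image available to the cluster node (a Minikube-specific convenience):

minikube image load my-small-app:1.0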

Step 3: Create Kubernetes Deployment Manifest

Create a file named deployment.yaml with the following content:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-small-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-small-app
  template:
    metadata:
      labels:
        app: my-small-app
    spec:
      containers:
      - name: my-small-app
        image: my-small-app:1.0  # or your-docker-username/my-small-app:1.0 if using a registry
        ports:
        - containerPort: 80

Step 4: Create Kubernetes Service Manifest

Create a file named service.yaml with the following content:

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-small-app-service
spec:
  selector:
    app: my-small-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer  # Use NodePort for local testing

Step 5: Deploy to Kubernetes

Apply the deployment and service manifests to your Kubernetes cluster:

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
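
You can then check that the rollout succeeded and that both replicas are running:

kubectl rollout status deployment/my-small-app-deployment
kubectl get pods -l app=my-small-app
kubectl get svc my-small-app-service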

Step 6: Access the Application

If you are using the LoadBalancer service type, it might take a moment for an external IP to be assigned:

kubectl get svc my-small-app-service --watch

Once you see an external IP, open a web browser and access the application:

http://<EXTERNAL_IP>

Replace <EXTERNAL_IP> with the actual external IP assigned to the service.
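
On a local cluster such as Minikube, a LoadBalancer service usually stays in the Pending state because there is no cloud load balancer; in that case you can reach the service in one of these ways instead:

minikube service my-small-app-service                     # opens the service via a node port (Minikube only)
kubectl port-forward svc/my-small-app-service 8080:80     # then browse to http://localhost:8080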

That’s it! You’ve deployed a small web application on Kubernetes. This example is minimal, and in a real-world scenario, you might need to consider additional configurations, such as ingress controllers, secrets management, and more depending on your application’s requirements.
