
Setting up a Kubernetes Cluster with Kubeadm

Scenario

As a Kubernetes administrator, you need to set up a new Kubernetes cluster from scratch using kubeadm. This cluster will host critical applications, so a proper and robust setup is essential. You will configure two nodes: one control plane node and one worker node.

Requirements

  1. Prepare the Nodes:

    • Ensure you have at least two machines (VMs or physical) running a compatible Linux distribution (e.g., Ubuntu 22.04, CentOS 7/8).
      • For local VMs (MacBook M-series): You can use free virtualization software such as UTM, Multipass, or VirtualBox (all of which require ARM-based Linux guests, e.g., Ubuntu for ARM) to create the virtual machines. These tools run Linux VMs efficiently on M-series chips.
      • Alternatively, use Cloud VMs: If local virtualization is not feasible or preferred, you can provision two virtual machines on any cloud provider (e.g., AWS EC2, Google Cloud Compute Engine, Azure VMs) running a compatible Linux distribution.
    • Each node must have at least 2GB of RAM and 2 CPUs.
    • Disable swap on all nodes.
    • Install a container runtime (e.g., containerd) on all nodes.
    • Install kubelet, kubeadm, and kubectl on all nodes.
  2. Initialize the Control Plane:

    • Initialize the Kubernetes control plane on the designated control plane node using kubeadm.
    • Ensure the Pod Network Add-on (e.g., Calico or Flannel) is installed.
  3. Join the Worker Node:

    • Join the worker node to the cluster using the kubeadm join command generated during control plane initialization.
  4. Verify Cluster Health:

    • Confirm that all nodes are in the Ready state.
    • Verify that all core Kubernetes components (pods in kube-system namespace) are running.

Acceptance Criteria

  • A functional Kubernetes cluster with one control plane node and one worker node is established.
  • kubectl get nodes shows both nodes in Ready status.
  • kubectl get pods -n kube-system shows all system pods running.
  • You can deploy a simple application (e.g., Nginx) and access it.
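
One way to confirm these criteria from the control plane node once the cluster is up (a sketch; the nginx deployment name and NodePort exposure are illustrative choices):

    kubectl get nodes                          # both nodes should be Ready
    kubectl get pods -n kube-system            # all system pods should be Running
    kubectl create deployment nginx --image=nginx
    kubectl expose deployment nginx --port=80 --type=NodePort
    kubectl get svc nginx                      # note the assigned NodePort
    curl http://<worker-node-ip>:<node-port>   # should return the Nginx welcome page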

Resources

  • Installing kubeadm
  • Creating a cluster with kubeadm
  • Container Runtimes
  • Calico Installation Guide

Possible Ways to Implement

  • Disable Swap: Use sudo swapoff -a and remove the swap entry from /etc/fstab (sketched below).
  • Install Containerd: Follow the official containerd documentation for your OS (sketched below).
  • Install Kubeadm, Kubelet, Kubectl: Add the Kubernetes apt/yum repositories and install the packages (sketched below).
  • Kubeadm Init: Use sudo kubeadm init with appropriate flags (e.g., --pod-network-cidr) (sketched below).
  • Pod Network Add-on: Apply the YAML manifest for your chosen CNI (e.g., kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml).
  • Kubeadm Join: Execute the command provided by kubeadm init on the worker node (sketched below).
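
A minimal node-preparation sketch, assuming Ubuntu 22.04 with apt and containerd as the runtime (package names and file paths may differ on other distributions). Run on every node:

    # Disable swap now and keep it off across reboots
    sudo swapoff -a
    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

    # Kernel modules and sysctls needed for Kubernetes networking
    sudo modprobe overlay
    sudo modprobe br_netfilter
    printf '%s\n' overlay br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    printf '%s\n' 'net.bridge.bridge-nf-call-iptables = 1' \
                  'net.bridge.bridge-nf-call-ip6tables = 1' \
                  'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/k8s.conf
    sudo sysctl --system

    # Install containerd and switch it to the systemd cgroup driver
    sudo apt-get update && sudo apt-get install -y containerd
    sudo mkdir -p /etc/containerd
    containerd config default | sudo tee /etc/containerd/config.toml
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    sudo systemctl restart containerd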
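
Installing kubelet, kubeadm, and kubectl from the upstream apt repository (a sketch for Ubuntu/Debian; the v1.30 repository path is only an example, substitute the minor version you plan to run as shown in the official "Installing kubeadm" guide). Run on every node:

    sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gpg
    sudo mkdir -p -m 755 /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
      sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | \
      sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl   # prevent accidental upgrades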
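
Initializing the control plane and installing the pod network add-on (a sketch; 192.168.0.0/16 is Calico's default pod CIDR, and the manifest URL is the one listed above). Run only on the control plane node:

    sudo kubeadm init --pod-network-cidr=192.168.0.0/16

    # Point kubectl at the new cluster for your regular user
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Install the Calico pod network add-on
    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml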
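
Joining the worker node (a sketch; the address, token, and hash are placeholders, use the exact command that kubeadm init printed, or regenerate it on the control plane). Run on the worker node:

    # If you lost the original join command, regenerate it on the control plane with:
    #   kubeadm token create --print-join-command
    sudo kubeadm join <control-plane-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>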

Real-World Significance

Setting up a Kubernetes cluster is the foundational task for any administrator. Mastering kubeadm provides a deep understanding of how Kubernetes components interact and how a cluster is bootstrapped. This knowledge is critical for troubleshooting, maintaining, and scaling production Kubernetes environments. It directly prepares you for the CKA exam's focus on cluster architecture and installation, enabling you to build and manage robust Kubernetes infrastructure.
