
Deploying Applications with Deployments

Scenario

As a Kubernetes Administrator for a rapidly growing tech company, you've been tasked with deploying a new stateless web application. The development team has provided a container image and requires that the application be deployed in a way that ensures high availability and can be easily scaled in the future. Your job is to use a Kubernetes Deployment to create a robust and reliable setup for this application.

Requirements

  1. Create the Deployment:

    • Create a new Deployment named web-frontend.
    • The Deployment must ensure that 3 replicas of the application Pod are running at all times.
    • The Pods must be created from the nginx:1.25-alpine container image.
    • Each container should have a CPU resource request of 200m (this will be important for later exercises).
  2. Verify the Deployment:

    • Confirm that the Deployment has successfully created and is managing 3 running and available Pods.
    • Inspect the labels of the Pods to ensure the Deployment's selector will correctly manage them.
  3. Expose and Access the Deployment:

    • Create a Service named web-frontend-svc to provide a stable endpoint for the web-frontend Deployment.
    • The Service must be of type NodePort to be accessible from outside the cluster for testing purposes.
    • The Service should listen on port 80 and forward traffic to the Pods' port 80.
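The requirements above can also be captured declaratively in a single manifest. This is a minimal sketch; the `app: web-frontend` label is an assumption (any label works, as long as the Deployment's selector and the Service's selector both match the Pod template):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  labels:
    app: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.25-alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m        # required for later HPA exercises
---
apiVersion: v1
kind: Service
metadata:
  name: web-frontend-svc
spec:
  type: NodePort
  selector:
    app: web-frontend        # must match the Pod template labels above
  ports:
  - port: 80
    targetPort: 80
```

Saving both objects in one file separated by `---` lets you create everything with a single `kubectl apply -f`.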

Acceptance Criteria

  • A Deployment object named web-frontend exists in the default namespace.
  • The Deployment's status shows 3/3 replicas as READY.
  • A Service named web-frontend-svc exists and is of type NodePort.
  • The Service correctly selects the Pods managed by the web-frontend Deployment.
  • You can successfully retrieve the Nginx welcome page by sending an HTTP request to any cluster node's IP address on the assigned NodePort.

Resources

  • Official Kubernetes Documentation: Deployments
  • Official Kubernetes Documentation: Services
  • kubectl create command reference

Possible Ways to Implement

Note: In the commands below, text in angle brackets (like <deployment-name>) represents placeholders that must be replaced with actual values. For this exercise, use the specific names required: web-frontend for the deployment and web-frontend-svc for the service.

Here are some hints and common approaches to solve this exercise:

  • For Deployment Creation:

    • Use the kubectl create deployment command with the --image and --replicas flags for a quick imperative approach. Note that this command cannot set resource requests, so add the CPU request afterwards (for example, with kubectl set resources).
    • Alternatively, create a YAML file (e.g., deployment.yaml) defining the Deployment object with apiVersion, kind, metadata, spec (including replicas, selector, and template with container image and ports). Then apply it using kubectl apply -f deployment.yaml.
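A sketch of the imperative route (the container name nginx is what kubectl derives from the image name):

```shell
# Imperative: create the Deployment, then add the CPU request in a second step,
# since kubectl create deployment has no flag for resource requests
kubectl create deployment web-frontend --image=nginx:1.25-alpine --replicas=3
kubectl set resources deployment web-frontend -c=nginx --requests=cpu=200m

# Declarative alternative: generate a starting manifest, edit in the
# resources section, then apply it
kubectl create deployment web-frontend --image=nginx:1.25-alpine --replicas=3 \
  --dry-run=client -o yaml > deployment.yaml
kubectl apply -f deployment.yaml
```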
  • For Verification:

    • Use kubectl get deployment <deployment-name> to check the deployment status.
    • Use kubectl get pods -l app=<label-value> to list pods managed by the deployment.
    • Use kubectl describe deployment <deployment-name> for detailed information, including events and pod templates.
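With the required names filled in, the verification steps might look like this; the app=web-frontend label is the default applied by kubectl create deployment (if you wrote your own manifest, use whatever label you chose):

```shell
kubectl get deployment web-frontend                 # READY column should show 3/3
kubectl get pods -l app=web-frontend --show-labels  # pods the Deployment manages
kubectl describe deployment web-frontend            # selector, pod template, events
```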
  • For Service Exposure:

    • Use the kubectl expose deployment web-frontend command, specifying --type=NodePort, --port, and --target-port. Also pass --name=web-frontend-svc, since without it the Service defaults to the Deployment's name. NodePort is used here for external access during testing; production environments typically use LoadBalancer or Ingress for better security and functionality.
    • Alternatively, define a Service object in a YAML file (e.g., service.yaml or within the same deployment.yaml separated by ---) with apiVersion, kind, metadata, spec (including type, selector, and ports). Then apply it using kubectl apply -f service.yaml.
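The imperative form, spelled out (note the --name flag, because kubectl expose would otherwise name the Service web-frontend):

```shell
kubectl expose deployment web-frontend --name=web-frontend-svc \
  --type=NodePort --port=80 --target-port=80
```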
  • For Access Verification:

    • Find the NodePort assigned to your service using kubectl get service <service-name>.
    • Get the IP address of one of your cluster nodes (e.g., kubectl get nodes -o wide).
    • Use curl http://<node-ip>:<node-port> from your local machine or another pod in the cluster to test connectivity.
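One way to script the access check; this sketch assumes the first node's InternalIP is reachable from wherever you run curl (on managed clusters you may need an ExternalIP or a node shell instead):

```shell
# Read the auto-assigned NodePort (from the default 30000-32767 range)
NODE_PORT=$(kubectl get service web-frontend-svc \
  -o jsonpath='{.spec.ports[0].nodePort}')

# Grab the InternalIP of the first node in the cluster
NODE_IP=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

curl "http://${NODE_IP}:${NODE_PORT}"   # should return the Nginx welcome page
```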

Real-World Significance

Deployments are the workhorse for managing stateless applications in Kubernetes, and mastering them is non-negotiable for the CKA exam. They provide the self-healing, scaling, and update mechanisms that make Kubernetes so powerful. While imperative commands are great for quick tasks, the declarative approach using YAML is the industry standard. It embodies the principle of Infrastructure as Code (IaC), enabling you to create auditable, version-controlled, and automated workflows for managing your application's lifecycle in a reliable and repeatable way.

Production Considerations: In real-world deployments, always include resource requests and limits to ensure proper scheduling and prevent resource contention. The CPU requests we've added here are essential for features like Horizontal Pod Autoscaling (HPA) and help the scheduler make informed placement decisions. Also, use current, supported container image versions and consider using specific tags rather than latest for reproducible deployments.
