As a Kubernetes Administrator for a rapidly growing tech company, you've been tasked with deploying a new stateless web application. The development team has provided a container image and requires that the application be deployed in a way that ensures high availability and can be easily scaled in the future. Your job is to use a Kubernetes Deployment to create a robust and reliable setup for this application.
Create the Deployment:
- Name the Deployment web-frontend.
- Use the nginx:1.25-alpine container image.
- Set a CPU request of 200m (this will be important for later exercises).

Verify the Deployment:
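A quick way to check the rollout from the command line (a sketch; it assumes the app=web-frontend label, which kubectl create deployment applies by default):

```shell
# Block until all replicas are rolled out and available
kubectl rollout status deployment/web-frontend

# Confirm the Deployment and the Pods it manages
kubectl get deployment web-frontend
kubectl get pods -l app=web-frontend
```

These commands require a running cluster, so run them wherever your kubeconfig points.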
Expose and Access the Deployment:
- Create a Service named web-frontend-svc to provide a stable endpoint for the web-frontend Deployment.
- Make the Service type NodePort so it is accessible from outside the cluster for testing purposes.

To complete the exercise, verify that:

- A Deployment named web-frontend exists in the default namespace.
- All of its Pods reach the READY state.
- A Service named web-frontend-svc exists and is of type NodePort.
- The Service's selector targets the Pods of the web-frontend Deployment.

Note: In the commands below, text in angle brackets (like <deployment-name>) represents placeholders that must be replaced with actual values. For this exercise, use the specific names required: web-frontend for the deployment and web-frontend-svc for the service.
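A declarative manifest satisfying the Deployment requirements might look like the following sketch. The replica count of 3 and containerPort 80 are illustrative assumptions (80 is nginx's default listen port); only the name, image, and 200m CPU request come from the exercise:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  labels:
    app: web-frontend
spec:
  replicas: 3               # illustrative; use the count your exercise requires
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.25-alpine
        ports:
        - containerPort: 80   # nginx's default listen port
        resources:
          requests:
            cpu: 200m         # required by the exercise; used in later HPA work
```

Apply it with kubectl apply -f deployment.yaml.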
Here are some hints and common approaches to solve this exercise:
For Deployment Creation:
- Imperative: use the kubectl create deployment command with --image and --replicas flags for a quick approach.
- Declarative: write a manifest file (e.g., deployment.yaml) defining the Deployment object with apiVersion, kind, metadata, and spec (including replicas, selector, and a template with the container image and ports). Then apply it using kubectl apply -f deployment.yaml.

For Verification:
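The imperative route might look like this sketch. The replica count of 3 is an illustrative choice; note that kubectl create deployment has no flag for resource requests, so the 200m CPU request has to be added afterwards with kubectl set resources:

```shell
# Create the Deployment imperatively
kubectl create deployment web-frontend \
  --image=nginx:1.25-alpine --replicas=3

# Add the required CPU request in a second step
kubectl set resources deployment web-frontend --requests=cpu=200m
```

The second command triggers a new rollout, since it modifies the Pod template.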
- Use kubectl get deployment <deployment-name> to check the Deployment status.
- Use kubectl get pods -l app=<label-value> to list the Pods managed by the Deployment.
- Use kubectl describe deployment <deployment-name> for detailed information, including events and the Pod template.

For Service Exposure:
- Imperative: use the kubectl expose deployment web-frontend command, specifying --type=NodePort, --port, and --target-port. NodePort is used here for external access during testing; production environments typically use LoadBalancer or Ingress for better security and functionality.
- Declarative: write a Service manifest (e.g., in service.yaml, or in the same deployment.yaml separated by ---) with apiVersion, kind, metadata, and spec (including type, selector, and ports). Then apply it using kubectl apply -f service.yaml.

For Access Verification:
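The imperative exposure might look like this sketch. Port 80 assumes nginx's default listen port, and --name is needed because kubectl expose would otherwise name the Service after the Deployment:

```shell
kubectl expose deployment web-frontend \
  --name=web-frontend-svc \
  --type=NodePort \
  --port=80 --target-port=80
```

Kubernetes assigns the node port automatically from the cluster's NodePort range (30000-32767 by default).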
- Find the assigned node port with kubectl get service <service-name>.
- Find a node's IP address (e.g., with kubectl get nodes -o wide).
- Use curl http://<node-ip>:<node-port> from your local machine or another pod in the cluster to test connectivity.

Deployments are the workhorse for managing stateless applications in Kubernetes. Mastering them is non-negotiable for the CKA. They provide the self-healing, scaling, and update mechanisms that make Kubernetes so powerful. While imperative commands are great for quick tasks, the declarative approach using YAML is the industry standard. It embodies the principle of Infrastructure as Code (IaC), enabling you to create auditable, version-controlled, and automated workflows for managing your application's lifecycle in a reliable and repeatable way.
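The access-verification steps can be scripted; here is a sketch. The jsonpath expressions assume a single-port Service and that the first node's InternalIP is reachable from wherever you run curl (on managed clouds you may need an ExternalIP instead):

```shell
# Grab the auto-assigned node port and one node address
NODE_PORT=$(kubectl get service web-frontend-svc \
  -o jsonpath='{.spec.ports[0].nodePort}')
NODE_IP=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

# A successful request returns the nginx welcome page
curl http://${NODE_IP}:${NODE_PORT}
```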
Production Considerations: In real-world deployments, always include resource requests and limits to ensure proper scheduling and prevent resource contention. The CPU requests we've added here are essential for features like Horizontal Pod Autoscaling (HPA) and help the scheduler make informed placement decisions. Also, use current, supported container image versions and consider using specific tags rather than latest for reproducible deployments.
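A container resources stanza with both requests and limits, as recommended above, might look like the following. The memory and CPU-limit values are illustrative additions, not part of this exercise:

```yaml
resources:
  requests:
    cpu: 200m        # guaranteed minimum; informs scheduling and HPA
    memory: 64Mi     # illustrative
  limits:
    cpu: 500m        # container is throttled above this
    memory: 128Mi    # container is OOM-killed above this; illustrative
```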