
Creating ClusterIP, NodePort, and LoadBalancer Services

Scenario

A development team needs to deploy a new multi-tier application in the cluster. The application consists of a public-facing web frontend and a backend database service. As the Kubernetes administrator, you are responsible for deploying these components and setting up the networking to allow them to communicate correctly and to expose the frontend to the internet.

  • The database should only be accessible from within the cluster.
  • The frontend needs to be accessible from the internet for end-users.
  • For testing purposes, you also need to expose the frontend on a specific port on each node.

To accomplish this, you will create a ClusterIP service for the database, and both a NodePort and a LoadBalancer service for the frontend.

Requirements

  1. Create the Backend Database Deployment:

    • Create a Deployment named database-deployment.
    • Use the postgres:16-alpine image. (Note: You can verify this is current at https://hub.docker.com/_/postgres, but for this exercise, use the specified version for consistency.)
    • Ensure it has 1 replica.
    • Add the label app: database.
  2. Expose the Database Internally:

    • Create a ClusterIP Service named database-svc.
    • It must expose the database-deployment on port 5432.
    • This service should only be accessible from within the cluster.
  3. Create the Frontend Web Application Deployment:

    • Create a Deployment named frontend-deployment.
    • Use the nginx:1.25-alpine image.
    • Ensure it has 2 replicas.
    • Add the label app: frontend.
  4. Expose the Frontend for Testing (NodePort):

    • Create a NodePort Service named frontend-nodeport-svc.
    • It must expose the frontend-deployment on port 80.
    • The service must be accessible on port 30007 on each node in the cluster.
    • Note: NodePort services use ports in the range 30000-32767 by default. Ensure the chosen port is not already in use.
  5. Expose the Frontend to the Internet (LoadBalancer):

    • Create a LoadBalancer Service named frontend-lb-svc.
    • It must expose the frontend-deployment on port 80.
    • This service should be accessible from the internet via an external IP address provided by a cloud provider.
    • Note: In local clusters (minikube, kind), the service's external IP will remain in the "Pending" state unless you install a local load balancer implementation such as MetalLB. A combined manifest covering all five requirements is sketched after this list.
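A minimal declarative sketch covering all five requirements is shown below, assuming the default namespace. The POSTGRES_PASSWORD value is a placeholder added only because the postgres image refuses to start without a password (or a trust setting); it is not part of the requirements, and the filename you save this under is arbitrary.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database-deployment
  labels:
    app: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          env:
            # Placeholder only: the postgres image exits unless a password
            # (or trust auth) is configured. Not required by the exercise text.
            - name: POSTGRES_PASSWORD
              value: example-password
---
apiVersion: v1
kind: Service
metadata:
  name: database-svc
spec:
  type: ClusterIP          # the default; shown explicitly for clarity
  selector:
    app: database
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: nginx
          image: nginx:1.25-alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-nodeport-svc
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30007      # must be free and within the 30000-32767 range
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb-svc
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
```

Applying this single file with kubectl apply -f creates all five objects at once, which keeps the exercise reproducible.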

Acceptance Criteria

  • The database-deployment and frontend-deployment are running with the correct number of replicas.
  • The database-svc is a ClusterIP service and correctly selects the database pod.
  • The frontend-nodeport-svc is a NodePort service, is accessible on port 30007 on the nodes, and correctly selects the frontend pods.
  • The frontend-lb-svc is a LoadBalancer service, receives an external IP (in a cloud environment), and correctly selects the frontend pods.
  • The frontend pods can connect to the database using the DNS name database-svc, verified by DNS resolution and network connectivity tests such as those sketched below.
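One way to check that last criterion is sketched below. It assumes the BusyBox tools shipped in nginx:1.25-alpine (nslookup, and an nc build that accepts -z); if nc lacks that flag, the throwaway busybox pod in the final command is a safe fallback.

```bash
# Resolve the service name from inside a frontend pod (kubectl exec accepts
# deployment/<name> and picks one of its pods).
kubectl exec deploy/frontend-deployment -- nslookup database-svc

# Check TCP connectivity to the database port via the service DNS name.
kubectl exec deploy/frontend-deployment -- nc -z -w 2 database-svc 5432

# Fallback: a short-lived pod with known tooling.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup database-svc
```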

Possible Ways to Implement

  • For Creating Deployments:

    • Use kubectl create deployment with --image and --replicas, then add the required label with kubectl label deployment. Keep in mind that this labels the Deployment object itself; the pod-template label that Services select on is set to app: <deployment-name> automatically by kubectl create deployment.
    • Alternatively, define the Deployment in a YAML file and apply it with kubectl apply -f.
  • For Creating Services:

    • Use kubectl expose deployment with the --type flag (ClusterIP, NodePort, or LoadBalancer).
    • To specify a nodePort, you may need to create the service declaratively using a YAML file.
    • For ClusterIP, you can omit the --type flag as it's the default.
  • For Verification:

    • Use kubectl get deployments,services,pods -o wide to get a comprehensive overview.
    • Use kubectl describe service <service-name> to inspect the service's selector and endpoints.
    • To test connectivity, you can kubectl exec into a pod and use curl, wget, or another tool to reach the service's DNS name, as shown in the command sketch after this list.
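As a concrete illustration of the imperative route, the commands below sketch one possible sequence. kubectl expose has no flag for pinning a specific nodePort, so that service is generated with --dry-run=client, edited, and applied; the filename and the POSTGRES_PASSWORD value are assumptions made for illustration.

```bash
# Deployments (kubectl create deployment already sets app=<name> on the pod
# template; the extra labels below go on the Deployment objects only).
kubectl create deployment database-deployment --image=postgres:16-alpine --replicas=1
kubectl label deployment database-deployment app=database
kubectl create deployment frontend-deployment --image=nginx:1.25-alpine --replicas=2
kubectl label deployment frontend-deployment app=frontend

# Placeholder password so the postgres container starts (not part of the exercise text).
kubectl set env deployment/database-deployment POSTGRES_PASSWORD=example-password

# ClusterIP is the default service type, so --type can be omitted.
kubectl expose deployment database-deployment --name=database-svc --port=5432

# LoadBalancer service for the frontend.
kubectl expose deployment frontend-deployment --name=frontend-lb-svc \
  --type=LoadBalancer --port=80

# NodePort service: generate YAML, set nodePort: 30007, then apply.
kubectl expose deployment frontend-deployment --name=frontend-nodeport-svc \
  --type=NodePort --port=80 --dry-run=client -o yaml > frontend-nodeport-svc.yaml
# ...edit the file to add nodePort: 30007 under ports, then:
kubectl apply -f frontend-nodeport-svc.yaml

# Verification.
kubectl get deployments,services,pods -o wide
kubectl describe service frontend-nodeport-svc
```

Note that kubectl expose copies each Service's selector from its Deployment, so the Services match the pods even though the extra app=database and app=frontend labels live only on the Deployment objects.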

Real-World Significance

Understanding Kubernetes Service types is fundamental to controlling network traffic in your cluster.

  • ClusterIP is the workhorse for internal, service-to-service communication, forming the backbone of a microservices architecture.
  • NodePort is an essential tool for development, testing, or exposing services in on-premise environments where a cloud load balancer isn't available.
  • LoadBalancer is the standard, production-grade method for exposing applications to the internet, providing a stable, publicly accessible endpoint that distributes traffic across your pods.

A CKA candidate must be able to choose the correct Service type for a given scenario and configure it correctly to ensure application availability, security, and scalability.
