K8S: Deploy an Application



Simple Deployment of an Application

```shell
# Pull the image from the web to deploy and run the application named mykubia
kubectl run mykubia --image=luksa/kubia --port=9999 --generator=run/v1
```

  • `--image=luksa/kubia`: Specifies the container image to run.
  • `--port=9999`: Specifies that the container listens on port 9999.
  • `--generator=run/v1`: Tells the K8s cluster to create a ReplicationController instead of a Deployment. (This flag has since been deprecated and removed in newer versions of kubectl.)

```shell
$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mykubia   1/1     Running   0          63s
```

Check logs with the command `kubectl logs -f mykubia`.

What Happens Behind the Scenes From Running the Command to the Container Starting?

  1. After building the image, you must first push it to Docker Hub by executing `docker push luksa/kubia`.
  2. The local Docker daemon uploads the image to Docker Hub.
  3. kubectl receives the command `kubectl run mykubia --image=luksa/kubia --port=9999`.
  4. kubectl sends the request to the Kubernetes REST API server.
  5. After the master node receives the request, it creates a Pod and schedules it to a worker node.
  6. The worker node receives the scheduling notification from the master node.
  7. The Kubelet on the worker node instructs the local Docker instance to run the image.
  8. Docker on the worker node pulls the image from Docker Hub and runs it as a container.

Finally, when you run `kubectl get pods`, you can see the running status of your Pod.

How Can External Clients Access the Service Inside a Pod?

View more details with the command `kubectl get pods -o wide`.

Although every Pod has its own IP address, this IP is only reachable from within the cluster and cannot be accessed from outside the cluster.

Obviously you cannot reach it from an external machine using `curl 172.18.0.6:9999`. To allow external access to the service running inside an internal Pod, we can create a service of the special `LoadBalancer` type.

Create a Service Object

`ClusterIP` is the default service type in K8s; a ClusterIP service is only reachable from within the cluster and cannot be accessed from outside. Therefore, we need to create a service of type `LoadBalancer`.

```shell
kubectl expose rc mykubia --type=LoadBalancer --name kubia-http
```

  • `rc mykubia`: Tells K8s to expose the ReplicationController we created earlier; `rc` is the abbreviation for ReplicationController.
  • `--type=LoadBalancer`: Specifying the `LoadBalancer` type creates an external load-balancing service. External clients can access the internal Pods through this service's IP address.
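For reference, the `kubectl expose` command above generates a Service object roughly equivalent to the following manifest. This is a sketch, not the exact object K8s creates; in particular, the `run: mykubia` selector label is an assumption based on the label `kubectl run` attaches to the Pods it creates.

```yaml
# Sketch of the Service created by `kubectl expose` (field values assumed)
apiVersion: v1
kind: Service
metadata:
  name: kubia-http
spec:
  type: LoadBalancer      # provision an external load balancer
  selector:
    run: mykubia          # forward to Pods carrying this label (assumed label key)
  ports:
  - port: 9999            # port the service exposes
    targetPort: 9999      # container port the Pod listens on
```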

Viewing Services

We can view the service list using `kubectl get services`.

When viewing the service list, check the EXTERNAL-IP column. It will show `<pending>` at first; after a short while, K8s will assign an external IP to the service.
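One way to wait for the address and then test it from outside the cluster is sketched below. `<EXTERNAL-IP>` is a placeholder for whatever address eventually appears in the EXTERNAL-IP column, and port 9999 assumes the port used throughout this example.

```shell
# Watch the service until EXTERNAL-IP changes from <pending> to a real address
kubectl get services kubia-http --watch

# Then, from outside the cluster, send a request to the service
# (substitute the real address for the placeholder)
curl http://<EXTERNAL-IP>:9999
```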

Relationship Between ReplicationController, Pod, and Service

When we created our Pod earlier, it was not created directly: we created a ReplicationController, and that rc then created the Pod instance for us.

To allow external access to the Pod, we need to have K8s expose all Pods managed by the ReplicationController externally through a single service, which is how we get kubia-http.

  • The service has an externally exposed IP, and requests are sent to the service.
  • The service forwards the request to port 9999 on the Pod, and the Pod then handles the request.

ReplicationController

From the example above, you can see that a ReplicationController is used to replicate Pods, that is, to create and maintain multiple Pod replicas. A ReplicationController always ensures that the desired number of Pod instances is running (in our case, one).

If the Pod we created above fails and disappears, the ReplicationController will create a new Pod to replace the missing Pod.
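The ReplicationController behind our Pod can be sketched as a manifest like the following. The field values are assumptions reconstructed from the `kubectl run` command we executed, not output captured from a real cluster.

```yaml
# Sketch of the ReplicationController behind mykubia (field values assumed)
apiVersion: v1
kind: ReplicationController
metadata:
  name: mykubia
spec:
  replicas: 1               # desired number of Pod instances
  selector:
    run: mykubia            # Pods matching this label are managed (assumed key)
  template:                 # blueprint used to create replacement Pods
    metadata:
      labels:
        run: mykubia
    spec:
      containers:
      - name: mykubia
        image: luksa/kubia
        ports:
        - containerPort: 9999
```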

The Role of Service

A service like the kubia-http service we created earlier solves a key problem: when a Pod disappears, ReplicationController creates a new one to replace it, and every Pod has its own unique hostname and IP address. Pods can crash and disappear for any number of reasons, and get replaced by new Pods, which means the IP of the Pod changes. How can external clients reliably access our service in this scenario?

This is where service comes in.

  • service solves the problem of constantly changing Pod IP addresses.
  • service can expose multiple Pods externally through a single static IP address and port.

When a service is created, it is assigned a static IP address. This IP address does not change for the entire lifecycle of the service. Clients only need to connect to the service via this fixed IP, and the service will forward the request to one of the internal Pods.

Increasing the Number of Replicas

We currently only have one replica running in our system, and we can scale it up to 3 replicas.

```shell
# Check the current number of replicas of the application
kubectl get replicationcontrollers
```

Scale the number of replicas of mykubia to 3:

```shell
kubectl scale rc mykubia --replicas=3
```

This command only tells K8s the desired number of replicas in the system; it does not tell K8s how to perform the operation or how to implement it.

K8s will automatically check whether the current state matches the desired state, and adjust it if they do not match.
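This desired-versus-actual reconciliation can be illustrated with a toy shell loop. It is purely an illustration of the control-loop idea; nothing here talks to a real cluster, and the variable names are made up.

```shell
# Toy reconciliation loop: converge actual state toward desired state
desired=3
actual=1   # pretend one replica is already running
while [ "$actual" -lt "$desired" ]; do
  actual=$((actual + 1))   # stand-in for "create one more Pod"
  echo "created replica, now $actual/$desired running"
done
# The loop exits once actual equals desired
```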

Updated System State

After executing the above command, we have scaled the system from 1 replica to 3 replicas. This demonstrates how easily K8s enables horizontal scaling. When we need to add more replicas, we no longer need to manually install and run additional instances; we only need to run the command and modify the desired number of replicas.

Of course, the service we deploy into Pods also needs to be stateless and horizontally scalable to take full advantage of K8s' capabilities.

External requests are sent to the service, which forwards the request to any one of the Pods, and the corresponding Pod handles the request.
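You can observe this load balancing from outside the cluster by repeating the request. If the container reports its own hostname in the response (as the luksa/kubia example app does), consecutive requests will show different Pods answering; `<EXTERNAL-IP>` is again a placeholder for the service's external address.

```shell
# Send several requests; with 3 replicas, responses should come from
# different Pods (visible if the app echoes its hostname)
for i in 1 2 3 4 5; do
  curl -s http://<EXTERNAL-IP>:9999
done
```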


This is a standalone discussion topic separated from the original post at https://juejin.cn/post/7368079196445130763