MicroK8s (Kubernetes) – Raspberry Pi Ubuntu Linux Basic Setup Guide – Part 3 (Further Tasks)


Well, Part 2 was very long, so let's have a shorter one and cover how you can adjust your application on the fly and add the other worker nodes to the cluster.

Updating an Application or Service on the Fly

Kubernetes is a system whereby you declare what you want the “world” to look like. If you make a change to the file, you can then say “I want the world to look like this now”, and Kubernetes will adjust the “world” to match.
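If you want a preview of what Kubernetes would change before committing to it, kubectl has a built-in diff subcommand (available in the 1.18 release used in this series). This is purely an optional sanity check; nothing later in the guide depends on it:

microk8s kubectl diff -f service.yaml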

Let’s change the service so we’re advertising it on port 8080 rather than 8000. Edit the service.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: hello-python-service
spec:
  selector:
    app: hello-python
  ports:
  - port: 8080
    targetPort: 5000
  type: LoadBalancer
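A quick note on the two numbers: port is what the Service itself exposes (8080 after this change), while targetPort is the container port the application from Part 2 is listening on (5000). If you want the full field documentation, kubectl can show it directly:

microk8s kubectl explain service.spec.ports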

Then apply the changes.

microk8s kubectl apply -f service.yaml

Then run:

microk8s kubectl get services

And as you can see, it’s now on port 8080.

root@k8s-master:/home/ubuntu/application/hello-python/app# kubectl get services
NAME                         TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
hello-python-service         LoadBalancer   10.152.183.34    192.168.1.20   8080:30740/TCP   37h
kubernetes                   ClusterIP      10.152.183.1     <none>         443/TCP          38h

Now go to http://192.168.1.20:8080, and you’ll find the site is now accessible there.
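You can also check this from the command line rather than the browser. Assuming 192.168.1.20 is still the external IP shown in the output above:

curl http://192.168.1.20:8080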

Adding Worker Nodes to the Cluster

So currently we only have a single node within the cluster (the master node). We want to add some more nodes to act as worker nodes; we’ll call these k8s-worker-01 and k8s-worker-02. Once you have them built and on the network, we can continue with adding them to the MicroK8s cluster.
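If the new Raspberry Pis were flashed from the same Ubuntu image, it’s worth double-checking that each one has the right hostname before it joins, as that’s the name it will register with in the cluster. A minimal sketch, run on each worker (swap in k8s-worker-02 on the second node):

sudo hostnamectl set-hostname k8s-worker-01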

First we need to install microk8s on each of the two new worker nodes.

snap install microk8s --classic
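Before moving on, it’s worth letting the snap finish starting up on each worker. MicroK8s has a status command that will block until the node is ready:

microk8s status --wait-ready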

Once installed on all the nodes, you now need to run this command on our master node (k8s-master).

microk8s.add-node

You’ll see a token on the screen; make a note of this, as you’ll need it for the worker nodes. So let’s log on to each worker node and run:

microk8s.join <master_ip>:<port>/<token>
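As a purely illustrative example, with a master at 192.168.1.10 and MicroK8s listening on its usual cluster-agent port of 25000, the command would look something like this (the IP, port and token below are hypothetical; use exactly what add-node printed on your master):

microk8s.join 192.168.1.10:25000/JKMFGXWNZVeUCGVjJjBEkvVrxFnrLMBj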

You may also need to add firewall rules and exceptions on the master and all the worker nodes to allow communication to work properly.

sudo ufw allow in on cni0 && sudo ufw allow out on cni0

sudo ufw default allow routed
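Depending on how strict your ufw defaults are, you may also need to open the ports MicroK8s itself uses between nodes. As an assumption (these are the usual MicroK8s defaults, so check your own setup), 16443 is the API server and 25000 is the cluster agent used during the join:

sudo ufw allow 16443/tcp && sudo ufw allow 25000/tcp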

Now let’s take a look at our new three-node cluster with:

microk8s kubectl get nodes

root@k8s-master:/home/ubuntu/application/hello-python/app# microk8s kubectl get nodes
NAME            STATUS     ROLES    AGE   VERSION
k8s-master      Ready      <none>   38h   v1.18.6-1+b4f4cb0b7fe3c1
k8s-worker-01   NotReady   <none>   66s   v1.18.6-1+b4f4cb0b7fe3c1
k8s-worker-02   NotReady   <none>   7s    v1.18.6-1+b4f4cb0b7fe3c1

After a few minutes, all being well you should see:

root@k8s-master:/home/ubuntu/application/hello-python/app# microk8s kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
k8s-master      Ready    <none>   38h   v1.18.6-1+b4f4cb0b7fe3c1
k8s-worker-01   Ready    <none>   11m   v1.18.6-1+b4f4cb0b7fe3c1
k8s-worker-02   Ready    <none>   10m   v1.18.6-1+b4f4cb0b7fe3c1

Our three-node cluster is now ready for action. Now when we deploy applications, we will find the pods spreading out across all the available nodes.

For now go and redeploy the application from Part 2, and see what happens when you run the below:

microk8s kubectl get pods -o wide

What do you notice about the column reporting the host node for the pod?
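If you want to make the spread even more obvious, you can scale the deployment up and watch where the new pods land. This assumes the deployment from Part 2 was named hello-python; adjust the name if yours differs:

microk8s kubectl scale deployment hello-python --replicas=6

microk8s kubectl get pods -o wide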
