MicroK8s (Kubernetes) – Raspberry Pi Ubuntu Linux Basic Setup Guide – Part 2 (Build Your Own Image and Deploy It)


Log onto your master node via SSH.

We’re going to build an image and then deploy it. As a developer you’d probably be developing on your own machine, possibly running Docker locally to test how your containers work, before pushing the code to a repository from where it can be applied to a production Kubernetes deployment for consumption by users.

So we’re going to install Docker and Python 3, which will allow us to quickly create an image, test that it works, and then deploy it to our MicroK8s cluster.

WARNING! The Raspberry Pi uses an ARM-based processor architecture, therefore you can only run images that are built for ARM; you can’t run x86/x64 architecture images.
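If you’re not sure which architecture you’re running, or whether a base image offers an ARM variant, there are a couple of quick checks you can do (a minimal sketch; the docker command assumes Docker is installed as per the next step and the image has already been pulled):

uname -m    # reports aarch64 for 64-bit or armv7l for 32-bit Ubuntu on the Pi

docker image inspect --format '{{.Os}}/{{.Architecture}}' python:3.7    # shows the platform a locally pulled image was built for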

Install Docker and Python

apt install docker.io

apt install python3

apt install python3-pip
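To check everything installed okay, and (optionally) to let the ubuntu user run Docker without needing root, you can do something like the following sketch (the group change only takes effect after you log out and back in):

sudo usermod -aG docker ubuntu

docker --version

python3 --version

pip3 --version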

Get a Sample Application

For this example we’re going to use Jason Haley’s hello world Python Flask application, because it’s a neat little application that shows how a requirements file can be used and how you can build an application with some dependencies.

Create a directory for it first under the “ubuntu” home directory.

mkdir ~/application

cd application

Let’s clone Jason’s Git Repository into the directory:

git clone https://github.com/JasonHaley/hello-python.git

cd hello-python/app

So now let’s install the Python requirements listed in the requirements.txt file from the Git repo.

pip3 install -r requirements.txt
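As an aside, if you’d rather not install the dependencies system-wide, a Python virtual environment works just as well; a rough sketch (on Ubuntu you may need to apt install python3-venv first):

python3 -m venv venv

source venv/bin/activate

pip install -r requirements.txt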

Now run the application:

python3 main.py

You can either run the above as a background task by adding & to the end of the command, or open another SSH session to your master node. Then run the following; if you get “Hello from Python!” back, the application is working:

ubuntu@k8s-master:~$ curl http://127.0.0.1:5000

Hello from Python!ubuntu@k8s-master:~$

So that looks to be working!

Create the Dockerfile

Okay, now we know the application works, let’s create a Dockerfile to start building our image.

cd ~/application/hello-python/app

Create a file called “Dockerfile”, and put this in it:

FROM python:3.7

RUN mkdir /app

WORKDIR /app

ADD . /app/

RUN pip install -r requirements.txt

EXPOSE 5000

CMD ["python", "/app/main.py"]

Now create the image:

docker build -f Dockerfile -t hello-python:local .

Wait for it to build. Once it has been built, run the following to list the Docker images:

docker image ls

As you can see there it is:

root@k8s-master:/home/ubuntu/application/hello-python/app# docker image ls

REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE

hello-python          local             b3d4b07093ba        5 seconds ago       874MB

Right, a quick explanation of the above: we created a Docker image that includes the Python application wrapped up in Flask (so it’s essentially a small web site), we then built it from the Dockerfile and tagged it “hello-python:local” (more about tags in a later guide). Also notice that the Dockerfile exposes port 5000; what that means is that when the image is deployed in Kubernetes it will present port 5000 within the cluster, and we can then decide what port we want to use for the outside world when we expose the application. Let’s test the image locally first, mapping host port 5001 to the container’s port 5000:

docker run -p 5001:5000 hello-python:local

Now it’s running, you should see something like:

# docker run -p 5001:5000 hello-python:local

 * Serving Flask app "main" (lazy loading)

 * Environment: production

   WARNING: This is a development server. Do not use it in a production deployment.

   Use a production WSGI server instead.

 * Debug mode: off

 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

Okay so now from another terminal run:

curl http://127.0.0.1:5001

And we see:

ubuntu@k8s-master:~$ curl http://127.0.0.1:5001

Hello from Python!

So what have we done? Well, we’ve run the Docker image locally and mapped host port 5001 to the application container’s port 5000. Now we’re ready to push this image into Kubernetes.
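Before moving on, stop the local test container with CTRL+C in the terminal it’s running in. If you started it in the background (for example with docker run -d), a sketch of the clean-up would be (your container ID will differ):

docker ps

docker stop <container-id>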

Push Docker Image into the Kubernetes Repository (Local Image Repository Method)

The image we created is known to Docker. However, Kubernetes is not aware of the newly built image. This is because your local Docker daemon is not part of the MicroK8s Kubernetes cluster. We can export the built image from the local Docker daemon and “inject” it into the MicroK8s image cache.

root@k8s-master:/home/ubuntu/application/hello-python/app# docker image ls

REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE

hello-python          local               b3d4b07093ba        About an hour ago   874MB

(You don’t need to include the “:local” tag in the save command below; if you leave the tag off, docker save will export every tag of the repository, which here is just the one.)

So first let’s export the docker image out:

# docker save hello-python > hello-python.tar

And push it directly into the image cache:

root@k8s-master:/home/ubuntu/apps/hello-python/app# microk8s ctr image import hello-python.tar

unpacking docker.io/library/hello-python:local (sha256:d84775f8b2406071344ceeb6a3007705dab7f7dae4b12727d26708902d007ab7)...done

Now check the image cache with the following and you should see it there ready for use:

microk8s ctr images ls
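As an aside, MicroK8s also has a built-in registry add-on which avoids the save/import step every time you rebuild; roughly (a sketch, assuming the add-on’s default port of 32000):

microk8s enable registry

docker tag hello-python:local localhost:32000/hello-python:local

docker push localhost:32000/hello-python:local

You’d then reference the image as localhost:32000/hello-python:local in your deployment. For the rest of this guide we’ll stick with the image we imported into the cache above.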

Deploying the Application into Kubernetes

To run the application on Kubernetes, create a file called deployment.yaml in the hello-python/app directory you were in earlier and put in these contents:

apiVersion: apps/v1

kind: Deployment

metadata:

  name: hello-python

spec:

  selector:

    matchLabels:

      app: hello-python

  replicas: 4

  template:

    metadata:

      labels:

        app: hello-python

    spec:

      containers:

      - name: hello-python

        image: hello-python:local

        imagePullPolicy: Never

        ports:

        - containerPort: 5000

Then create a file called service.yaml and put the following contents in:

apiVersion: v1

kind: Service

metadata:

  name: hello-python-service

spec:

  selector:

    app: hello-python

  ports:

  - port: 5000

  type: LoadBalancer

Here’s another example service.yaml file for you to experiment with; see if you can spot the difference it makes!

apiVersion: v1

kind: Service

metadata:

  name: hello-python-service

spec:

  selector:

    app: hello-python

  ports:

  - port: 8080

    targetPort: 5000

  type: LoadBalancer

You can also create the service from the command line, and in fact you can do the same for the deployment of the image if you want. You don’t need to run these steps if you’ve created the yaml files above; the following are shown here just for reference:

microk8s kubectl expose deploy hello-python --port 8080 --target-port 5000 --type LoadBalancer

microk8s kubectl expose deployment hello-python --type=LoadBalancer --name hello-python-service
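Similarly, the deployment itself can be created straight from the command line rather than from deployment.yaml; a rough equivalent of the yaml above would be (note you’d still need to scale it up to get the 4 replicas):

microk8s kubectl create deployment hello-python --image=hello-python:local

microk8s kubectl scale deployment hello-python --replicas=4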

Now that we have declared the application and service definitions, let’s deploy our application to Kubernetes with:

microk8s kubectl apply -f deployment.yaml

microk8s kubectl apply -f service.yaml
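If you just want to check a manifest without actually creating anything, recent kubectl versions support a client-side dry run, for example:

microk8s kubectl apply -f deployment.yaml --dry-run=client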

Okay, now that it is deployed, check with:

microk8s kubectl get pods

And after a few moments we see our 4 replica pods running!

root@k8s-master:/home/ubuntu/application/hello-python/app# microk8s kubectl get pods

NAME                            READY   STATUS    RESTARTS   AGE

hello-python-6bfc96894d-bt7rg   1/1     Running   0          11s

hello-python-6bfc96894d-jx7f9   1/1     Running   0          11s

hello-python-6bfc96894d-rqmjm   1/1     Running   0          11s

hello-python-6bfc96894d-zcxhg   1/1     Running   0          11s
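If the pods are still starting up, you can wait for the deployment to finish rolling out with:

microk8s kubectl rollout status deployment/hello-python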

Let’s check out the service too, so run:

microk8s kubectl get services

And there it is:

root@k8s-master:/home/ubuntu/application/hello-python/app# microk8s kubectl get services

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE

hello-python-service   LoadBalancer   10.152.183.204   192.168.1.20   5000:30212/TCP   26s

kubernetes             ClusterIP      10.152.183.1     <none>         443/TCP          25m

What you can see is that our application is running and is exposed on IP address 192.168.1.20, port 5000 (the external IP comes from your cluster’s load balancer address range, so yours may differ).

You should now be able to access it on http://192.168.1.20:5000 from your web browser (i.e. from your workstation, not from the Raspberry Pi itself!)
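You can also test it with curl from your workstation (substituting whatever external IP and port your service was given):

curl http://192.168.1.20:5000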

Remove the Application (Deployment and Service)

Let’s clean up what we’ve deployed, so run the following:

microk8s kubectl delete -f deployment.yaml

microk8s kubectl delete -f service.yaml
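A quick microk8s kubectl get pods and microk8s kubectl get services should confirm they’ve gone. If you also want to remove the imported image from the MicroK8s image cache, something like this sketch should do it (the reference matches what microk8s ctr images ls reported earlier):

microk8s ctr images rm docker.io/library/hello-python:local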

