A Solar Powered Weather Station - Part 2

Introduction

I want to say first off that the weather station has been running for over a year and a half now with zero intervention on my part 🎉. That is not to say everything has been smooth sailing, though. I’ve actually had more issues with the API/database side of things, which is what this post is going to address. Ok, so the content of this post won’t actually address any problems (it will most definitely create several more), but I like to convince myself I am being productive. Anyway, let’s deploy the weather station API on Kubernetes! 😱

Where To Even Begin…

I want to say up front that this is not a tutorial; it’s more something I can look back on to see how I did everything when it breaks, and maybe sharing my mistakes along the way will save someone some time. A few months ago I deployed a Kubernetes cluster in my home lab. The cluster consists of 5 virtual machines running Ubuntu 20.04: one configured as the control-plane node and the remaining 4 as workers. Surprisingly enough, setting up a vanilla cluster is fairly easy once you sift through the thousands of Kubernetes tutorials online to find the right one for whatever OS/K8s version combo you are running. From there the cluster sat for 2 months before I decided to do something with it. Bonus points if you get the naming scheme reference.
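
For anyone curious, the rough kubeadm flow looks something like the sketch below. Placeholders stand in for your own values and CNI of choice; this is the general shape of the process, not necessarily the exact commands I ran.

# on the control-plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# install a pod network add-on (Flannel, Calico, etc.)
kubectl apply -f <your-cni-manifest.yml>
# on each worker node, using the token and hash that kubeadm init prints out
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>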

kadmin@palomar:~$ kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
albright   Ready    <none>                 36d   v1.22.2
bowline    Ready    <none>                 36d   v1.22.2
haylard    Ready    <none>                 36d   v1.22.2
palomar    Ready    control-plane,master   36d   v1.22.2
trilene    Ready    <none>                 36d   v1.22.2

Prepare Your Container Images

Since the Storm Cloud API was already containerized (https://github.com/BuckarewBanzai/Storm-Cloud/tree/master/API), this made things much easier, but I still had to get the image into a registry somewhere. For the sake of learning and complexity I decided to host my own registry server on another VM I have. Thankfully this is extremely simple to do.

docker run -d -p 5000:5000 --restart=always --name registry registry:2

All we have to do now is open the registry server port in the firewall and start pushing images! This configuration is incredibly insecure and only done for testing purposes; always set up authentication on your registry servers if they are going to be used for anything besides testing. Next I had to prep the Storm Cloud API image for pushing to the registry, which was done with the commands below. In this case the registry server was on the same server the Storm Cloud API is hosted from, so we can push our image to localhost.

docker tag storm-cloud-api localhost:5000/storm-cloud-api
docker push localhost:5000/storm-cloud-api
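
As for the firewall part mentioned above, on an Ubuntu host running ufw it might be as simple as the line below (adjust for whatever firewall you actually use):

sudo ufw allow 5000/tcp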

Next we have to add our insecure registry to the Docker daemon on all of the Kubernetes nodes. Once again, do not do this in production settings and always use authentication. To do this we edit /etc/docker/daemon.json on each node and add the following line:

"insecure-registries":["registry-ip:5000"]

Next all we have to do is restart the docker daemon on each node and we’re all set!
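
On Ubuntu 20.04 with Docker installed as a systemd service, that restart is just:

sudo systemctl restart docker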

App Deployments And Services

Now that our API image is in the registry and all of the Docker daemons are configured correctly, we need to make a deployment YAML file for our application. I copied the nginx example from the Kubernetes website and tweaked it for my own app. It is set up to deploy 3 replicas of the API, expose port 8081 on the Kubernetes network, and pull the image from the private registry. All in all, fairly simple.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: stormcloud-deployment
  labels:
    app: storm-cloud-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: storm-cloud-api
  template:
    metadata:
      labels:
        app: storm-cloud-api
    spec:
      containers:
      - name: storm-cloud
        # no tag means :latest, so imagePullPolicy defaults to Always and the image
        # is pulled from the private registry on every pod start
        image: registry-ip:5000/storm-cloud-api
        ports:
        - containerPort: 8081

Next to deploy all we have to do is run:

kubectl apply -f stormcloud.yml

We can see more information and verify it is running across the cluster with the following commands:

kubectl describe deployments
kubectl get pods 
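
If you also want to see which worker each replica landed on, adding -o wide to get pods includes the node and pod IP columns:

kubectl get pods -o wide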

Now we have to expose our app with a service definition. I am still new to Kubernetes and am learning the best way to do this, but for testing purposes this was the fastest way I found to get the app up and working. We can use the expose command with type NodePort and then run describe services to get our exposed listening port. This port is not your application port; it is a high port opened on every node that Kubernetes forwards to the container port.

kubectl expose deployment stormcloud-deployment --type=NodePort
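
For reference, the expose command is really just creating a Service object behind the scenes; written out by hand it would look roughly like the sketch below (nodePort is left out so Kubernetes assigns one itself, which is how we end up with 31617 further down):

apiVersion: v1
kind: Service
metadata:
  name: stormcloud-deployment
spec:
  type: NodePort
  selector:
    app: storm-cloud-api
  ports:
  - port: 8081
    targetPort: 8081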

Then run the following to see the exposed service’s NodePort. This is the port we will connect to, using a worker IP, to use our app! You will notice there are only 2 endpoints even though the deployment YAML specified 3 replicas; that is because one of my worker nodes is still down from a power outage a few days ago.

kadmin@palomar:~/applications/stormcloud$ kubectl describe services

Name:                     stormcloud-deployment
Namespace:                default
Labels:                   app=storm-cloud-api
Annotations:              <none>
Selector:                 app=storm-cloud-api
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.83.117
IPs:                      10.96.83.117
Port:                     <unset>  8081/TCP
TargetPort:               8081/TCP
NodePort:                 <unset>  31617/TCP
Endpoints:                10.244.1.8:8081,10.244.2.8:8081
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Now we can go to any of our worker IP addresses in a web browser and access the API at http://worker-ip:31617/events. Ideally you would have a single ingress point with a load balancer handing out requests to different workers (we’ll get there, don’t worry). Also, this setup is incredibly insecure: credentials are hard coded into the container image, the registry has no authentication, and ingress is not properly handled for the API. These are all things I would like to fix once I get more comfortable with the platform, in addition to a proper CI/CD pipeline for deploying the app. Stay tuned for updates and thanks for reading!

 