2016 is over. In Christmas 2015 I did something really useful: prepare some tech recipes based on something I was curious about: containers. This year I would like to repeat the experience. Next station: Kubernetes. In this year's Christmas tech recipe I am going to cook a 5 x Raspberry Pi 3 Kubernetes cluster. Come in and take a seat. Everything is in place and ready to cook. You'll need some hardware and 2 hours and 20 minutes of your time.

[Image: img_3755]

Introduction, motivation and credits

Getting back home from one of my business trips, I was thinking about how I could easily explain concepts such as containers or cloud computing. As these concepts are "new" and very abstract, people tend to reject them and prefer to stay with the old-fashioned application server or IIS, manually operated (with all the cons that this implies). But cloud computing sounds like something magic… and people don't like magic. So it is a difficult fight. I thought about having something I could carry in my bag and travel with: put it on a table, turn it on and demo a private cloud. Explain the concepts and show that there is no magic, but a real chance to improve how applications are designed, implemented and operated.

On the other hand, I've been working with my colleagues from everis Zaragoza on some internal stuff. Juan Larriba (@compilemymind), my dear Kubemeister, opened the door for me to this exciting world called Kubernetes. Free. Open source. From Google (and recently adopted by Microsoft in Azure under the Azure Container Service stack).

So, why not? A Kubernetes cluster in a Raspberry Pi 3 I can carry and demo all around the world.

A quick prayer to Google threw back results immediately. And the result was obvious: I was not the first with this idea:

So thanks to Arjen Wassink and Ray Tsang (@saturnism) for their shopping list, thoughts and the great video posted on the Kubernetes blog.
Also remarkable is the work of Lucas Käldström (@luxas) and his kubernetes-on-arm project, which has been officially merged into the official Kubernetes release. He is also collaborating on other interesting tools like kubeadm. Don't miss Lucas' story in this blog post. Congratulations.

During this Christmas recipe I have been playing with several options. However, all were outdated, manual, and did not work after all. The best starting point I found was Roland Huß's Ansible playbook. So I picked up his work, pulled out what I thought was not necessary, and then implemented an automated install following the step-by-step guide from kubeadm.

You can find the result (and you will need to clone it) at my GitHub repository: kubernetes-raspberry-pi.

All said, it is time to get back to work.

Ingredients

This recipe is not cheap at all. You will need some hardware. But the result is worth it:

| Amount | Part | Price |
| --- | --- | --- |
| 5 | [Raspberry Pi 3](http://amzn.eu/0Gxy4ku) | 5 × 39 EUR |
| 5 | [Micro SD Card 32 GB](http://amzn.eu/5IMqzRx) | 5 × 11 EUR |
| 1 | [WLAN Router](http://amzn.eu/3Zzxmpt) | 24 EUR |
| 1 | [100 MB Switch](http://amzn.eu/ixBjdx3) | 12 EUR |
| 5 | Micro-USB wires | 5 × 1 EUR |
| 1 | [Power Supply](https://www.modmypi.com/raspberry-pi/accessories/usb-hubs/anidees-6-port-smart-ic-usb-charger-50-watt/) | 43 EUR |
| 3 | [Multi stackable case](https://www.modmypi.com/raspberry-pi/cases/multi-pi-stacker/multi-pi-stackable-raspberry-pi-case/) | 3 × 16 EUR |
| 1 | [Bolt pack](https://www.modmypi.com/raspberry-pi/cases/multi-pi-stacker/multi-pi-stackable-raspberry-pi-case-bolt-pack/) | 6 EUR |
And some open source tools you need to install:
| Name | Origin | URL |
| --- | --- | --- |
| HypriotOS | GitHub | [hypriot/image-builder-rpi](https://github.com/hypriot/image-builder-rpi/releases/) |
| flash | GitHub | [hypriot/flash](https://github.com/hypriot/flash) |
| Ansible | Website | [Ansible installation guide](http://docs.ansible.com/ansible/intro_installation.html) |
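
Before going on, make sure the tools are installed on your workstation. Here is a sketch of one way to do it on macOS or Linux (check each project's README for the authoritative steps; the flash download URL below reflects the repository layout at the time of writing):

$ brew install ansible   # or: pip install ansible
$ curl -O https://raw.githubusercontent.com/hypriot/flash/master/$(uname -s)/flash
$ chmod +x flash
$ sudo mv flash /usr/local/bin/flash
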
What you'll get

It is really cool, isn't it?

[Image: img_3758]

Assembling the toy (2 hours)

You will find the assembly quite straightforward: a bolt here and there, and not much else. There are only two hacks we applied:

  1. We removed the switch case, made some custom holes in the chassis and mounted it with bolts, like the rest of the Pis. This way it fits perfectly within the structure.
  2. We followed a similar process with the WLAN router (the small box at the top): unmount, drill, bolt but, in this case, close the case again. The holes in the board were not suitable to fix it directly, so we preferred to make the holes in the case and leave it like this.

Both the router and the switch are tightly mounted.

[Image: img_3756]

Installing Kubernetes (20 minutes)

Important: Please note that this playbook uses kubeadm, which is currently in alpha state. I will try to check periodically that everything still works until the kubeadm team releases the final version.

After days of research, try/fail, surfing and so on, I finally found a clean and fast way to install Kubernetes on ARM: kubeadm. I made a first manual installation to check that everything was OK and the installation was smooth. And it was. So, as mentioned before, I took Roland Huß's Ansible playbook and adjusted it to work with kubeadm.

Flash all SD cards

First of all, download the latest HypriotOS image from GitHub. Once downloaded, open your Terminal, cd to where the image is and, using Hypriot's flash utility, flash all 5 SD cards with the HypriotOS operating system:

$ sudo flash hypriotos-rpi-v1.1.3.img

In 50 seconds, more or less, one SD card is ready.
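
If you don't want to retype the command five times, a small shell loop does the trick (a minimal sketch; it assumes the flash script is on your PATH and the image sits in the current directory):

for i in 1 2 3 4 5; do
  # Wait for the next card to be swapped in before flashing it
  read -p "Insert SD card $i and press Enter to flash it... "
  sudo flash hypriotos-rpi-v1.1.3.img
done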

Now, put all the SD cards into the Pis, and turn them on.

Setup the router

The router is very important. For demo purposes, I have configured it in Hotspot mode:

  1. It connects to any existing WiFi network, and shares it with all computers connected either by LAN or WLAN.
  2. It creates a new WiFi network, so you can connect to the cluster with your computer's or your mobile phone's WiFi.
  3. It acts as a DHCP server.

To set it up, connect to it (my original WiFi network was TP-LINK_3224) and browse to http://tplinkwifi.net (or http://192.168.0.1).

I strongly recommend that you, first of all, reconfigure the DHCP server to use an IP range outside the normal LAN range. I chose 172.16.x.x:

  1. Go to DHCP > DHCP Settings, and fill in the settings as follows:

[Image: dhcp-setup1]

  2. With all the Pis flashed and connected to the network, go to DHCP > Client List and take note of their MAC addresses:

[Image: dhcp-setup3]

  3. Go to DHCP > Address Reservation, and set a static (reserved) IP address for each Pi. (The IP is important, because the Ansible playbook assumes 172.16.0.10 = master, 172.16.0.11 = node1… and so on.)

[Image: dhcp-setup2]

With the DHCP in place, it is time for the hotspot. The router includes a Quick Setup (which you have probably already completed in Router mode in the first place). Anyway, you can rerun this setup again from the Quick Setup menu:

  1. In operation mode, choose "Hotspot Router":

[Image: hotspot-setup1]

  2. Dynamic IP will be the most common choice if your main network has a DHCP server:

[Image: hotspot-setup2]

  3. Look for the WiFi network you want to connect the hotspot to. This will be your gateway to the Internet:

[Image: hotspot-setup3]

  4. Fill in the hotspot WiFi information (the network with Internet access you want to connect to):

[Image: hotspot-setup4]

  5. Ready to go (you may also want to change the Wireless Network Name (SSID) you share; this is done in the "Wireless" section):

[Image: hotspot-setup5]

Reboot all the Raspberry Pis so the DHCP server assigns the newly defined static IPs, and we are ready to deploy the software. (You may check that the IPs are properly configured by getting back to DHCP > DHCP Client List.)

Getting the Ansible playbook

You should download, fork or whatever (pull requests welcome!) the repository from https://github.com/sesispla/kubernetes-raspberry-pi:

$ git clone https://github.com/sesispla/kubernetes-raspberry-pi.git

Now, copy the following files:

$ cp hosts.example hosts
$ cp config.example.yml config.yml

  • hosts is the inventory file from which Ansible reads the addresses of the Pis to configure
  • config.yml includes some configuration values I moved out of the playbooks to make things easier

Edit hosts with your favourite text editor:

[pis]
master.cluster.local name=master.cluster.local
node1.cluster.local name=node1.cluster.local
node2.cluster.local name=node2.cluster.local
node3.cluster.local name=node3.cluster.local
node4.cluster.local name=node4.cluster.local

[master]
master.cluster.local

[nodes]
node1.cluster.local
node2.cluster.local
node3.cluster.local
node4.cluster.local

Under [pis], set all the Raspberry Pi devices you want to configure.
Under [master], set the one (and only one) Pi that will act as master.
Under [nodes], set all the remaining Pis that will act as cluster nodes.

Important: Please note that I am using DNS names instead of IP addresses. I created a DNS entry for each Pi in my /etc/hosts file, targeting every hostname to one of the static IP addresses.

My /etc/hosts file looks like this:

172.16.0.10 kubernetes
172.16.0.10 master.cluster.local
172.16.0.11 node1.cluster.local
172.16.0.12 node2.cluster.local
172.16.0.13 node3.cluster.local
172.16.0.14 node4.cluster.local

Finally, open and edit the config.yml file:

timezone: "Europe/Madrid" # Your timezone arch: arm # The target architecture. Only tested with ARM, but might work for amd64 and so on. token: 506f2b.1a51ab3d42ed9d10 # A random value, as defined in kubeadm documentation. master: master.cluster.local # The cluster master IP or hostname cidr: 10.244.0.0/16 # The cidr, as Flannel expects to be reset: false # Invokes a kubeadm reset before installing (experimental)

If you followed all the steps and created the DNS entries in /etc/hosts, these defaults will be fine for you. Otherwise, please change the master IP/hostname.
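
By the way, if you want to generate your own token instead of reusing the example one, kubeadm tokens have the form <6 hex chars>.<16 hex chars>. A quick way to produce one (a sketch, assuming openssl is available):

# Prints something like 1a2b3c.4d5e6f7890abcdef
$ echo "$(openssl rand -hex 3).$(openssl rand -hex 8)"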

Setting up all the Pis

We are almost at the end. Now, execute the playbook that does all the Raspberry Pi base setup. It will take a while, but it is worth it. You can check in the repository all the actions this playbook performs:

$ ansible-playbook -k -i hosts setup.yml
SSH password: hypriot

PLAY [Setup Raspberry Pi Base system] ******************************************

TASK [setup] *******************************************************************
paramiko: The authenticity of host 'node2.kube.lan' can't be established.
The ssh-rsa key fingerprint is a807079fa091708edafd496539e00751.
Are you sure you want to continue connecting (yes/no)? yes
paramiko: The authenticity of host 'node1.kube.lan' can't be established.
The ssh-rsa key fingerprint is 31e1736f0b9bbcf7e507798acdc2c6a2.
Are you sure you want to continue connecting (yes/no)? yes
paramiko: The authenticity of host 'master.kube.lan' can't be established.
The ssh-rsa key fingerprint is 286f287c6cfc4cb51d0c134f1396cd70.
Are you sure you want to continue connecting (yes/no)? yes
paramiko: The authenticity of host 'node3.kube.lan' can't be established.
The ssh-rsa key fingerprint is ff8b0956fd8e7d7e1c291f1b0237b086.
Are you sure you want to continue connecting (yes/no)? yes
ok: [node2.kube.lan]
ok: [node1.kube.lan]
ok: [master.kube.lan]
ok: [node3.kube.lan]

TASK [base : Add apt-transport-https] ******************************************
ok: [node3.kube.lan]
ok: [node2.kube.lan]
ok: [node1.kube.lan]
ok: [master.kube.lan]

TASK [base : Add Hypriot Repo Key] *********************************************
ok: [node3.kube.lan]
ok: [master.kube.lan]
ok: [node2.kube.lan]
ok: [node1.kube.lan]

TASK [base : Add Hypriot Repo] *************************************************
changed: [node2.kube.lan]
changed: [node3.kube.lan]
changed: [node1.kube.lan]
changed: [master.kube.lan]

TASK [base : Add Kubernetes Repo Key] ******************************************
changed: [master.kube.lan]
changed: [node1.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]

TASK [base : Add Kubernetes Repo] **********************************************
changed: [master.kube.lan]
changed: [node3.kube.lan]
changed: [node2.kube.lan]
changed: [node1.kube.lan]

TASK [base : Update APT package cache] *****************************************
changed: [node1.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]
changed: [master.kube.lan]

TASK [base : Install Packages] *************************************************
changed: [node2.kube.lan] => (item=[u'kubelet', u'kubeadm', u'kubectl', u'kubernetes-cni'])
changed: [node1.kube.lan] => (item=[u'kubelet', u'kubeadm', u'kubectl', u'kubernetes-cni'])
changed: [master.kube.lan] => (item=[u'kubelet', u'kubeadm', u'kubectl', u'kubernetes-cni'])
changed: [node3.kube.lan] => (item=[u'kubelet', u'kubeadm', u'kubectl', u'kubernetes-cni'])

TASK [base : Set swap_file variable] *******************************************
ok: [master.kube.lan]
ok: [node1.kube.lan]
ok: [node2.kube.lan]
ok: [node3.kube.lan]

TASK [base : Check if swap file exists] ****************************************
ok: [node3.kube.lan]
ok: [node2.kube.lan]
ok: [master.kube.lan]
ok: [node1.kube.lan]

TASK [base : Create swap file] *************************************************
changed: [node1.kube.lan]
changed: [node3.kube.lan]
changed: [master.kube.lan]
changed: [node2.kube.lan]

TASK [base : Change swap file permissions] *************************************
changed: [node3.kube.lan]
changed: [node1.kube.lan]
changed: [node2.kube.lan]
changed: [master.kube.lan]

TASK [base : Format swap file] *************************************************
changed: [node1.kube.lan]
changed: [master.kube.lan]
changed: [node3.kube.lan]
changed: [node2.kube.lan]

TASK [base : Write swap entry in fstab] ****************************************
changed: [node1.kube.lan]
changed: [master.kube.lan]
changed: [node3.kube.lan]
changed: [node2.kube.lan]

TASK [base : Turn on swap] *****************************************************
changed: [node1.kube.lan]
changed: [master.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]

TASK [base : Set swappiness] ***************************************************
changed: [node1.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]
changed: [master.kube.lan]

TASK [base : Add cgroup for Memory limits to bootparams] ***********************
changed: [master.kube.lan]
changed: [node1.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]

TASK [base : Add overlay filesystem module] ************************************
changed: [node1.kube.lan]
changed: [master.kube.lan]
changed: [node3.kube.lan]
changed: [node2.kube.lan]

TASK [base : Load overlay module] **********************************************
ok: [node2.kube.lan]
ok: [node3.kube.lan]
ok: [master.kube.lan]
ok: [node1.kube.lan]

TASK [base : Setup profile] ****************************************************
changed: [node3.kube.lan]
changed: [master.kube.lan]
changed: [node1.kube.lan]
changed: [node2.kube.lan]

TASK [base : Set timezone variables] *******************************************
changed: [node1.kube.lan]
changed: [master.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]

TASK [base : update timezone] **************************************************
changed: [master.kube.lan]
changed: [node3.kube.lan]
changed: [node2.kube.lan]
changed: [node1.kube.lan]

TASK [base : Add hosts] ********************************************************
changed: [node3.kube.lan]
changed: [node2.kube.lan]
changed: [node1.kube.lan]
changed: [master.kube.lan]

TASK [base : stat] *************************************************************
ok: [node1.kube.lan]
ok: [node2.kube.lan]
ok: [master.kube.lan]
ok: [node3.kube.lan]

TASK [base : Copy hosts to DHCP setup] *****************************************
changed: [node1.kube.lan]
changed: [node3.kube.lan]
changed: [master.kube.lan]
changed: [node2.kube.lan]

TASK [base : Add DHCP hook to copy etc hosts] **********************************
changed: [node2.kube.lan]
changed: [node3.kube.lan]
changed: [master.kube.lan]
changed: [node1.kube.lan]

TASK [base : Set hostname] *****************************************************
changed: [node1.kube.lan]
changed: [master.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]

TASK [base : Restart hostname] *************************************************
changed: [master.kube.lan]
changed: [node1.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]

TASK [base : Check for ld-linux-armhf.so.3] ************************************
ok: [node1.kube.lan]
ok: [master.kube.lan]
ok: [node3.kube.lan]
ok: [node2.kube.lan]

TASK [base : Link ld-linux-armhf.so.3 --> ld-linux.so.3] ***********************
ok: [node3.kube.lan]
ok: [master.kube.lan]
ok: [node1.kube.lan]
ok: [node2.kube.lan]

TASK [base : stat] *************************************************************
ok: [master.kube.lan]
ok: [node2.kube.lan]
ok: [node1.kube.lan]
ok: [node3.kube.lan]

TASK [base : Set hostname in boot configuration] *******************************
changed: [node3.kube.lan]
changed: [master.kube.lan]
changed: [node2.kube.lan]
changed: [node1.kube.lan]

TASK [base : Add useDns=no to /etc/ssh/sshd_config] ****************************
changed: [node2.kube.lan]
changed: [node1.kube.lan]
changed: [node3.kube.lan]
changed: [master.kube.lan]

TASK [base : Restart hostname] *************************************************
changed: [master.kube.lan]
changed: [node3.kube.lan]
changed: [node2.kube.lan]
changed: [node1.kube.lan]

TASK [base : Configure WiFi] ***************************************************
skipping: [master.kube.lan]
skipping: [node1.kube.lan]
skipping: [node2.kube.lan]
skipping: [node3.kube.lan]

TASK [base : Switch off power management for WiFi] *****************************
skipping: [node1.kube.lan]
skipping: [master.kube.lan]
skipping: [node2.kube.lan]
skipping: [node3.kube.lan]

TASK [base : Add a group pi] ***************************************************
changed: [node2.kube.lan]
changed: [node3.kube.lan]
changed: [node1.kube.lan]
changed: [master.kube.lan]

TASK [base : Add a group docker] ***********************************************
ok: [master.kube.lan]
ok: [node1.kube.lan]
ok: [node2.kube.lan]
ok: [node3.kube.lan]

TASK [base : Add user pi to group docker] **************************************
changed: [master.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]
changed: [node1.kube.lan]

TASK [base : Add pi to to sudoers] *********************************************
changed: [master.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]
changed: [node1.kube.lan]

TASK [base : Verify ~/.ssh] ****************************************************
changed: [master.kube.lan]
changed: [node2.kube.lan]
changed: [node1.kube.lan]
changed: [node3.kube.lan]

TASK [base : Copy SSH Key] *****************************************************
changed: [node1.kube.lan]
changed: [master.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]

TASK [base : Add user pi to group docker] **************************************
ok: [master.kube.lan]
ok: [node3.kube.lan]
ok: [node2.kube.lan]
ok: [node1.kube.lan]

TASK [base : Verify ~/.ssh] ****************************************************
changed: [master.kube.lan]
changed: [node2.kube.lan]
changed: [node1.kube.lan]
changed: [node3.kube.lan]

TASK [base : Copy SSH Key] *****************************************************
changed: [node1.kube.lan]
changed: [master.kube.lan]
changed: [node2.kube.lan]
changed: [node3.kube.lan]

TASK [base : Add user pi to group docker] **************************************
ok: [master.kube.lan]
ok: [node3.kube.lan]
ok: [node2.kube.lan]
ok: [node1.kube.lan]

TASK [base : Set user password] ************************************************
skipping: [master.kube.lan]
skipping: [node1.kube.lan]
skipping: [node2.kube.lan]
skipping: [node3.kube.lan]

PLAY RECAP *********************************************************************
master.kube.lan            : ok=41   changed=29   unreachable=0    failed=0
node1.kube.lan             : ok=41   changed=29   unreachable=0    failed=0
node2.kube.lan             : ok=41   changed=29   unreachable=0    failed=0
node3.kube.lan             : ok=41   changed=29   unreachable=0    failed=0

Please note that at the time I ran this setup, node 4 was not ready yet. I later ran the whole setup again with all 5 nodes.

Creating the Kubernetes master

It has never been this easy:

$ ansible-playbook -i hosts master.yml

PLAY [Kubernetes master configuration] *****************************************

TASK [setup] *******************************************************************
ok: [master.kube.lan]

TASK [master : Reset Kubernetes installation] **********************************
skipping: [master.kube.lan]

TASK [master : Initialize Kubernetes cluster for ARM with flannel support] *****
changed: [master.kube.lan]

TASK [master : Install Networking Pods] ****************************************
changed: [master.kube.lan]

TASK [master : Download cluster configuration] *********************************
changed: [master.kube.lan]

PLAY RECAP *********************************************************************
master.kube.lan            : ok=5    changed=3    unreachable=0    failed=0

Internally, it performs a kubeadm init and installs Flannel, as described in the kubeadm documentation.
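
If you are curious, the two key tasks boil down to something like the following commands (a sketch based on the kubeadm getting-started guide of the time; the exact flags and the Flannel manifest the playbook uses may differ):

$ sudo kubeadm init --token=506f2b.1a51ab3d42ed9d10 --pod-network-cidr=10.244.0.0/16
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml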

Joining the nodes

Again, very easy:

$ ansible-playbook -i hosts nodes.yml

PLAY [Kubernetes nodes configuration] ******************************************

TASK [setup] *******************************************************************
ok: [node1.kube.lan]
ok: [node2.kube.lan]
ok: [node3.kube.lan]

TASK [node : Reset Kubernetes installation] ************************************
skipping: [node3.kube.lan]
skipping: [node1.kube.lan]
skipping: [node2.kube.lan]

TASK [node : Adding node to cluster master.kube.lan] ***************************
changed: [node2.kube.lan]
changed: [node3.kube.lan]
changed: [node1.kube.lan]

TASK [node : debug] ************************************************************
ok: [node1.kube.lan] => {
    "out.stdout_lines": [
        "[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.",
        "[preflight] Running pre-flight checks",
        "[tokens] Validating provided token",
        "[discovery] Created cluster info discovery client, requesting info from \"http://master.kube.lan:9898/cluster-info/v1/?token-id=c9eb81\"",
        "[discovery] Failed to request cluster info, will try again: [Get http://master.kube.lan:9898/cluster-info/v1/?token-id=c9eb81: EOF]",
        "[discovery] Cluster info object received, verifying signature using given token",
        "[discovery] Cluster info signature and contents are valid, will use API endpoints [https://172.16.0.10:6443]",
        "[bootstrap] Trying to connect to endpoint https://172.16.0.10:6443",
        "[bootstrap] Detected server version: v1.5.1",
        "[bootstrap] Successfully established connection with endpoint \"https://172.16.0.10:6443\"",
        "[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request",
        "[csr] Received signed certificate from the API server:",
        "Issuer: CN=kubernetes | Subject: CN=system:node:node1.kube.lan | CA: false",
        "Not before: 2016-12-28 13:17:00 +0000 UTC Not After: 2017-12-28 13:17:00 +0000 UTC",
        "[csr] Generating kubelet configuration",
        "[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/kubelet.conf\"",
        "",
        "Node join complete:",
        "* Certificate signing request sent to master and response",
        "  received.",
        "* Kubelet informed of new secure connection details.",
        "",
        "Run 'kubectl get nodes' on the master to see this machine join."
    ]
}
ok: [node2.kube.lan] => {
    "out.stdout_lines": [
        "[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.",
        "[preflight] Running pre-flight checks",
        "[tokens] Validating provided token",
        "[discovery] Created cluster info discovery client, requesting info from \"http://master.kube.lan:9898/cluster-info/v1/?token-id=c9eb81\"",
        "[discovery] Failed to request cluster info, will try again: [Get http://master.kube.lan:9898/cluster-info/v1/?token-id=c9eb81: EOF]",
        "[discovery] Cluster info object received, verifying signature using given token",
        "[discovery] Cluster info signature and contents are valid, will use API endpoints [https://172.16.0.10:6443]",
        "[bootstrap] Trying to connect to endpoint https://172.16.0.10:6443",
        "[bootstrap] Detected server version: v1.5.1",
        "[bootstrap] Successfully established connection with endpoint \"https://172.16.0.10:6443\"",
        "[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request",
        "[csr] Received signed certificate from the API server:",
        "Issuer: CN=kubernetes | Subject: CN=system:node:node2.kube.lan | CA: false",
        "Not before: 2016-12-28 13:17:00 +0000 UTC Not After: 2017-12-28 13:17:00 +0000 UTC",
        "[csr] Generating kubelet configuration",
        "[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/kubelet.conf\"",
        "",
        "Node join complete:",
        "* Certificate signing request sent to master and response",
        "  received.",
        "* Kubelet informed of new secure connection details.",
        "",
        "Run 'kubectl get nodes' on the master to see this machine join."
    ]
}
ok: [node3.kube.lan] => {
    "out.stdout_lines": [
        "[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.",
        "[preflight] Running pre-flight checks",
        "[tokens] Validating provided token",
        "[discovery] Created cluster info discovery client, requesting info from \"http://master.kube.lan:9898/cluster-info/v1/?token-id=c9eb81\"",
        "[discovery] Failed to request cluster info, will try again: [Get http://master.kube.lan:9898/cluster-info/v1/?token-id=c9eb81: EOF]",
        "[discovery] Cluster info object received, verifying signature using given token",
        "[discovery] Cluster info signature and contents are valid, will use API endpoints [https://172.16.0.10:6443]",
        "[bootstrap] Trying to connect to endpoint https://172.16.0.10:6443",
        "[bootstrap] Detected server version: v1.5.1",
        "[bootstrap] Successfully established connection with endpoint \"https://172.16.0.10:6443\"",
        "[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request",
        "[csr] Received signed certificate from the API server:",
        "Issuer: CN=kubernetes | Subject: CN=system:node:node3.kube.lan | CA: false",
        "Not before: 2016-12-28 13:17:00 +0000 UTC Not After: 2017-12-28 13:17:00 +0000 UTC",
        "[csr] Generating kubelet configuration",
        "[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/kubelet.conf\"",
        "",
        "Node join complete:",
        "* Certificate signing request sent to master and response",
        "  received.",
        "* Kubelet informed of new secure connection details.",
        "",
        "Run 'kubectl get nodes' on the master to see this machine join."
    ]
}

PLAY RECAP *********************************************************************
node1.kube.lan             : ok=3    changed=1    unreachable=0    failed=0
node2.kube.lan             : ok=3    changed=1    unreachable=0    failed=0
node3.kube.lan             : ok=3    changed=1    unreachable=0    failed=0

And that's all! Now we have a Kubernetes cluster up and running, with 1 master and X nodes (3, 4… or whatever; no matter the number, the script will join them all).
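
Under the hood, each node simply runs a kubeadm join against the master with the token from config.yml (again a sketch; the real task lives in the nodes playbook):

$ sudo kubeadm join --token=506f2b.1a51ab3d42ed9d10 172.16.0.10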

Check the nodes

We can now run a simple check with kubectl:

$ kubectl get nodes
NAME                   STATUS    AGE
master.cluster.local   Ready     1d
node1.cluster.local    Ready     1d
node2.cluster.local    Ready     1d
node3.cluster.local    Ready     1d
node4.cluster.local    Ready     1d

And, of course, deploy your first application!

$ kubectl run busybox1 --image hypriot/rpi-busybox-httpd --port 80
deployment "busybox1" created
$ kubectl expose deployment busybox1 --port=80 --target-port=80 --type=NodePort
service "busybox1" exposed
$ kubectl describe service busybox1
Name:              busybox1
Namespace:         default
Labels:            run=busybox1
Selector:          run=busybox1
Type:              NodePort
IP:                10.111.160.108
Port:              80/TCP
NodePort:          32651/TCP
Endpoints:         10.244.3.10:80
Session Affinity:  None
$ kubectl get pods -o wide
NAME                        READY     STATUS    RESTARTS   AGE       IP            NODE
busybox1-3730752487-hr3lg   1/1       Running   0          1m        10.244.3.10   node3.cluster.local

With this information, you can open your browser and navigate to http://NODE:NODEPORT (in my case, http://node3.cluster.local:32651).
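
You can also verify it from the command line; the hostname and NodePort below come from my deployment above, so yours will differ:

$ curl -s http://node3.cluster.local:32651 | head   # should print the demo page's HTML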

[Image: rpi-busybox-httpd]

We’re done with the recipe!

OPTIONAL: Connecting remotely using kubectl on your machine

This playbook also copies the kubectl config file to "./kubeconfig/clustername/etc/kubernetes". You will probably want to copy it to ${HOME}/.kube/config, which will make kubectl target this cluster by default.
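
For example (a sketch; I assume here the cluster directory is named cluster.local and the file is kubeadm's admin.conf — adjust the path to whatever the playbook actually downloaded for you):

$ mkdir -p ${HOME}/.kube
$ cp ./kubeconfig/cluster.local/etc/kubernetes/admin.conf ${HOME}/.kube/config
$ kubectl get nodes   # kubectl now targets the Pi cluster by default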

OPTIONAL: Using kubectl proxy to access Kubernetes Dashboard

The "master" playbook includes the "Dashboard" role, which automatically installs the Kubernetes Dashboard for you. If you want to access it, you'll need to have kubectl configured on your machine, and launch:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Now, open your browser and surf to http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#/workload?namespace=default

The Dashboard should appear without problems:

[Image: dashboard]

Thank you so much for your time.

Next station: cross-compiling .NET Core to run on Kubernetes on ARM 🙂