Here I'm sharing details on how to start your own in-house Kubernetes cluster using multiple Raspberry Pi computers. This can be beneficial for developing your own projects and gives you a fun tool to play around with.
I've built my cluster with these methods and am now using it as my personal server for hosting and for practicing cloud development.
Term | Details |
---|---|
OS | Raspbian Stretch Lite |
Cluster | multiple computers that are able to communicate with each other to accomplish a given task |
Slave Node | a single computer running inside your cluster (in this case a single Raspberry Pi) |
Master Node | pretty much the same as a typical node, but responsible for gluing your whole cluster together and managing its state |
Photos - https://photos.app.goo.gl/xIVB6uBk3uCoifJX2
- Download the Raspbian Lite operating system: https://downloads.raspberrypi.org/raspbian_lite/images/raspbian_lite-2017-12-01/
- Pick the `zip` file, there is no need to unzip it
- Download balenaEtcher: https://www.balena.io/etcher/
- For each of your Raspberry Pis (this will erase all data on the card!)
  - Insert its microSD card into your computer
  - Burn the Raspbian image onto the card using balenaEtcher
  - After burning the OS image onto the card, enter the card's directory and add an empty `ssh` file: `touch ssh` (see the sketch after this list)
    - this will allow us to connect to our nodes over SSH
  - After adding this file, place the card in your Raspberry Pi node and start it
- After you have prepared all of your nodes, continue to the next step
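A minimal sketch of the `ssh` step, assuming the freshly flashed card's boot partition is mounted at `/media/$USER/boot` (the mount point differs per OS, e.g. `/Volumes/boot` on macOS or a drive letter on Windows):

```bash
# Enable SSH on a freshly flashed card by creating an empty "ssh" file
# on the boot partition; Raspbian enables the SSH server when it sees it.
# ASSUMPTION: the card is mounted at /media/$USER/boot - adjust for your OS.
cd /media/$USER/boot
touch ssh
```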
- Open your shell
- If you are running Windows 10
  - Open PowerShell as Administrator
  - Run this command: `Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux`
  - Reboot
  - Install Ubuntu on Windows
  - Reboot
  - If you want to browse your files with Windows Explorer, they are probably located at `C:\Users\USER_NAME\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs`
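Once the shell is available, a quick sanity check plus installing `git` for the clone step below (a hedged sketch; on many installs `git` is already present):

```bash
# Confirm we are inside a Linux environment, then install git,
# which the repository-clone step below relies on.
uname -a              # should report a Linux kernel
sudo apt-get update
sudo apt-get install -y git
```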
- Move to some directory dedicated to this project: `cd ~`
- Clone this repository: `git clone https://github.com/patrykkrawczyk/RPiClusterCloud`
- Change into the repository directory: `cd RPiClusterCloud`
- Find out your router's IP address, usually it's 192.168.0.1
- Log in to your router's administration panel; you can usually find the credentials on a sticker on the back of your router
- Find out the IP addresses of your Raspberry Pi nodes on your network (a network-scan sketch follows below)
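If your router's panel doesn't list them clearly, here is a minimal sketch using `nmap` (assumptions: `nmap` is installed and your LAN subnet is 192.168.0.0/24; adjust to match your router):

```bash
# Ping-scan the local subnet and list the hosts that respond.
# Run with sudo so nmap can also report MAC vendors - Raspberry Pis
# typically show up with a "Raspberry Pi" vendor string.
# ASSUMPTION: your LAN is 192.168.0.0/24; change it to match your router.
sudo nmap -sn 192.168.0.0/24
```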
- Edit the `ip_addresses.txt` file in the `RPiClusterCloud` directory: `nano resources/ip_addresses.txt`
  - Write all of the node IP addresses on separate lines
  - Make sure to specify your Master Node IP address in the first line
  - Make sure that these addresses don't collide with the addresses specified in `node_addresses.txt`
Here we'll define the static IP addresses that should be assigned to each node after the cluster setup is complete.
- Edit the `node_addresses.txt` file in the `RPiClusterCloud` directory: `nano resources/node_addresses.txt`
  - Write all of the desired static node IP addresses on separate lines
  - Make sure to specify your Master Node's desired static IP address in the first line
  - Make sure that these addresses don't collide with the addresses specified in `ip_addresses.txt`
- Edit the `node_hostnames.txt` file in the `RPiClusterCloud` directory: `nano resources/node_hostnames.txt`
  - Write all of the desired node hostnames on separate lines
  - Make sure to specify your Master Node's desired hostname in the first line
- Make sure that `ip_addresses.txt`, `node_addresses.txt`, and `node_hostnames.txt` have the same number of lines
- Rows at the same position within these files refer to the same node

ip_addresses | node_addresses | node_hostnames |
---|---|---|
192.168.0.204 | 192.168.0.101 | rpinode01 |
192.168.0.205 | 192.168.0.102 | rpinode02 |
192.168.0.202 | 192.168.0.103 | rpinode03 |

Such a configuration would result in a 3-node cluster where
- rpinode01 is a Master Node with static IP 192.168.0.101
- rpinode02 is a Slave Node with static IP 192.168.0.102
- rpinode03 is a Slave Node with static IP 192.168.0.103
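A quick sanity check before running the setup, assuming you are in the repository root:

```bash
# All three files must have the same number of lines,
# since row N in each file describes the same node.
wc -l resources/ip_addresses.txt resources/node_addresses.txt resources/node_hostnames.txt
```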
- The script assumes the default Raspberry Pi credentials, which are `pi` / `raspberry`
- Execute the `setup_cluster.sh` script with your router IP address (`router_id`) as the argument and with `sudo` rights: `sudo ./scripts/setup_cluster.sh 192.168.0.1`
- If anything goes wrong, you can look at the `/var/log/setup_*.log` files on each node's file system (see the sketch below)
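For example, a minimal sketch of pulling those logs from a node over SSH (the `pi` user and the example IP come from earlier in this guide; substitute your own node address):

```bash
# Dump the setup logs from a node over SSH.
# ASSUMPTION: 192.168.0.101 is the Master Node's static IP from node_addresses.txt.
ssh pi@192.168.0.101 'cat /var/log/setup_*.log'
```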