Ready to Experiment on a local EKS Instance in Minutes

Photo by Agê Barros on Unsplash

Do you like experiments? I do, and a lot, since they are a necessity to spark innovation. Bringing what is on your mind to reality in minutes is important if your brain works like mine: a thousand new ideas per second and a buffer memory that empties itself after a few minutes.

Of course you can always write those ideas down and act on them later. Right? But when exactly is this later? Don’t we all have tons of ideas written down and buried in the abysses of our cloud storage, local hard drives and even towers of paper? I’ll tell you when the time comes for all those dusty diamonds.

The time will never come.

And why is that? Because time is the most important resource we have¹.

Experimenting with EKS

We are living in a time when Kubernetes is the de facto standard for clustered container orchestration. Amazon did us the favor of providing a managed solution for K8S, namely EKS, which takes care of installing and maintaining a K8S instance for us. The next step in this chain of making a DevOps engineer’s life easier would be a single-command setup that provides a running EKS cluster in minutes so we can start experimenting right away.

On December 1, 2020, Canonical and Amazon did it. They released the EKS snap, which deploys a complete, running EKS cluster node on any Ubuntu distribution that supports snaps.

This guide will show you how to install and configure an EKS cluster and kubectl so you can start experimenting with it.

How to set up an EKS cluster in minutes

For a cluster you need at least three nodes² on which you will deploy EKS. If you want to follow the guide with only your laptop, I recommend setting up three virtual machines with Vagrant. It will enable you to recreate the nodes in seconds if anything goes wrong.
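
If you go the Vagrant route, a minimal Vagrantfile sketch could look like the following. The box name generic/ubuntu2004 and the node names are assumptions on my part; use any snap-capable Ubuntu box that works with your provider and give each VM enough memory for Kubernetes.

# cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  # Any snap-capable Ubuntu box will do; generic/ubuntu2004 is only an example
  config.vm.box = "generic/ubuntu2004"
  (1..3).each do |i|
    config.vm.define "eks-node-#{i}" do |node|
      node.vm.hostname = "eks-node-#{i}"
    end
  end
end
EOF
# vagrant up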

On each of the three nodes, install EKS as the root user. The classic option ensures that the snap has the necessary permissions to access all resources (no confinement), and the edge option is necessary since the eks snap is currently not available in the stable channel. This might take a few minutes, so go grab a coffee if you want.

# snap install eks --classic --edge
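
If you want to confirm that the snap really came from the edge channel, snap can tell you (the exact output will differ on your machines):

# snap list eks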

Afterwards, start EKS on every node and check that it is running.

# eks start
# eks status
eks is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
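
At this point every node is still a standalone single-node cluster. Assuming the EKS snap bundles kubectl the same way MicroK8s does, you can peek inside a node like this:

# eks kubectl get nodes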

Randomly select one of the three nodes. This will be your primary node. The other two nodes will be your secondary nodes.

On the primary node, run “eks add-node”. This command will provide you with multiple “eks join” commands. Select and copy the one with an IP address that is reachable from the secondary node you want to add. If you are unsure, ping every suggested IP from that secondary node, as shown after the output below.

# eks add-node
From the node you wish to join to this cluster, run the following:
eks join 192.168.121.143:25000/b109c7b9c935ff34c694f46eb2e868b1
If the node you are adding is not reachable through the default interface you can use one of the following:
eks join 192.168.121.143:25000/b109c7b9c935ff34c694f46eb2e868b1
eks join 172.28.128.184:25000/b109c7b9c935ff34c694f46eb2e868b1
eks join 10.1.5.0:25000/b109c7b9c935ff34c694f46eb2e868b1
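
If you are unsure which of the listed addresses to use, a quick ping from the secondary node settles it. The address below is simply the first one from the example output above:

# ping -c 3 192.168.121.143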

Paste and run the selected join command in the secondary node’s prompt and wait until the join procedure finishes (this can take longer than the snap install).

# eks join 192.168.121.143:25000/b109c7b9c935ff34c694f46eb2e868b1
Contacting cluster at 192.168.121.143
Waiting for this node to finish joining the cluster. ..

Afterwards, repeat the complete join step for your other secondary node: run “eks add-node” on the primary again and execute the new join command on the remaining secondary. If you have problems joining a node to the cluster, make sure you run the add-node and join commands within a short time of each other.
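
Once both secondaries have joined, it is worth running the status command again on any node. Assuming the EKS snap behaves like MicroK8s here, the high-availability line should now read yes and all three datastore nodes should be listed:

# eks status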

Congratulations, you have just set up your first fully working EKS cluster.

As a last step you will configure kubectl to work with your cluster so you can start deploying your containers right away.

To get an overview of the configuration values kubectl needs, run “eks config” on your primary node.

# eks config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBVENDQWVtZ0F3SUJBZ0lKQVBFV3hKU1BLTGFHTUEwR0NTc...hNllSOUUra0RabDBKS0crRGdJQ3BjMW9TanUKVG9GZjBmST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.121.142:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: Y2R3bTRzUDdsRlhzY0pnbFJ2VnU2UVpmR21jYWJ5R2szRGI2TkpiakpOTT0K
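
If you do not want to copy the values by hand, you can extract the server URL and the token with a little shell plumbing. This is purely a convenience sketch with variable names of my own choosing; it assumes you run kubectl on the same machine on which “eks config” works, so you can substitute $SERVER and $TOKEN in the kubectl commands below:

# SERVER=$(eks config | awk '/server:/ {print $2}')
# TOKEN=$(eks config | awk '/token:/ {print $2}')
# echo $SERVER $TOKEN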

The server parameter is used to configure the cluster in kubectl. The insecure-skip-tls-verify option is necessary to circumvent certificate errors.

By the way a disclaimer for disabling certificate checks:
NEVER DO THIS IN PRODUCTION!

# kubectl config set-cluster experimental \
--server=https://192.168.121.45:16443 \
--insecure-skip-tls-verify=true
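
If you would rather not skip TLS verification even for an experiment, you can hand the cluster’s CA certificate to kubectl instead. A possible sketch, assuming you extract the certificate-authority-data from the “eks config” output on the primary node; the file name ca.crt is my own choice:

# eks config | awk '/certificate-authority-data:/ {print $2}' | base64 -d > ca.crt
# kubectl config set-cluster experimental \
--server=https://192.168.121.45:16443 \
--certificate-authority=ca.crt \
--embed-certs=true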

The token parameter is used to configure the admin user, which gives kubectl permission to do anything on the cluster.

# kubectl config set-credentials exp-admin \
--token=cWZjZUV6QVpEaWZlbzBoUzQxdzJocnZiTHhESStwLzNzbk5aQ09BWnVzYz0K

Finally, combine the cluster and the user into a context and activate it.

# kubectl config set-context experimental \
--user=exp-admin \
--cluster=experimental
# kubectl config use-context experimental
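
Before talking to the cluster you can verify that the new context is actually active:

# kubectl config get-contexts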

You can check the status of the cluster with “kubectl cluster-info”.

# kubectl cluster-info
Kubernetes master is running at https://192.168.121.45:16443
CoreDNS is running at https://192.168.121.45:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://192.168.121.45:16443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
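
As a quick smoke test you can list the nodes and launch a throwaway deployment. The deployment name nginx-test is only an example:

# kubectl get nodes
# kubectl create deployment nginx-test --image=nginx
# kubectl get pods
# kubectl delete deployment nginx-test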

That’s it. You are now able to experiment with your own EKS cluster.

¹ According to our current scientific knowledge and our interpretation of that knowledge, time always moves in the same direction and thus cannot be stored or regenerated, making it the most valuable resource.
² One node is not a cluster. Two nodes form a cluster that is not safe against split-brain and thus is not really a cluster either.
