Ready to Experiment on a local EKS Instance in Minutes


Do you like experiments? I like experiments, and I like them a lot, since they are a necessity to spark innovation. Bringing what is on your mind to reality in minutes is important if your brain works like mine: a thousand new ideas per second and a buffer memory that empties itself after a few minutes.

Of course you can always write down those ideas and act on them later. Right? But when exactly is this later? Don’t we all have tons of ideas written down and buried in the abysses of our cloud storage, local hard drives and even towers of paper? Let me tell you when the time comes for all those dusty diamonds.

The time will never come.

And why is that? Because time is the most important resource we have¹.

Experimenting with EKS

We are living in a time where Kubernetes is the de facto standard for clustered container orchestration. Amazon did us the favor of providing a managed solution for K8S, namely EKS, which takes care of installing and maintaining a K8S instance for us. The next step in this chain of making a DevOps engineer’s life easier would be a single-command setup that provides a running EKS cluster in minutes, so we can start experimenting right away.

On December 1, 2020, Canonical and Amazon did it: they released the EKS snap, which deploys a complete, running EKS cluster node on any Ubuntu distribution that supports snaps.

This guide will show you how to install and configure an EKS cluster and kubectl so you can start experimenting with it.

How to set up an EKS cluster in minutes

For a cluster you need at least three nodes² on which you will deploy EKS. If you want to follow the guide with only your laptop, I recommend setting up three virtual machines with Vagrant. It will enable you to recreate the nodes in seconds if anything goes wrong.
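If you go the Vagrant route, a minimal Vagrantfile for three such nodes could look like the sketch below. The box name, IP range and resource sizes are my own assumptions, not part of the EKS snap documentation; any snap-capable Ubuntu box with enough memory will do.

```ruby
# Sketch of a three-node lab setup; box, IPs and sizing are assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  (1..3).each do |i|
    config.vm.define "eks-node-#{i}" do |node|
      node.vm.hostname = "eks-node-#{i}"
      # Private network so the nodes can reach each other for the join step.
      node.vm.network "private_network", ip: "192.168.56.1#{i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 4096
        vb.cpus = 2
      end
    end
  end
end
```

A plain "vagrant up" brings all three nodes online, and "vagrant destroy -f && vagrant up" recreates them from scratch if an experiment goes sideways.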

Install the EKS cluster

On each of the three nodes, install EKS as the root user. The classic option ensures that the snap has the necessary permissions to access all resources (no confinement), and the edge option is necessary since the eks snap is currently not in the stable channel. This might take a few minutes, so go grab a coffee if you want.

# snap install eks --classic --edge

Afterwards, start the cluster on every node and check that it is running.

# eks start
# eks status
eks is running
high-availability: no
datastore master nodes: <this node's ip>:<port>
datastore standby nodes: none

Join the nodes to the cluster

Randomly select one of the three nodes. This will be your primary node. The other two nodes will be your secondary nodes.

On the primary node run “eks add-node”. This command will print multiple “eks join” commands. You need to select and copy the one with an IP address that is reachable from the secondary node you want to add. If you are unsure, use ping from the secondary node on every suggested IP.

# eks add-node
From the node you wish to join to this cluster, run the following:
eks join <default ip>:<port>/<token>
If the node you are adding is not reachable through the default interface you can use one of the following:
eks join <ip 1>:<port>/<token>
eks join <ip 2>:<port>/<token>
eks join <ip 3>:<port>/<token>

Paste and run the selected join command in the secondary node’s prompt and wait until the join procedure is over (this can take longer than the snap install).

# eks join <primary ip>:<port>/<token>
Contacting cluster at <primary ip>
Waiting for this node to finish joining the cluster. ..

Afterwards repeat the complete add-node/join step for your other secondary node. If you have any problems joining a node to the cluster, make sure you run the add-node and join commands within a short time interval, since the join token is only valid for a limited time.

Congratulations, you have just set up your first fully working EKS cluster.

Configure kubectl to work with the cluster

As a last step you will configure kubectl to work with your cluster so you can start deploying your containers right away.

To get an overview of the configuration for kubectl, run “eks config” on your primary node.

# eks config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64-encoded certificate>
    server: https://<primary node ip>:<port>
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: Y2R3bTRzUDdsRlhzY0pnbFJ2VnU2UVpmR21jYWJ5R2szRGI2TkpiakpOTT0K
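If you later want to script the kubectl configuration, you can extract the token from this output with a bit of shell. Below is a minimal sketch that stubs the “eks config” output with a heredoc; the awk pattern is an assumption about the YAML layout shown above.

```shell
# Sketch: pull the admin token out of the "eks config" output.
# The output is stubbed with a heredoc here; on a real node you would
# run:  eks config | awk '$1 == "token:" {print $2}'
TOKEN=$(awk '$1 == "token:" {print $2}' <<'EOF'
apiVersion: v1
kind: Config
users:
- name: admin
  user:
    token: Y2R3bTRzUDdsRlhzY0pnbFJ2VnU2UVpmR21jYWJ5R2szRGI2TkpiakpOTT0K
EOF
)
echo "$TOKEN"
```

The same pattern works for the server line, so the whole kubectl setup below can be automated on the primary node.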

The server parameter from this output is used to configure the cluster in kubectl. The insecure-skip-tls-verify option is necessary to circumvent certificate errors, since kubectl does not know the cluster’s certificate authority.

By the way, a disclaimer for disabling certificate checks: this is fine for a local experiment cluster, but never do it in a production setup.

# kubectl config set-cluster experimental \
--server=https://<server from eks config> \
--insecure-skip-tls-verify=true
The token parameter is used to configure the admin user, with which kubectl has permission to do anything on the cluster.

# kubectl config set-credentials exp-admin \
--token=<token from eks config>
Finally, connect both cluster and user into a context and activate it.

# kubectl config set-context experimental \
--cluster=experimental \
--user=exp-admin
# kubectl config use-context experimental
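For reference, the commands above result in a kubeconfig entry along these lines. This is a sketch: the exact layout may differ slightly between kubectl versions, and the placeholders stand in for your actual values.

```yaml
# Sketch of the resulting ~/.kube/config entry (placeholders, not real values).
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://<server from eks config>
  name: experimental
contexts:
- context:
    cluster: experimental
    user: exp-admin
  name: experimental
current-context: experimental
users:
- name: exp-admin
  user:
    token: <token from eks config>
```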

You can check the status of the cluster with “kubectl cluster-info”.

# kubectl cluster-info
Kubernetes master is running at https://<server address>
CoreDNS is running at https://<server address>/<service proxy path>
Metrics-server is running at https://<server address>/<service proxy path>
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
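If you want a quick smoke test before diving into your own experiments, you can deploy something small and watch it come up. The deployment name and image here are arbitrary choices of mine, not part of the EKS setup.

```shell
# Quick smoke test against the freshly configured cluster.
kubectl create deployment hello-eks --image=nginx
kubectl rollout status deployment/hello-eks   # waits until the pod is ready
kubectl get pods -o wide                      # shows which node the pod landed on
kubectl delete deployment hello-eks           # clean up afterwards
```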

That’s it. You are now able to experiment with your own EKS cluster.

¹ According to our current scientific knowledge and our interpretation of it, time always moves in the same direction and thus cannot be stored or regenerated, making it the most valuable resource.
² One node is not a cluster. Two nodes form a cluster that is not fail-safe against split-brain and thus is also not a cluster.
