Taking Amazon's Elastic Kubernetes Service for a Spin

With the introduction of Amazon's Elastic Kubernetes Service (EKS) at AWS re:Invent last year, AWS finally threw its hat into the ring in the booming space of managed Kubernetes services. In this blog post, we will cover the basic concepts of EKS, launch an EKS cluster, and deploy a multi-tier application on it.

What is Elastic Kubernetes Service (EKS)?

Kubernetes works on a master-worker architecture, with the master also referred to as the control plane. If the master goes down, it brings the entire cluster down with it, so ensuring high availability of the master is absolutely critical: it is a single point of failure. Keeping the master highly available while also managing all the worker nodes is a cumbersome task in itself, which is why organizations prefer a managed Kubernetes cluster; they can then focus on their most important task, running their applications, rather than operating the cluster. Other cloud providers like Google Cloud and Azure already had managed Kubernetes services, named GKE and AKS respectively. With EKS, Amazon has now rolled out its own managed Kubernetes offering to provide a seamless way to run Kubernetes workloads.

Key EKS concepts:

EKS takes full advantage of running on AWS: instead of building Kubernetes-specific features from scratch, it reuses and plugs in existing AWS services to achieve the same functionality. Here is a brief overview:

IAM integration: Amazon EKS integrates IAM authentication with Kubernetes RBAC (the role-based access control system native to Kubernetes) with the help of the Heptio Authenticator, a tool that uses AWS IAM credentials to authenticate to a Kubernetes cluster. We can directly attach an RBAC role to an IAM entity, which saves the pain of managing another set of credentials at the cluster level.

Amazon's Elastic Kubernetes Service
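To make the IAM-RBAC mapping concrete, here is a minimal sketch of granting an IAM user cluster access through the aws-auth ConfigMap (the same ConfigMap we revisit later when joining worker nodes). The account ID, user name, and RBAC group below are placeholders; in practice you would edit the existing ConfigMap rather than replace it, so that its mapRoles section is preserved.

    # Sketch only: maps a hypothetical IAM user to the built-in
    # system:masters RBAC group. Edit the live aws-auth ConfigMap in
    # practice so existing mapRoles entries are not overwritten.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapUsers: |
        - userarn: arn:aws:iam::111122223333:user/alice
          username: alice
          groups:
            - system:masters
    EOF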

Container Interface:  AWS has developed an open-source CNI plugin that takes advantage of the fact that multiple network interfaces can be attached to a single EC2 instance, and that each interface can have multiple secondary private IPs associated with it. These secondary IPs give pods running on EKS real IP addresses from the VPC CIDR pool. This improves latency for inter-pod communication, as traffic flows without any overlay network.

EKS Container Interface
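Once a cluster is up and kubectl is configured (which we do later in this post), the CNI plugin is easy to observe: it runs as the aws-node DaemonSet, and pod IPs visibly come from the VPC CIDR range.

    # The VPC CNI plugin runs as the 'aws-node' DaemonSet on each worker node.
    kubectl get daemonset aws-node -n kube-system

    # Pod IPs are secondary private IPs from the VPC CIDR pool, so the IP
    # column here shows VPC-routable addresses rather than overlay addresses.
    kubectl get pods --all-namespaces -o wide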

ELB Support:  We can use any of the AWS ELB offerings (Classic, Network, or Application Load Balancer) to route traffic to services running on the worker nodes.

Auto Scaling:  The number of worker nodes in the cluster can grow and shrink using the EC2 Auto Scaling service.

Route 53: With the help of the ExternalDNS project and AWS Route 53, we can manage the DNS entries for the load balancers that get created when we create an Ingress object or a Service of type LoadBalancer in our EKS cluster. This way the DNS names always stay in sync with the load balancers, and we don't have to give them separate attention.
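As a sketch, this is roughly what a LoadBalancer Service annotated for ExternalDNS looks like; the hostname and selector are placeholders, and ExternalDNS must already be running in the cluster with access to the Route 53 hosted zone.

    # ExternalDNS watches Services for this annotation and creates a matching
    # Route 53 record pointing at the ELB that AWS provisions for the Service.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: web
      annotations:
        external-dns.alpha.kubernetes.io/hostname: app.example.com  # placeholder domain
    spec:
      type: LoadBalancer
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 8080
    EOF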

Shared responsibility for the cluster: Responsibility for an EKS cluster is shared between AWS and the customer. AWS takes care of the most critical part, managing the control plane (the API server and the etcd database), while the customer manages the worker nodes. Amazon EKS automatically runs Kubernetes with three masters spread across three Availability Zones to protect against a single point of failure. Control plane nodes are monitored and replaced if they fail, and are patched and updated automatically. This ensures high availability of the cluster and makes it extremely simple to migrate existing workloads to EKS.

Cluster Shared Responsibility

Prerequisites for launching an EKS cluster:

1.  IAM role to be assumed by the cluster: Create an IAM role that allows EKS to manage clusters on your behalf. Choose EKS as the service that will assume this role, and attach the AWS managed policies ‘AmazonEKSClusterPolicy’ and ‘AmazonEKSServicePolicy’ to it.

IAM Role
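If you prefer the CLI to the console, the role can be created roughly as follows; ‘eksServiceRole’ is a placeholder name.

    # Trust policy that lets the EKS service assume the role.
    cat > eks-trust-policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "eks.amazonaws.com" },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    EOF

    # Create the role and attach the two AWS managed policies mentioned above.
    aws iam create-role --role-name eksServiceRole \
        --assume-role-policy-document file://eks-trust-policy.json
    aws iam attach-role-policy --role-name eksServiceRole \
        --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
    aws iam attach-role-policy --role-name eksServiceRole \
        --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy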

2.  VPC for the cluster:  We need to create the VPC where our cluster is going to reside, with subnets, internet gateways, and other components configured. We can use an existing VPC if we wish, create one using the CloudFormation template provided by AWS here, or use the Terraform script available here. The scripts take the CIDR block of the VPC and of three subnets as arguments, as sketched below.
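For reference, launching the CloudFormation stack from the CLI looks roughly like this. The template file name and the parameter names (VpcBlock, Subnet01Block, and so on) follow the sample template AWS published for EKS; verify them against the version you actually download, and treat the CIDR values as placeholders.

    # Sketch: create the EKS VPC stack from the downloaded sample template.
    aws cloudformation create-stack \
        --stack-name eks-vpc \
        --template-body file://amazon-eks-vpc-sample.yaml \
        --parameters \
            ParameterKey=VpcBlock,ParameterValue=192.168.0.0/16 \
            ParameterKey=Subnet01Block,ParameterValue=192.168.64.0/18 \
            ParameterKey=Subnet02Block,ParameterValue=192.168.128.0/18 \
            ParameterKey=Subnet03Block,ParameterValue=192.168.192.0/18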

Launching an EKS cluster:

1.  Using the web console: With the prerequisites in place, we can go to the EKS console and launch a cluster. When launching, we need to provide a name for the EKS cluster, choose the Kubernetes version to use, provide the IAM role we created in step one, and choose a VPC. Once we choose a VPC, we also need to select the subnets where we want our worker nodes to be launched (by default, all subnets in the VPC are selected). Finally, we need to provide a security group, which is applied to the elastic network interfaces (ENIs) that EKS creates to let the control plane communicate with the worker nodes.

NOTE: A couple of things to note here: the subnets must span at least two different Availability Zones, and the security group we provide is later updated when we create the worker node stack, so it is better not to reuse this security group with any other entity, or at least to be fully aware of the changes made to it.

Launching EKS Cluster

2. Using the AWS CLI:

CODE: https://gist.github.com/velotiotech/a3d666f4b7bdd909cf0db1a5feaa6992.js

CODE: https://gist.github.com/velotiotech/e62030cf661c9364d8b20b62e7f5080b.js

In the response, we see that the cluster is in the CREATING state. It will take a few minutes before it becomes available. We can check the status using the command below:

CODE: https://gist.github.com/velotiotech/ca7aec4c4a44ffe74a7169c55f0cd4a0.js

Configure kubectl for EKS:

In Kubernetes, we interact with the control plane by making requests to the API server, most commonly via the kubectl command-line utility. Now that our cluster is ready, we need to install kubectl.

1.  Install the kubectl binary

CODE: https://gist.github.com/velotiotech/627d53bf7d686dabb4a4749c38d03653.js

Give executable permission to the binary.

CODE: https://gist.github.com/velotiotech/b7558242e3e7001acdaf6fde2163a9de.js

Move the kubectl binary to a folder in your system’s $PATH.

CODE: https://gist.github.com/velotiotech/d4a6fb998f7a87c116e3d3e8d66ecfca.js

As discussed earlier, EKS uses the AWS IAM Authenticator for Kubernetes to enable IAM authentication for your cluster, so we need to download and install it as well.

2.  Install aws-iam-authenticator

CODE: https://gist.github.com/velotiotech/e8eda394aee078a166949fa279c24746.js

Give executable permission to the binary.

CODE: https://gist.github.com/velotiotech/f4b4821113c4ba7083a6bbc15b5db55b.js

Move the aws-iam-authenticator binary to a folder in your system’s $PATH.

CODE: https://gist.github.com/velotiotech/a538c25071e616180ad6844705e12129.js

3.  Create the kubeconfig file

First create the directory.

CODE: https://gist.github.com/velotiotech/95c03f7c144e20bfc6f879e1a609f550.js

Open a config file in the directory created above.

CODE: https://gist.github.com/velotiotech/6678b7fda8ae0fc1886077ba84c0bce0.js

Paste the following configuration into the file.

CODE: https://gist.github.com/velotiotech/c0526effa3197ab0aceb500f772228ce.js

Replace the values of server and certificate-authority-data with your cluster's endpoint and certificate, and also update the cluster name in the args section. You can get these values from the web console or by using the following command.

CODE: https://gist.github.com/velotiotech/94fada21d32ae55bd3a4e95706067e02.js
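For reference, the two values can also be pulled individually with describe-cluster queries; ‘demo’ is a placeholder cluster name.

    # Cluster API server endpoint (the 'server' field in the kubeconfig).
    aws eks describe-cluster --name demo \
        --query cluster.endpoint --output text

    # Base64-encoded CA bundle (the 'certificate-authority-data' field).
    aws eks describe-cluster --name demo \
        --query cluster.certificateAuthority.data --output text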

Save and exit.

Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look for your cluster configuration.

CODE: https://gist.github.com/velotiotech/36a6d11c8510278290724b8110b861ba.js

To verify that kubectl is now properly configured:

CODE: https://gist.github.com/velotiotech/a311209082a2c9d036c79617dffc3a80.js

Launch and configure worker nodes:

Now we need to launch worker nodes before we can start deploying apps. We can create the worker node stack using the CloudFormation template provided by AWS, available here, or the Terraform script available here. The template takes the following parameters (a sketch of the corresponding create-stack call follows the list):

  • ClusterName: Name of the Amazon EKS cluster we created earlier.
  • ClusterControlPlaneSecurityGroup: ID of the security group we used for the EKS cluster.
  • NodeGroupName: Name for the worker node Auto Scaling group.
  • NodeAutoScalingGroupMinSize: Minimum number of worker nodes that you always want in your cluster.
  • NodeAutoScalingGroupMaxSize: Maximum number of worker nodes that you want in your cluster.
  • NodeInstanceType: Instance type of the worker nodes you wish to launch.
  • NodeImageId: AWS provides Amazon EKS-optimized AMIs to be used for worker nodes. Currently EKS is available in only two AWS regions, Oregon and N. Virginia; the AMI IDs are ami-02415125ccd555295 and ami-048486555686d18a0 respectively.
  • KeyName: Name of the key you will use to SSH into the worker nodes.
  • VpcId: ID of the VPC that we created earlier.
  • Subnets: Subnets from the VPC we created earlier.
EKS Worker Nodes
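For reference, a create-stack call wired up with these parameters looks roughly like this; every name and ID below is a placeholder, except the AMI, which is the Oregon (us-west-2) image from the list above. The --capabilities flag is needed because the template creates an IAM role for the nodes.

    # Sketch: launch the worker node stack. Note the escaped commas in the
    # Subnets value -- the CLI shorthand syntax needs them for list values.
    aws cloudformation create-stack \
        --stack-name eks-worker-nodes \
        --template-body file://amazon-eks-nodegroup.yaml \
        --capabilities CAPABILITY_IAM \
        --parameters \
            ParameterKey=ClusterName,ParameterValue=demo \
            ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=sg-0123456789abcdef0 \
            ParameterKey=NodeGroupName,ParameterValue=demo-workers \
            ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1 \
            ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=3 \
            ParameterKey=NodeInstanceType,ParameterValue=t2.medium \
            ParameterKey=NodeImageId,ParameterValue=ami-02415125ccd555295 \
            ParameterKey=KeyName,ParameterValue=my-key \
            ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0 \
            ParameterKey=Subnets,ParameterValue='subnet-0aaa\,subnet-0bbb\,subnet-0ccc'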

To enable the worker nodes to join the cluster, we need to download, edit, and apply the AWS authenticator config map.
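Before downloading it, it helps to know the file's shape. The downloaded map looks roughly like the sketch below; only the rolearn value needs to change.

    # Sketch of the aws-auth ConfigMap (normally you edit the downloaded copy
    # rather than writing it by hand). {{EC2PrivateDNSName}} is a literal
    # template placeholder that the authenticator fills in per node.
    cat > aws-auth-cm.yaml <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: <ARN of the worker node instance role>
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes
    EOF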

Download the config map:

CODE: https://gist.github.com/velotiotech/2f6cfbbb7cbd43c80ed8bb70e9d5da43.js

Open it in an editor.

CODE: https://gist.github.com/velotiotech/f3cb00d7a17a4f446dbe76e56985287d.js

Edit the value of rolearn with the ARN of your worker nodes' instance role. This value is available in the output of the scripts you ran. Save the change and then apply it:

CODE: https://gist.github.com/velotiotech/7690fece4b446c824429deee83300009.js

Now you can check whether the nodes have joined the cluster:

CODE: https://gist.github.com/velotiotech/bb8d37260cb405dc43cb82d7da05ccae.js

Deploying an application:

As our cluster is now fully ready, we can start deploying applications on it. We will deploy a simple books API application that connects to a MongoDB database and allows users to store, list, and delete book information.

1. MongoDB Deployment YAML

CODE: https://gist.github.com/velotiotech/e0781a86a408b584b6a684e6919ad799.js

2. Test Application Deployment YAML

CODE: https://gist.github.com/velotiotech/893c5f191d1e0d58001c134276e5a82e.js

3. MongoDB Service YAML

CODE: https://gist.github.com/velotiotech/d59a4a09dd6cc5dbf068359fd33a01fe.js

4. Test Application Service YAML

CODE: https://gist.github.com/velotiotech/2c424540ba7b89a7f60b166bc4d30fc1.js

Services

CODE: https://gist.github.com/velotiotech/1062531b4498acbd0bfaf2730c09b594.js

Deployments

CODE: https://gist.github.com/velotiotech/c9d1f06697f4a715ffe41d8e77dfe93d.js

In the EXTERNAL-IP column of the test-service, we see the DNS name of a load balancer. We can now access the application from outside the cluster using this DNS name.

To store data:

CODE: https://gist.github.com/velotiotech/7c3aae6f8015035a091794a43677aa58.js

To get data:

CODE: https://gist.github.com/velotiotech/d15c811ea85845e7e6b4bd7e3d41fab0.js

We can also put the URL used in the curl command above directly into our browser, and we will get the same response.

Deployment on EKS

Now our application is deployed on EKS and can be accessed by users.

Comparison between GKE, ECS, and EKS:

Cluster creation: Creating a GKE or ECS cluster is much simpler than creating an EKS cluster, with GKE being the simplest of the three.

Cost: With both GKE and ECS, we pay only for the infrastructure that is visible to us (servers, volumes, ELBs, etc.); there is no charge for master nodes or other cluster management services. With EKS, however, there is a charge of $0.20 per hour for the control plane, which works out to roughly $144 per cluster per month.

Add-ons: GKE provides the option of using Calico as the network plugin, which helps in defining network policies for controlling inter-pod communication (by default, all pods in Kubernetes can communicate with each other).
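For illustration, a minimal default-deny ingress policy of the kind Calico can enforce looks like this; the empty podSelector matches every pod in the namespace.

    # With no ingress rules listed, this policy denies all inbound pod traffic
    # in the namespace; later policies can then whitelist specific flows.
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
    spec:
      podSelector: {}
      policyTypes:
        - Ingress
    EOF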

Serverless: An ECS cluster can be created using Fargate, the containers-as-a-service (CaaS) offering from AWS. EKS is also expected to support Fargate soon.

In terms of availability and scalability, all three services are on par with each other.

Conclusion:

In this blog post, we learned the basic concepts of EKS, launched our own EKS cluster, and deployed an application on it. EKS is a much-awaited service from AWS, especially for folks who were already running Kubernetes workloads on AWS, as they can now easily migrate to EKS and get a fully managed Kubernetes control plane. EKS is expected to be adopted by many organizations in the near future.
