Setting up AWS EKS on Fargate
At AWS re:Invent this year, the AWS container engineering team released a feature I'd been eagerly awaiting: AWS EKS on Fargate.
EKS on Fargate is an expansion of AWS's managed Kubernetes service that removes the need to spin up and manage your own EC2 instances and autoscaling groups. This is a big deal for a couple of reasons:
- Fargate eliminates the need for customers to create or manage EC2 instances for their Amazon EKS clusters. Customers no longer have to worry about patching, scaling, or securing a cluster of EC2 instances to run Kubernetes applications in the cloud.
- With Fargate, customers define and pay for resources at the pod level, which lets teams understand the true cost of their workloads per pod rather than per VM.
I spent some time this weekend moving this blog to a Fargate-powered EKS cluster, and I'll do my best to describe the steps for spinning it up, along with some common gotchas.
This post doesn't go over what Kubernetes is, how it works, or the basics of deploying a website to a Kubernetes cluster. If you want to learn more about Kubernetes, I recommend Kubernetes The Hard Way by Kelsey Hightower.
Getting Started
Set up your AWS account, and ensure that you have the awscli tool installed.
1. Configure your AWS credentials. The aws cli tool requires that your local credentials be configured in your environment.
$ aws configure
AWS Access Key ID [None]: YOUR_ACCESS_KEY_ID
AWS Secret Access Key [None]: YOUR_SECRET_ACCESS_KEY
Default region name [None]: us-east-2
Default output format [None]: json
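You can confirm that your credentials are wired up correctly before going any further; this prints the account ID and ARN behind the configured keys:
$ aws sts get-caller-identity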
Installing eksctl
eksctl is a simple CLI tool for creating EKS clusters. It uses AWS's native CloudFormation service under the hood and will create a basic EKS cluster with a single command.
2. Install eksctl
On Linux:
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
On macOS
$ brew tap weaveworks/tap
$ brew install weaveworks/tap/eksctl
On Windows, using Chocolatey
$ chocolatey install eksctl
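Whichever route you took, verify that the install worked:
$ eksctl version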
3. Create your Kubernetes cluster
$ eksctl create cluster --name kube-example-blog --region us-east-2 --fargate
This command will spin up an EKS cluster named kube-example-blog and generate a Fargate profile for that cluster. It will also set your local kubectl context to the cluster it has just created. (The rest of this post assumes us-east-2; use whichever region you configured above.)
Fargate Profiles
A Fargate profile specifies which Kubernetes pods should run on Fargate and which subnets those pods should run in, and it provides an IAM pod execution role that is used to pull container images and perform other actions on our behalf.
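If you later want pods in additional namespaces to land on Fargate, you can add another profile with eksctl. This is a minimal sketch; the profile and namespace names here are just placeholders:
$ eksctl create fargateprofile \
    --cluster kube-example-blog \
    --name fp-blog \
    --namespace blog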
Deploying Your Blog
After a few minutes, a working Kubernetes cluster should be up and running in your AWS environment. You should be able to see the nodes that Fargate has spun up using the command kubectl get nodes.
Your output should look something like this:
fargate-ip-192-168-107-10.us-east-2.compute.internal Ready <none> 27h v1.14.8-eks
fargate-ip-192-168-108-227.us-east-2.compute.internal Ready <none> 27h v1.14.8-eks
fargate-ip-192-168-112-0.us-east-2.compute.internal Ready <none> 18h v1.14.8-eks
fargate-ip-192-168-151-157.us-east-2.compute.internal Ready <none> 26h v1.14.8-eks
fargate-ip-192-168-165-19.us-east-2.compute.internal Ready <none> 11d v1.14.8-eks
fargate-ip-192-168-165-32.us-east-2.compute.internal Ready <none> 27h v1.14.8-eks
fargate-ip-192-168-170-230.us-east-2.compute.internal Ready <none> 27h v1.14.8-eks
Now, we'll need to deploy something to our cluster. We'll go ahead and deploy a sample version of this blog today.
4. Clone the repository found here
$ git clone git@github.com:nas887/kube_fargate_blog_example.git
5. cd into the directory and build the docker image.
$ docker build -t kube-example-blog .
6. Test that the image was built correctly.
$ docker run -p 8080:80 kube-example-blog:latest
This command will start the container on your machine and map port 80 in the container to port 8080 on your machine. You should be able to see the website when you navigate to localhost:8080.
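As a quick sanity check from the command line, nginx should answer with a 200:
$ curl -I http://localhost:8080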
Tagging and Pushing the docker image to a remote repository
After you've successfully built the docker image and verified that nginx is serving the index.html file from your public folder, it's time to tag and push the image to a remote repository.
For today's purposes, we'll go ahead and use AWS's Elastic Container Registry.
7. Create an Elastic Container Registry (ECR) Repository
- Log in to your AWS console and navigate to Elastic Container Registry.
- Make sure that you're signed in to the same AWS region that is configured in your local aws configuration.
- Click Get Started.
- Name your repository kube-example-blog.
8. Push your docker image to the new repository.
Retrieve an authentication token for your registry and log your docker client in.
$ $(aws ecr get-login --no-include-email --region YOUR_AWS_REGION)
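Note: if you're on AWS CLI v2, the get-login command has been removed; the equivalent is roughly the following (substitute your own registry URI):
$ aws ecr get-login-password --region YOUR_AWS_REGION | docker login --username AWS --password-stdin XXXXXXXXXX.dkr.ecr.YOUR_AWS_REGION.amazonaws.com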
Build your docker image
$ docker build -t kube-example-blog .
Tag your docker image
$ docker tag kube-example-blog:latest XXXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/kube-example-blog:latest
Push your docker image
$ docker push XXXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/kube-example-blog:latest
Your image will now be available in your ECR repository.
Build your Kubernetes Infrastructure.
9. Create the files you need to deploy your docker image to Kubernetes.
Your Kubernetes deployment will consist of a Service object, a Deployment object, and an Ingress object.
Make a .deploy folder
$ mkdir .deploy
cd into your .deploy folder and create a deployment.yaml file
$ cd .deploy
$ touch deployment.yaml
Build The Service Object
Add the following yaml to your deployment.yaml file.
apiVersion: v1
kind: Service
metadata:
  name: "kube-example-blog-service"
  namespace: "default"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "kube-example-blog"
---
This yaml generates a Service object that listens on port 80 and routes traffic to port 80 on the kube-example-blog pods. The Application Load Balancer we create later will send requests to this service.
Build The Deployment Object
Add the following yaml directly underneath the Service object.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "kube-example-blog"
  namespace: "default"
spec:
  selector:
    matchLabels:
      app: "kube-example-blog"
  replicas: 1
  template:
    metadata:
      labels:
        app: "kube-example-blog"
    spec:
      containers:
        - image: 'XXXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/kube-example-blog:latest'
          imagePullPolicy: Always
          name: kube-example-blog
          ports:
            - containerPort: 80
---
This yaml sets up a Kubernetes Deployment object pointing at the image we pushed above, keeping a minimum of one replica running. Replace the image URI with the one for your own ECR repository.
Build The Ingress Object
Add the following yaml directly underneath the deployment object.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "kube-example-blog-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app: kube-example-blog
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "kube-example-blog-service"
              servicePort: 80
This yaml sets up a Kubernetes Ingress object, which allows external traffic (typically HTTP) to reach services inside a Kubernetes cluster. The yaml here exposes port 80 through an Application Load Balancer (which we'll create in the following steps) and allows the load balancer to reach kube-example-blog-service. Note the target-type: ip annotation: on Fargate there are no EC2 instances to register as targets, so the ALB has to target pod IPs directly.
Setting up the application load balancer.
We're almost ready to deploy our example blog to our Fargate-powered cluster. The last thing we need to do is set up an Application Load Balancer (ALB) in AWS, which will expose our service to the public internet.
NOTE: An ALB isn't strictly necessary for non-Fargate Kubernetes clusters, which have other native mechanisms for exposing services to the public internet.
Tagging Subnets
When we ran the eksctl create cluster command above, eksctl generated a series of public and private subnets for our Kubernetes cluster. The subnets were tagged with the following set of tags:
| description | key | value |
| --- | --- | --- |
| All subnets in your VPC should be tagged accordingly so that Kubernetes can discover them. | kubernetes.io/cluster/CLUSTER_NAME | shared |
| Public subnets in your VPC should be tagged accordingly so that Kubernetes knows to use only those subnets for external load balancers. | kubernetes.io/role/elb | 1 |
| Private subnets in your VPC should be tagged accordingly so that Kubernetes knows that it can use them for internal load balancers. | kubernetes.io/role/internal-elb | 1 |
The ALB ingress controller will use these tags to determine which subnets to associate itself to.
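eksctl normally applies these tags for you, but if you need to check or add one by hand, something like this works (the subnet ID here is a placeholder):
$ aws ec2 describe-subnets --subnet-ids subnet-0123456789abcdef0 --query 'Subnets[].Tags'
$ aws ec2 create-tags --resources subnet-0123456789abcdef0 --tags Key=kubernetes.io/role/elb,Value=1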
10. Create an IAM policy
Create an IAM policy called ALBIngressControllerIAMPolicy for your worker node instance profile that allows the ALB Ingress Controller to make calls to AWS APIs on your behalf.
Create a file called alb-ingress-controller-iam-policy.json
Copy the following json to that file
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"acm:DescribeCertificate",
"acm:ListCertificates",
"acm:GetCertificate"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateSecurityGroup",
"ec2:CreateTags",
"ec2:DeleteTags",
"ec2:DeleteSecurityGroup",
"ec2:DescribeAccountAttributes",
"ec2:DescribeAddresses",
"ec2:DescribeInstances",
"ec2:DescribeInstanceStatus",
"ec2:DescribeInternetGateways",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSubnets",
"ec2:DescribeTags",
"ec2:DescribeVpcs",
"ec2:ModifyInstanceAttribute",
"ec2:ModifyNetworkInterfaceAttribute",
"ec2:RevokeSecurityGroupIngress"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:AddListenerCertificates",
"elasticloadbalancing:AddTags",
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:CreateRule",
"elasticloadbalancing:CreateTargetGroup",
"elasticloadbalancing:DeleteListener",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteRule",
"elasticloadbalancing:DeleteTargetGroup",
"elasticloadbalancing:DeregisterTargets",
"elasticloadbalancing:DescribeListenerCertificates",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DescribeRules",
"elasticloadbalancing:DescribeSSLPolicies",
"elasticloadbalancing:DescribeTags",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeTargetGroupAttributes",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:ModifyListener",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:ModifyRule",
"elasticloadbalancing:ModifyTargetGroup",
"elasticloadbalancing:ModifyTargetGroupAttributes",
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:RemoveListenerCertificates",
"elasticloadbalancing:RemoveTags",
"elasticloadbalancing:SetIpAddressType",
"elasticloadbalancing:SetSecurityGroups",
"elasticloadbalancing:SetSubnets",
"elasticloadbalancing:SetWebACL"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole",
"iam:GetServerCertificate",
"iam:ListServerCertificates"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"cognito-idp:DescribeUserPoolClient"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"waf-regional:GetWebACLForResource",
"waf-regional:GetWebACL",
"waf-regional:AssociateWebACL",
"waf-regional:DisassociateWebACL"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"tag:GetResources",
"tag:TagResources"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"waf:GetWebACL"
],
"Resource": "*"
}
]
}
Create the policy
$ aws iam create-policy \
--policy-name ALBIngressControllerIAMPolicy \
--policy-document file://alb-ingress-controller-iam-policy.json
Note the policy ARN that is returned.
Retrieve the IAM role name for your worker nodes.
$ kubectl -n kube-system describe configmap aws-auth
Output:
Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
mapRoles:
----
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::XXXXXXXXXX:role/eksctl-alb-nodegroup-ng-b1f603c5-NodeInstanceRole-GKNS581EASPU
username: system:node:{{EC2PrivateDNSName}}
Events: <none>
Attach the ALBIngressControllerIAMPolicy IAM policy to each of the worker node IAM roles you identified.
$ aws iam attach-role-policy \
--policy-arn arn:aws:iam::XXXXXXXXX:policy/ALBIngressControllerIAMPolicy \
--role-name ROLE_NAME
Create a file for the service account, cluster role and cluster role binding that you will need for your alb-ingress-controller.
$ touch rbac-role.yaml
Copy the following yaml into the rbac-role.yaml file you just created.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
Create an alb-ingress-controller.yaml file
$ touch alb-ingress-controller.yaml
Copy the following yaml into the alb-ingress-controller.yaml file.
# Application Load Balancer (ALB) Ingress Controller Deployment Manifest.
# This manifest details sensible defaults for deploying an ALB Ingress Controller.
# GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  # Namespace the ALB Ingress Controller should run in. Does not impact which
  # namespaces it's able to resolve ingress resources for. For limiting ingress
  # namespace scope, see --watch-namespace.
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
        - name: alb-ingress-controller
          args:
            # Limit the namespace where this ALB Ingress Controller deployment will
            # resolve ingress resources. If left commented, all namespaces are used.
            # - --watch-namespace=your-k8s-namespace
            # Setting the ingress-class flag below ensures that only ingress resources with the
            # annotation kubernetes.io/ingress.class: "alb" are respected by the controller. You may
            # choose any class you'd like for this controller to respect.
            - --ingress-class=alb
            # REQUIRED
            - --cluster-name=YOUR_CLUSTER_NAME
            # AWS VPC ID this ingress controller will use to create AWS resources.
            # If unspecified, it will be discovered from ec2metadata.
            - --aws-vpc-id=YOUR_VPC_ID
            # AWS region this ingress controller will operate in.
            # If unspecified, it will be discovered from ec2metadata.
            # List of regions: http://docs.aws.amazon.com/general/latest/gr/rande.html#vpc_region
            - --aws-region=YOUR_AWS_REGION
            # Enables logging on all outbound requests sent to the AWS API.
            # If logging is desired, set to true.
            # - --aws-api-debug
            # Maximum number of times to retry the aws calls.
            # defaults to 10.
            # - --aws-max-retries=10
          env:
            # AWS key id for authenticating with the AWS API.
            # This is only here for examples. It's recommended you instead use
            # a project like kube2iam for granting access.
            - name: AWS_ACCESS_KEY_ID
              value: KEYVALUE
            # AWS key secret for authenticating with the AWS API.
            # This is only here for examples. It's recommended you instead use
            # a project like kube2iam for granting access.
            - name: AWS_SECRET_ACCESS_KEY
              value: SECRETVALUE
          # Repository location of the ALB Ingress Controller.
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4
      serviceAccountName: alb-ingress-controller
Edit the cluster name, VPC ID, and region to the appropriate values.
Update the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values to credentials you're comfortable granting access to the EC2 and Elastic Load Balancing APIs.
Note: You should not deploy an AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY for production workloads. Use something like kube2iam for workloads that you intend to deploy to production.
Deploy your rbac-role.yaml and alb-ingress-controller.yaml files
$ kubectl apply -f rbac-role.yaml
$ kubectl apply -f alb-ingress-controller.yaml
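Before moving on, it's worth confirming that the controller pod came up cleanly (the exact pod name will differ in your cluster):
$ kubectl get pods -n kube-system -l app.kubernetes.io/name=alb-ingress-controller
$ kubectl logs -n kube-system deployment/alb-ingress-controller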
Deploy the blog service
$ kubectl apply -f .deploy/deployment.yaml
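You can also check that the blog pods were actually scheduled onto Fargate; the NODE column should show fargate-ip-* nodes like the ones listed earlier:
$ kubectl get pods -o wide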
After a few minutes, verify that the ingress resource was created.
$ kubectl get ingress/kube-example-blog-ingress
Output
NAME HOSTS ADDRESS PORTS AGE
kube-example-blog-ingress * kube-example-blog-default-6fa0-XXXXXXXXXX.us-east-2.elb.amazonaws.com 80 24h
Open a browser and navigate to the Address URL and you should see the blog.
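Or check from the command line, substituting the ADDRESS value from your own output:
$ curl -I http://kube-example-blog-default-6fa0-XXXXXXXXXX.us-east-2.elb.amazonaws.com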
Conclusion
We've deployed a simple blog service to AWS EKS on Fargate. In the next few posts, I'll add more configuration for logging and TLS encryption, plus some more information on how to make this setup production-ready using tools like kube2iam and Terraform.
Let me know if you have any questions via Twitter @kneelshah.