
ALB Controller

Overview

The ALB Controller manages Elastic Load Balancers for a Kubernetes cluster running in AWS. Recently rebranded as the AWS Load Balancer Controller, it satisfies Kubernetes Ingress resources by provisioning Application Load Balancers (ALBs) and Service resources by provisioning Network Load Balancers (NLBs). Operating at the application layer, the controller can intelligently route user requests through an ALB: a request is forwarded to a defined target or target group when a user-defined rule, typically a combination of hostname and URI path, is matched. For deployments that operate at the service layer, an NLB in IP mode can likewise route traffic according to a set of rules. By default, a round-robin algorithm distributes requests across the specified targets. The controller's integration with AWS Certificate Manager (ACM) lets users associate SSL/TLS certificates stored in ACM with their load balancers. This improves performance because TLS termination is handled by the load balancer rather than within the application.
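To illustrate the host- and path-based routing described above, a minimal Ingress rule might look like the following sketch. The hostname, service name, and port here are placeholders, not values from this tutorial; a complete manifest appears in Step 7.

```yaml
# Illustrative only: requests for shop.example.com with a path under /api/
# are forwarded to the target group backing the hypothetical "api-svc" Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /api/*
            backend:
              serviceName: api-svc
              servicePort: 80
```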


What Will You Do

In this exercise,

  • You will create an "AWS Load Balancer Controller" addon and use it in a custom cluster blueprint
  • You will then apply this cluster blueprint to a managed cluster

Important

This tutorial describes the steps to create and use a custom cluster blueprint using the Web Console. The entire workflow can also be fully automated and embedded into an automation pipeline.


Assumptions

  • You have already provisioned or imported one or more Kubernetes clusters using the controller.
  • You have defined and associated an IAM policy that allows for the management of AWS Load Balancer Controller resources.
  • You have created a cert-manager addon. The cert-manager addon can be installed by completing Steps 1 through 6 of the cert-manager recipe.

Step 1: Download AWS Load Balancer Controller yaml

Navigate to the AWS Load Balancer Controller's official repository and download the Kubernetes manifest "v2_1_2_full.yaml" from the v2.1.2 release (the latest release at the time of writing).


Step 2: Customize Values

In this step, we will edit the "v2_1_2_full.yaml" file so that the AWS Load Balancer Controller's resources are created for the proper cluster. In the controller Deployment's container arguments, change the cluster-name flag to match your cluster:

--cluster-name=my-cluster-name
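The edit can also be scripted. The sketch below assumes the manifest's placeholder value is "your-cluster-name" (check your downloaded copy before running); it demonstrates the substitution on a small sample snippet, and the same sed invocation applies to "v2_1_2_full.yaml".

```shell
# Create a sample of the controller Deployment's args section to edit
# (stand-in for the real v2_1_2_full.yaml; placeholder value assumed).
cat > /tmp/alb-args-sample.yaml <<'EOF'
        args:
        - --cluster-name=your-cluster-name
        - --ingress-class=alb
EOF

# Replace the placeholder cluster name in place (GNU sed shown)
sed -i 's/--cluster-name=your-cluster-name/--cluster-name=my-cluster-name/' /tmp/alb-args-sample.yaml

# Confirm the substitution took effect
grep -- '--cluster-name=my-cluster-name' /tmp/alb-args-sample.yaml
```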

Step 3: Create AWS Load Balancer Controller Addon

  • Select "Addons" and create a new Addon called "alb" by selecting the "+ New Add-On" button
  • Ensure that you select "k8s YAML" for the type and "kube-system" for the namespace, then select "CREATE"
  • Click on "+ New Version"
  • Enter "v2.1.2" for the Version Name and "UPLOAD" the "v2_1_2_full.yaml" file customized in Step 2
  • Select "Save Changes"

Create AWS Load Balancer Controller Addon


Step 4: Create Blueprint

Now, we are ready to assemble a custom cluster blueprint using the addons.

  • Under Infrastructure, select "Blueprints"
  • Create a new blueprint and give it a name such as "alb-ingress"
  • Create a new version of the blueprint by selecting "+ New Version"
  • Enter a version name such as "1.0" and add the "alb" and cert-manager Add-Ons created earlier
  • Ensure that you have the managed Ingress disabled

Once the blueprint is created, ensure you publish it and optionally provide a version so that it can be tracked.

Create Blueprint

Show Blueprint


Step 5: Apply Blueprint

Now, we are ready to apply this custom blueprint to a cluster.

  • Click on Options for the target Cluster in the Web Console
  • Select "Update Blueprint", then select the "alb-ingress" blueprint and the version created earlier from the list

Update Blueprint

Click on "Save and Publish". This will start the deployment of the addons configured in the "alb-ingress" blueprint to the targeted cluster. The blueprint sync process can take a few minutes. Once complete, the cluster will display the current cluster blueprint details and whether the sync was successful or not. See illustrative example below.

Blueprint Update


Step 6: Verify Deployment

Users can optionally verify whether the correct resources have been created on the cluster.

  • Click on the Kubectl button on the cluster to open a virtual terminal
  • We will verify the pods in the "kube-system" namespace. You should see something like the example below.
kubectl get po -n kube-system

NAME                                           READY   STATUS    RESTARTS   AGE
aws-load-balancer-controller-cffbf7886-4m56m   1/1     Running   0          155m

Step 7: Create Workload

  • Copy the file below and save as alb-ingress.yaml
  • Change the namespace name, certificate-arn, and host to match your environment

---
apiVersion: v1
kind: Namespace
metadata:
  name: david-private-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: david-private-ns
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
      - image: alexwhen/docker-2048
        imagePullPolicy: Always
        name: app-2048
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: david-private-ns
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30020
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: david-private-ns
  name: ingress-2048
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-1:679196758854:certificate/fcfbb8e1-52c5-4389-8e2f-891ed84e0029
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - host: alb.dev.rafay-edge.net
      http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: service-2048
              servicePort: 80
  • Under Applications, select "Workloads"
  • Select "New Workload", enter a Name, set the Package Type to "k8s YAML", and select the Namespace
  • Upload the file "alb-ingress.yaml"
  • Set the Drift Action and Placement Policy
  • Publish the Workload


Step 8: Verify Workload

  • Click on the Kubectl button on the cluster to open a virtual terminal and run the following kubectl command
kubectl get ingress -n david-private-ns

NAME           CLASS    HOSTS                    ADDRESS                                                                 PORTS   AGE
ingress-2048   <none>   alb.dev.rafay-edge.net   k8s-game2048-ingress2-13413a6f56-64092663.us-west-1.elb.amazonaws.com   80      3m43s

Recap

Congratulations! You have successfully created a custom cluster blueprint with the "alb-ingress" addon and applied it to a cluster. You can now use this blueprint on as many clusters as you require.