Import
A typical organization needs to address four critical capabilities (automation, visibility, security and governance) for its Kubernetes clusters. These capabilities are broad and comprehensive; in this document, we look at how organizations can address them for Amazon EKS Anywhere (EKS-A).
This document is designed to be a Quick Start and focuses only on the foundational capabilities available in the Kubernetes Operations Platform.
# | Requirement | Capability |
---|---|---|
1 | Automation: Ability to fully automate the process of bringing EKS-A clusters under management | One-step import of Amazon EKS-A clusters. Easily embed into existing automation frameworks and systems |
2 | Visibility: Ability to visualize, monitor and manage the global fleet of Kubernetes clusters, applications and user activity | Integrated visibility and monitoring |
3 | Security: Only authorized users should have visibility and access to their EKS-A clusters | Project based multi tenancy |
4 | Security: Provide developers and operations personnel the ability to securely access EKS-A clusters operating in private infrastructure (behind firewalls) without needing a VPN or bastion/jump hosts | Zero Trust kubectl access to EKS-A clusters via web based shell and CLI |
5 | Governance: Ensure that all EKS-A clusters have the required cluster addons | Version controlled cluster blueprints |
Prerequisites¶
- You have access to an Org with Org Admin privileges. Sign up for a free trial if you don’t have access.
- You have already installed and deployed Amazon EKS Anywhere in a supported environment and have kubectl access to the EKS Anywhere cluster.
- Ensure that your cluster is healthy and has sufficient capacity to accommodate the additional resources (the k8s management operator and the default cluster blueprint).
Step 1: Configure RCTL¶
This step is a one-time task. In this step, you will download the RCTL CLI so that you can interact with your Org programmatically and embed all operations in your existing automation platform.
- Log in to your Org and click on My Tools
- Download the RCTL CLI binary for your operating system and install it on a node from which you can perform Kubectl operations to your EKS-A cluster.
- Download the CLI config and initialize the RCTL CLI with the config file
./rctl config init <full path to config file>
Optionally, check if RCTL is properly configured and can interact with your Org. You should see an output similar to the example below.
./rctl get projects
NAME
defaultproject
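If you plan to drive these steps from an automation pipeline, the same commands can be scripted. Below is a minimal sketch using only the commands shown above; the config file path is a placeholder you need to replace.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Initialize RCTL with the downloaded CLI config (placeholder path)
./rctl config init /path/to/cli-config-file

# Verify that RCTL can reach the Org; a failure here aborts the pipeline
./rctl get projects
```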
Step 2: Create Project¶
In this step, you will create a new project in your Org for your EKS Anywhere cluster. A project allows you to organize and compartmentalize your infrastructure, user access and resources. Org Admins can create and manage multiple projects in their Organization.
- Create a new project called “eks-a” in your Org using the RCTL CLI. Note that you can also perform this operation using the Web Console, the REST APIs, or the Terraform provider.
./rctl create project eks-a
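Optionally, verify that the project was created by listing the projects in your Org (the same RCTL command used in Step 1); the new "eks-a" project should appear alongside "defaultproject".

```bash
./rctl get projects
```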
Now, log in to your Org to see what this new project looks like.
- You can control and manage which users or groups have access to this project and assign roles to them.
- You can also configure and mandate the use of Single Sign On (SSO) for user access.
Step 3: Import Cluster¶
In this step, you will import the cluster into the new “eks-a” project you created in the prior step.
- Make sure that you have switched RCTL’s context to the new “eks-a” project.
./rctl config set project eks-a
- Save the following into a file such as create_import.sh.
- Update CLUSTER_NAME with the name you used for the EKS-A cluster. This script imports the EKS-A cluster into the project in a single step, making it well suited for embedding into your automation platform.
# Name of the EKS Anywhere cluster (update to match your cluster)
CLUSTER_NAME="aws-demo-eks-a"
# Register the cluster in the "eks-a" project and capture the generated bootstrap manifest
./rctl create cluster imported $CLUSTER_NAME -p eks-a > $CLUSTER_NAME-bootstrap.yaml
# Brief pause before applying the bootstrap manifest
sleep 30
# Use the kubeconfig generated by EKS Anywhere (run from the directory containing the cluster folder)
export KUBECONFIG="${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig"
# Deploy the k8s management operator onto the cluster
kubectl apply -f $CLUSTER_NAME-bootstrap.yaml
- Execute the bash file. The cluster import process can take a few minutes to complete.
bash create_import.sh
You can monitor progress/status by checking whether the required namespaces have been created and by inspecting the state of the Kubernetes resources in these namespaces.
kubectl get ns
NAME                                STATUS   AGE
capi-kubeadm-bootstrap-system       Active   2d21h
capi-kubeadm-control-plane-system   Active   2d21h
capi-system                         Active   2d21h
capi-webhook-system                 Active   2d21h
capv-system                         Active   2d21h
cert-manager                        Active   2d21h
default                             Active   2d21h
eksa-system                         Active   2d21h
kube-node-lease                     Active   2d21h
kube-public                         Active   2d21h
kube-system                         Active   2d21h
rafay-infra                         Active   2d19h
rafay-system                        Active   2d19h
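If the import is embedded in automation, you may want to block until the management operator is up before moving on. The sketch below assumes (per the namespace listing above) that the operator workloads run in the rafay-system namespace.

```bash
# Wait for the rafay-system namespace to be created by the bootstrap manifest
until kubectl get ns rafay-system >/dev/null 2>&1; do sleep 10; done

# Then wait for all pods in that namespace to become Ready (10 minute timeout)
kubectl wait --for=condition=Ready pods --all -n rafay-system --timeout=600s
```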
Once this step is complete, you should be able to view the cluster in the web console.
Step 4: Visibility and Monitoring¶
In this step, you will use the integrated visibility and monitoring capabilities to visualize and explore your imported cluster.
- Click on your cluster name to view the Cluster Dashboard.
- You will be presented with an intuitive summary and trends of critical metrics: health, CPU utilization, memory utilization, storage utilization, number of nodes and their status, number of pods and their status.
Let us now view and interact with the Kubernetes resources on the remote cluster operating behind a firewall using the integrated k8s resources dashboard.
- Click on “Resources” to get a live, interactive view of the k8s resources organized by namespace etc.
- Admins can also take action (view events, exec to container, view container logs, describe resource, delete pods) on resources in a contextual manner.
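For reference, these contextual actions map to familiar kubectl operations. The namespace, pod and container names below are hypothetical placeholders.

```bash
kubectl get events -n my-namespace                                    # view events
kubectl exec -it my-pod -c my-container -n my-namespace -- /bin/sh    # exec to container
kubectl logs my-pod -c my-container -n my-namespace                   # view container logs
kubectl describe pod my-pod -n my-namespace                           # describe resource
kubectl delete pod my-pod -n my-namespace                             # delete pod
```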
Step 5: Zero Trust Kubectl¶
The Kube API server is the brain of a Kubernetes cluster, and securing access to the API server is the first line of defense for any organization. Zero Trust security is a modern security model that requires strict identity verification for every user and device trying to access resources on a private network, regardless of their location (i.e. inside or outside the network perimeter).
In this step, you will use the integrated zero trust kubectl access capability to
- Securely perform kubectl operations on your EKS Anywhere cluster operating behind a firewall.
- Leverage service accounts which are automatically created just in time on the cluster and removed once the user’s session is over.
- Maintain and view an immutable audit trail of all kubectl activity
In the Web Console,
- Navigate to the “eks-a” project
- Click on the kubectl button next to your EKS Anywhere cluster.
This will open a browser based Kubectl shell establishing an interactive channel to the remote cluster over the zero trust access proxy. Type the following kubectl command to list the namespaces in the remote cluster.
kubectl get ns
You will experience a 1-2 second delay the first time you access the remote cluster. This is because the Controller is dynamically creating a service account “Just In Time (JIT)” on the remote cluster with the required RBAC to enforce access based on your identity in your organization. All subsequent kubectl commands are performed without this additional delay. You can view the newly created service account using the following command.
kubectl get sa -n rafay-system
NAME                      SECRETS   AGE
default                   1         2d21h
demos-64rafay-46co        1         7s
ingress-nginx             1         2d21h
ingress-nginx-admission   1         2d21h
As you can see from the example above, the service account for the user “demos@rafay.co” was created 7 seconds earlier, "just in time", when this user accessed the cluster using kubectl. This service account is automatically removed from the remote cluster after a configurable expiration period, ensuring that there are no permanently provisioned service accounts and no dangling credentials that could result in a security compromise.
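If you want to take a closer look at the JIT service account, you can describe it with standard kubectl. The service account name below is taken from the example output above and will be different in your cluster.

```bash
kubectl describe sa demos-64rafay-46co -n rafay-system
```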
- Click on Home -> System -> Audit Logs -> Kubectl Logs to view the audit trail of kubectl activity performed by users on the remote EKS Anywhere cluster.
- This data can be optionally streamed to the organization’s SIEM such as Splunk for long term log retention, correlation and forensics.
Step 6: Standardization¶
Organizations operating a fleet of Kubernetes clusters need the ability to guarantee that their clusters are compliant and have a baseline set of required software components. The platform provides the means for admins to create, manage and share cluster blueprints to provide governance for this requirement. A cluster blueprint is a declarative model for a version controlled Kubernetes software stack that comprises cluster-wide addons such as a load balancer, ingress controller, logging, monitoring, security, etc.
- In the imported EKS Anywhere cluster from the prior step, we used the default cluster blueprint which comprises a number of managed, curated addons.
- New managed addons (services) are added to the platform on an ongoing basis.
- Admins can centrally create and manage version controlled custom blueprints and addons.
- These resources can be centrally managed and shared with downstream projects to implement standardization and enforce governance.
On the cluster card in the Web Console, notice that our EKS Anywhere cluster is using the “default” cluster blueprint. You can update the cluster blueprint for existing clusters anytime a change is necessary.
In our EKS Anywhere cluster, the default cluster blueprint has automatically deployed a number of critical software addons. Let us view them.
- Click on Infrastructure -> Blueprints -> Default Blueprints -> default.
- Now, click on the “i” icon to view additional details. As you can see, in our cluster, the default blueprint has deployed the Rafay k8s mgmt operator, Monitoring and Alerting addons, Log aggregation addons and an Ingress controller addon.
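You can also confirm from the cluster itself that these addon workloads are running. The quick check below assumes (per the namespace listing in Step 3) that the managed components run in the rafay-system and rafay-infra namespaces.

```bash
kubectl get pods -n rafay-system
kubectl get pods -n rafay-infra
```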
Troubleshooting¶
Note that the import process may not complete if sufficient storage resources are not available on the cluster for the k8s management operator and the resources required for the default cluster blueprint’s addons.
If you encounter an error with the message “The node was low on resource: ephemeral-storage”, this means that storage utilization has exceeded the default threshold for kubelet (85% in use).
As a best practice, ensure that your nodes have storage utilization < 70% before attempting the import process. You can verify current utilization by running the “df” command on each worker node. An example is shown below where storage utilization is reported as 48%, which is well below the default kubelet threshold.
df -h
Filesystem   Size  Used  Avail  Use%  Mounted on
udev         3.9G     0   3.9G    0%  /dev
tmpfs        797M  7.5M   789M    1%  /run
/dev/sda1     25G   11G    13G   48%  /
tmpfs        3.9G     0   3.9G    0%  /dev/shm
tmpfs        5.0M     0   5.0M    0%  /run/lock
tmpfs        3.9G     0   3.9G    0%  /sys/fs/cgroup
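If you prefer to check utilization from your workstation instead of logging into each node, the kubelet stats summary API (proxied through the Kubernetes API server) reports node filesystem usage. The sketch below assumes jq is installed and your RBAC permits access to the node proxy sub-resource; replace <node-name> with a worker node from "kubectl get nodes".

```bash
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" \
  | jq '.node.fs | {capacityBytes, usedBytes, availableBytes}'
```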
If you encounter this situation, you can either
- Increase the disk size or
- Clean up the disk by deleting unused container images and log files to bring storage utilization below the default kubelet threshold.
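As an illustration of the second option, the sketch below frees space on an EKS Anywhere node; it assumes a containerd-based node with crictl installed and SSH access to the node, so adjust it for your environment.

```bash
# Remove container images that are not referenced by any running container
sudo crictl rmi --prune

# Trim systemd journal logs to a bounded size
sudo journalctl --vacuum-size=200M
```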