KOP EKSA Clusters - CLI
Amazon EKS Anywhere (EKS-A) is a deployment option for Amazon EKS that enables enterprises to create and manage Kubernetes clusters on premises, on both virtual machines (VMs) and bare-metal servers. An integration is provided so that users can provision EKS-A clusters using the RCTL CLI.
| Resource        | Create | Get | Delete |
|-----------------|--------|-----|--------|
| Gateway         | YES    | YES | YES    |
| Cluster         | YES    | YES | YES    |
| Template Config | YES    | NO  | NO     |
Gateway¶
Create Gateway¶
Use the below command to create a Gateway
./rctl create gateway <gw-name> --gatewaytype eksaBareMetal -p <project-name>
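For example, to create a bare-metal EKSA gateway named eksa-bm-gateway in a project named defaultproject (both names are illustrative placeholders):
./rctl create gateway eksa-bm-gateway --gatewaytype eksaBareMetal -p defaultproject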
Get Gateway¶
To retrieve the list of gateways, use the below command
./rctl get gateway
+-----------+------------------------+---------------+------------------------------+
| GATEWAYID | NAME | GATEWAY TYPE | CREATED AT |
+-----------+------------------------+---------------+------------------------------+
| d27x0k4 | phani-qc-path-gateway | vmware | Thu Sep 29 11:07:34 UTC 2022 |
+-----------+------------------------+---------------+------------------------------+
| 4qkolkn | demo-gateway | vmware | Tue Jul 19 09:38:02 UTC 2022 |
+-----------+------------------------+---------------+------------------------------+
| 6kno42l | phani-qc-creds-gw | vmware | Sat Oct 8 07:17:35 UTC 2022 |
+-----------+------------------------+---------------+------------------------------+
| q72dg2g | gateway2 | vmware | Thu Jul 21 11:49:49 UTC 2022 |
+-----------+------------------------+---------------+------------------------------+
| x3mxvkr | asdad | vmware | Mon Jul 25 08:04:05 UTC 2022 |
+-----------+------------------------+---------------+------------------------------+
| 1ky4gkz | phani-newbin-qc-gw | vmware | Sat Oct 8 17:02:57 UTC 2022 |
+-----------+------------------------+---------------+------------------------------+
To retrieve a specific gateway, use the below command
./rctl get gateway <gw-name>
Example:
./rctl get gateway demo-gateway
+-----------+------------------+---------------+------------------------------+
| GATEWAYID | NAME | GATEWAY TYPE | CREATED AT |
+-----------+------------------+---------------+------------------------------+
| 72dzgmg | demo-gateway | eksaBareMetal | Mon Jan 23 10:10:27 UTC 2023 |
+-----------+------------------+---------------+------------------------------+
To view the configuration details of a specific gateway, use the below command
./rctl get gateway demo-gateway --configdetails
{"agentID":"qkolqjk",
"agentName":"agent-gw-demo-gateway",
"bootstrapRepoUrl":"https://qc-repo.stage.rafay-edge.net/repository/eks-bootstrap/v1/",
"gatewayID":"72dzgmg",
"gatewayName":"demo-gateway",
"gatewayType":"eksaBareMetal",
"relays":[{"addr":"qc-app.stage.rafay.dev:443",
"name":"rafay-core-infra-relay-agent",
"token":"cf75q4rk6i7sa680vp1g"}],
"setupCommand":"wget -q -O infra-gateway-installer-linux-amd64.tar.bz2 https://qc-petti.stage.rafay-edge.net/publish/infra-gateway-installer-linux-amd64.tar.bz2 && tar -xjf infra-gateway-installer-linux-amd64.tar.bz2 && echo 'eyJhZ2VudElEIjoicWtvbHFqayIsIm1heERpYWxzIjoiMiIsInJlbGF5cyI6W3siYWRkciI6InFjLWFwcC5zdGFnZS5yYWZheS5kZXY6NDQzIiwiZW5kcG9pbnQiOiIqLnFjLWNvbm5lY3Rvci5pbmZyYXJlbGF5LnN0YWdlLnJhZmF5LmRldjo0NDMiLCJuYW1lIjoicmFmYXktY29yZS1pbmZyYS1yZWxheS1hZ2VudCIsInRlbXBsYXRlVG9rZW4iOiJjYXY5aDYyZmdqdXE3MzJkcTZwMCIsInRva2VuIjoiY2Y3NXE0cms2aTdzYTY4MHZwMWcifV19' | base64 -d > ./relayConfigData.json && ./infra-gateway-installer --configFile=./relayConfigData.json --bootstrapUrl=https://qc-repo.stage.rafay-edge.net/repository/eks-bootstrap/v1/"}
Delete Gateway¶
Use the below command to delete a Gateway
./rctl delete gateway <gw-name> -p <project-name>
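For example, to delete the demo-gateway gateway from a project named demo (the project name is illustrative):
./rctl delete gateway demo-gateway -p demo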
EKSA Cluster¶
Create Cluster¶
./rctl apply -f <cluster-config.yaml> -p <project-name>
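For example, if the illustrative spec that follows is saved as demoeksabm-cluster.yaml (an assumed file name), the cluster can be provisioned in the demo project with:
./rctl apply -f demoeksabm-cluster.yaml -p demo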
An illustrative example of the cluster spec YAML file for EKSA is shown below
apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
name: demoeksabm
project: demo
spec:
blueprint:
name: minimal
version: latest
config:
eksaClusterConfig:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: demoeksabm
spec:
clusterNetwork:
cniConfig:
cilium: {}
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
controlPlaneConfiguration:
count: 1
endpoint:
host: 1.x.x.x
machineGroupRef:
kind: TinkerbellMachineConfig
name: machineconfigcp
datacenterRef:
kind: TinkerbellDatacenterConfig
name: demoeksabm
kubernetesVersion: "1.24"
managementCluster:
name: demoeksabm
workerNodeGroupConfigurations:
- count: 1
machineGroupRef:
kind: TinkerbellMachineConfig
name: machineconfigworkernode
name: workerng
tinkerbellDatacenterConfig:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
name: demoeksabm
spec:
tinkerbellIP: 1.x.x.x
tinkerbellHardwareConfig:
- disk: /dev/sda
gateway: 1.x.x.x
hostname: eksa-apdemoeksabm-node-cp-001
ip_address: 1.x.x.x
labels: type=controlplane
mac: x:x:x:86:ee:5e
nameservers: 8.8.8.8
netmask: x.x.x.x
- disk: /dev/sda
gateway: 1.x.x.x
hostname: eksa-apdemoeksabm-node-dp-001
ip_address: 1.x.x.x
labels: type=workernode
mac: x:x:x:x:94:9e
nameservers: 8.8.8.8
netmask: x.x.255.240
tinkerbellMachineConfig:
- apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
name: machineconfigcp
spec:
hardwareSelector:
type: controlplane
osFamily: bottlerocket
templateRef:
kind: TinkerbellTemplateConfig
name: templateconfigcp
users:
- name: ubuntu
sshAuthorizedKeys:
- ssh-rsa AAAAB3sssNzaC1ssyc2EAAAADAQABAAABAQDIFohJ3sN0Qkap0ts/FjXm8PDr/d4O7RuAJfdhJy9YtC3Nck6r7wPaN443Ty7fzIZ18vqM77Ll4gxLlC0cv6sssdsKWlsss6MSEssRsds1Y6Ysdss0TmQBAO2pnJEssLvClSY9nTSQ8qIwXfhI+IiLdscUWeeP70s9/QE6ASqC2/C1jhHu4RD08MFT5OLH53iNll5DKsVz9Ojoxgsdds+WcPOvdhKssfD0VssnH5CZsssEKjdmYiyssQsds7R8vLNmn7NCqfosuZDbQuENKtRFa5H2qn4b84VuBk9hFyTE9DeyM29uJDWjBft7Lsna6+TvLD1Ni+l1Q5C4H5ssJiud7EynYhUrY+4Hzj4xhpQEO3oowIuJExLTXNh ubuntu@testbox
- apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
name: machineconfigworkernode
spec:
hardwareSelector:
type: workernode
osFamily: bottlerocket
templateRef:
kind: TinkerbellTemplateConfig
name: templateconfigworkernode
users:
- name: ubuntu
sshAuthorizedKeys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAsssBAssQDIFohJ3sN0Qkap0ts/FjXm8PDr/d4O7RuAJfdhJy9YtC3Nck6r7wPaN443Ty7fzIZ18vqM77Ll4gxLlC0cv6KWl6MSER1Y6Yv0TmQBAO2pnsdsdJELvClSY9nTSQ8qIwXfhI+IiLdsdsdscUWeeP70s9/QE6ASqC2/C1jhHu4RD08MFT5OLH53iNll5DKsVz9Ojoxg+WcPOvdhKfD0VnH5CZsEKjdmYiyQ7R8vLNmn7NCqfosuZDbQuENKtRFa5H2qn4b84VuBk9hFyTE9DeyM29uJDWjBft7Lsna6+TvLD1Ni+l1Q5C4H5Jiud7EynYhUrY+4Hzj4xhpQEO3oowIussJExLTXNh ubuntu@testbox
tinkerbellTemplateConfig:
- apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
name: templateconfigcp
spec: |
template:
global_timeout: 6000
id: ""
name: templateconfigcp
tasks:
- actions:
- environment:
COMPRESSED: "true"
DEST_DISK: /dev/sda
IMG_URL: https://anywhere-assets.eks.amazonaws.com/releases/bundles/27/artifacts/raw/1-24/bottlerocket-v1.24.9-eks-d-1-24-7-eks-a-27-amd64.img.gz
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-27
name: stream-image
timeout: 600
- environment:
BOOTCONFIG_CONTENTS: |
kernel {}
DEST_DISK: /dev/sda12
DEST_PATH: /bootconfig.data
DIRMODE: "0700"
FS_TYPE: ext4
GID: "0"
MODE: "0644"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-27
name: write-bootconfig
pid: host
timeout: 90
- environment:
DEST_DISK: /dev/sda12
DEST_PATH: /user-data.toml
DIRMODE: "0700"
FS_TYPE: ext4
GID: "0"
HEGEL_URLS: http://1.x.x.x:50061,http://1.x.x.x:50061
MODE: "0644"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-27
name: write-user-data
pid: host
timeout: 90
- environment:
DEST_DISK: /dev/sda12
DEST_PATH: /net.toml
DIRMODE: "0755"
FS_TYPE: ext4
GID: "0"
IFNAME: enp0s3
MODE: "0644"
STATIC_BOTTLEROCKET: "true"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-27
name: write-netplan
pid: host
timeout: 90
- image: public.ecr.aws/eks-anywhere/tinkerbell/hub/reboot:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-27
name: reboot-image
pid: host
timeout: 90
volumes:
- /worker:/worker
name: cptemplate
volumes:
- /dev:/dev
- /dev/console:/dev/console
- /lib/firmware:/lib/firmware:ro
worker: '{{.device_1}}'
version: "0.1"
- apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
name: templateconfigworkernode
spec: |
template:
global_timeout: 6000
id: ""
name: templateconfigworkernode
tasks:
- actions:
- environment:
COMPRESSED: "true"
DEST_DISK: /dev/sda
IMG_URL: https://anywhere-assets.eks.amazonaws.com/releases/bundles/27/artifacts/raw/1-24/bottlerocket-v1.24.9-eks-d-1-24-7-eks-a-27-amd64.img.gz
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-27
name: stream-image
timeout: 600
- environment:
BOOTCONFIG_CONTENTS: |
kernel {}
DEST_DISK: /dev/sda12
DEST_PATH: /bootconfig.data
DIRMODE: "0700"
FS_TYPE: ext4
GID: "0"
MODE: "0644"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-27
name: write-bootconfig
pid: host
timeout: 90
- environment:
DEST_DISK: /dev/sda12
DEST_PATH: /user-data.toml
DIRMODE: "0700"
FS_TYPE: ext4
GID: "0"
HEGEL_URLS: http://1.x.x.x:50061,http://1.x.x.x:50061
MODE: "0644"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-27
name: write-user-data
pid: host
timeout: 90
- environment:
DEST_DISK: /dev/sda12
DEST_PATH: /net.toml
DIRMODE: "0755"
FS_TYPE: ext4
GID: "0"
IFNAME: enp0s3
MODE: "0644"
STATIC_BOTTLEROCKET: "true"
UID: "0"
image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-27
name: write-netplan
pid: host
timeout: 90
- image: public.ecr.aws/eks-anywhere/tinkerbell/hub/reboot:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-27
name: reboot-image
pid: host
timeout: 90
volumes:
- /worker:/worker
name: dptemplate
volumes:
- /dev:/dev
- /dev/console:/dev/console
- /lib/firmware:/lib/firmware:ro
worker: '{{.device_1}}'
version: "0.1"
type: Eksa_bm
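At a high level, the spec pairs the KOP cluster object with the standard EKS Anywhere (Tinkerbell) objects. The trimmed skeleton below, with indentation, placement, and placeholder values reconstructed for illustration, summarizes the main blocks of the example above:
apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: <cluster-name>
  project: <project-name>
spec:
  type: Eksa_bm
  blueprint:
    name: minimal
    version: latest
  config:
    eksaClusterConfig: {}          # anywhere.eks.amazonaws.com/v1alpha1 Cluster (CNI, CIDRs, control plane, worker node groups)
    tinkerbellDatacenterConfig: {} # TinkerbellDatacenterConfig (tinkerbellIP)
    tinkerbellHardwareConfig: []   # one entry per machine: disk, IP, MAC, gateway, labels
    tinkerbellMachineConfig: []    # TinkerbellMachineConfig for control plane and worker nodes
    tinkerbellTemplateConfig: []   # TinkerbellTemplateConfig with the provisioning task pipeline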
Get Cluster¶
Use the below command to retrieve the details of an EKSA cluster
./rctl get cluster <cluster-name>
Below is an example of the cluster details returned by the get command
./rctl get cluster demoeksabm
OWNERSHIP: self
DETAILS:
+------------+-----------------------------+---------+--------+-----------+---------------------------------------+
| NAME | CREATED AT | TYPE | STATUS | BLUEPRINT | PROVISION STATUS |
+------------+-----------------------------+---------+--------+-----------+---------------------------------------+
| demoeksabm | 2023-01-23T11:42:13.925222Z | Eksa_bm | READY | minimal | {Type:ClusterInitialized |
| | | | | | Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 11:47:56.168923596 +0000 UTC |
| | | | | | Reason:cluster initialized }, |
| | | | | | {Type:ClusterBootstrapNodeInitialized |
| | | | | | Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 11:47:56.494079043 +0000 UTC |
| | | | | | Reason:Gateway is reachable and |
| | | | | | cluster folder is created on admin |
| | | | | | node }, {Type:ClusterDeleted |
| | | | | | Status:NotSet LastUpdated:2023-01-23 |
| | | | | | 09:50:35.146806336 +0000 |
| | | | | | UTC Reason:pending }, |
| | | | | | {Type:ClusterEKSCTLInstalled |
| | | | | | Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 11:48:00.09296631 +0000 UTC |
| | | | | | Reason:eksctl installed }, |
| | | | | | {Type:ClusterHardwareCSVCreated |
| | | | | | Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 11:48:01.135319653 +0000 UTC |
| | | | | | Reason:cluster hardware csv is |
| | | | | | created }, {Type:ClusterConfigCreated |
| | | | | | Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 11:48:08.353848991 +0000 UTC |
| | | | | | Reason:cluster config is created |
| | | | | | }, {Type:ClusterSpecApplied |
| | | | | | Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 12:15:38.290877192 +0000 UTC |
| | | | | | Reason:cluster spec is applied |
| | | | | | }, {Type:ClusterControlPlaneReady |
| | | | | | Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 12:15:38.663067102 +0000 UTC |
| | | | | | Reason:Cluster is ready }, |
| | | | | | {Type:ClusterWorkerNodeGroupsReady |
| | | | | | Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 12:15:39.193672164 +0000 UTC |
| | | | | | Reason:cluster worker groups |
| | | | | | machines have started }, |
| | | | | | {Type:ClusterOperatorSpecApplied |
| | | | | | Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 12:15:41.684295187 +0000 UTC |
| | | | | | Reason:operator spec applied }, |
| | | | | | {Type:ClusterHealthy Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 12:20:44.974512308 +0000 UTC |
| | | | | | Reason:cluster is healthy }, |
| | | | | | {Type:ClusterActive Status:Success |
| | | | | | LastUpdated:2023-01-23 |
| | | | | | 12:20:44.97451282 +0000 UTC |
| | | | | | Reason:cluster is active }, |
+------------+-----------------------------+---------+--------+-----------+---------------------------------------+
Delete Cluster¶
To delete an EKSA cluster, use the below command
./rctl delete cluster <cluster-name> -p <project-name>
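For example, to delete the demoeksabm cluster from the demo project used throughout this page:
./rctl delete cluster demoeksabm -p demo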