Overview¶
For automation purposes, it is strongly recommended that users create and manage version-controlled "cluster specs" to provision clusters and manage their lifecycle. This is well suited for scenarios where the cluster lifecycle (creation, upgrades, deletion, etc.) needs to be embedded into a larger workflow that requires reproducible environments. For example:
- Jenkins or a CI system that needs to provision a cluster as part of a larger workflow
- Reproducible Infrastructure
- Ephemeral clusters for QA/Testing
Credentials¶
Ensure you have created valid cloud credentials that allow the controller to manage the lifecycle of Amazon EKS clusters in your AWS account on your behalf.
Automation Pipelines¶
The RCTL CLI can be easily embedded and integrated into your preferred automation platform. Here is an example of a Jenkins-based pipeline that uses RCTL to provision an Amazon EKS cluster from a provided cluster specification.
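The pipeline stage itself typically reduces to a short shell script. Below is a minimal, illustrative sketch of such a stage, assuming RCTL is installed on the build agent and already configured with credentials for the Controller; the spec file and cluster name reuse the examples from this page.
#!/bin/bash
# Illustrative CI stage: provision an ephemeral EKS cluster from a
# version-controlled spec, run tests, then tear the cluster down.
set -euo pipefail

# cluster-spec.yml is checked into the same repository as the pipeline
./rctl create cluster eks -f cluster-spec.yml

# ... run integration/QA tests against the cluster here ...

# delete the ephemeral cluster once the job is done
./rctl delete cluster eks-cluster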
Examples¶
Multiple ready-to-use examples of cluster specifications are maintained in this Public Git Repo.
Create Cluster¶
Imperative¶
Create an EKS cluster object in the configured project on the Controller. The region and the cloud credentials name are mandatory. You can optionally specify a cluster blueprint during this step; if not specified, the default cluster blueprint is used.
./rctl create cluster eks eks-cluster sample-credentials --region us-west-2
To create an EKS cluster with a custom blueprint
./rctl create cluster eks eks-cluster sample-credentials --region us-west-2 -b standard-blueprint
Declarative¶
You can also create an EKS cluster based on a version controlled cluster spec that you can manage in a Git repository. This enables users to develop automation for reproducible infrastructure.
./rctl create cluster eks -f cluster-spec.yml
An illustrative example of the cluster spec YAML file for EKS is shown below
kind: Cluster
metadata:
  # cluster labels
  labels:
    env: dev
    type: eks-workloads
  name: eks-cluster
  project: defaultproject
spec:
  type: eks
  blueprint: default
  cloudprovider: dev-credential # Name of the cloud credential object created on the Controller
---
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-cluster
  region: us-west-2
  version: "1.18"
  tags:
    'demo': 'true'
nodeGroups:
  - amiFamily: AmazonLinux2
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        autoScaler: true
        imageBuilder: true
    instanceType: t3.medium
    maxSize: 2
    minSize: 0
    name: ng-default-new
    volumeSize: 80
    volumeType: gp3
  - amiFamily: AmazonLinux2
    availabilityZones:
      - us-west-2-wl1-phx-wlz-1
    desiredCapacity: 1
    iam:
      withAddonPolicies:
        autoScaler: true
        imageBuilder: true
    instanceType: t3.medium
    maxSize: 2
    minSize: 1
    name: ng-second-wlz-wed-1
    privateNetworking: true
    subnetCidr: 192.168.213.0/24
    volumeSize: 80
    volumeType: gp2
vpc:
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true
    publicAccess: false
  nat:
    gateway: Single
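In automation, the pipeline usually needs to block until provisioning completes before proceeding. A minimal polling sketch, assuming the tabular output of rctl get cluster shown under "Get Cluster Details" below:
# Poll until the PROVISION column reports CLUSTER_PROVISION_COMPLETE
# (output-format assumption; adjust the match to your RCTL version)
until ./rctl get cluster eks-cluster | grep -q CLUSTER_PROVISION_COMPLETE; do
  echo "Waiting for cluster provisioning..."
  sleep 60
done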
Cluster Upgrade¶
Use the command below to upgrade a cluster
./rctl upgrade cluster <cluster-Name> --version <version>
Below is an example of a cluster version upgrade
./rctl upgrade cluster eks-cluster --version 1.20
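After the upgrade, the cluster details can be retrieved to confirm the cluster is back in a READY state (see the next section):
./rctl get cluster eks-cluster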
Get Cluster Details¶
Once the cluster has been created, use this command to retrieve details about the cluster.
./rctl get cluster cluster-name
An example for a successfully provisioned and operational cluster is shown below.
+--------------+-----------------------------+---------+--------+-----------+----------------------------+----------+
| NAME | CREATED AT | TYPE | STATUS | BLUEPRINT | PROVISION | COMMENTS |
+--------------+-----------------------------+---------+--------+-----------+----------------------------+----------+
| cluster-name | 2021-02-20T00:05:10.425154Z | aws-eks | READY | default | CLUSTER_PROVISION_COMPLETE | |
+--------------+-----------------------------+---------+--------+-----------+----------------------------+----------+
Download Cluster Configuration¶
Once the cluster is provisioned, either using the Web Console or the CLI, the cluster configuration can be downloaded and stored in a code repository.
./rctl get cluster config cluster-name
The above command will output the cluster config to stdout. It can be redirected to a file and stored in the code repository of your choice.
./rctl get cluster config cluster-name > cluster-name-config.yaml
Important
Download the cluster configuration only after the cluster is completely provisioned.
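A minimal sketch of the resulting round trip, assuming a writable Git repository is already cloned; the file name is illustrative:
# Snapshot the provisioned cluster's configuration into version control
./rctl get cluster config cluster-name > cluster-name-config.yaml
git add cluster-name-config.yaml
git commit -m "Snapshot EKS cluster config after provisioning"
git push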
Node Groups¶
Both managed and self-managed node groups are supported.
Add Node Groups¶
You can add a new node group (Spot or On-Demand) to an existing EKS cluster. Here is an example YAML file to add a spot node group to an existing EKS cluster.
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-cluster
  region: us-west-1
nodeGroups:
  - name: spot-ng-1
    minSize: 2
    maxSize: 4
    volumeType: gp3
    instancesDistribution:
      maxPrice: 0.030
      instanceTypes: ["t3.large","t2.large"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
      spotInstancePools: 2
To add a spot node group to an existing cluster based on the config shown above, use the command below.
./rctl create node-group -f eks-nodegroup.yaml
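As an optional, illustrative check, the downloaded cluster config can be inspected to confirm the new node group is now part of the cluster:
# The config maintained by the Controller should now include spot-ng-1
./rctl get cluster config eks-cluster | grep -A 3 "name: spot-ng-1"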
Scale Node Group¶
To scale an existing node group in a cluster
./rctl scale node-group nodegroup-name cluster-name --desired-nodes <node-count>
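For example, to scale the ng-default-new node group from the spec above to two nodes (names are illustrative):
./rctl scale node-group ng-default-new eks-cluster --desired-nodes 2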
Drain Node Group¶
To drain a node group in a cluster
./rctl drain node-group nodegroup-name cluster-name
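For example, using the same illustrative names as above:
./rctl drain node-group ng-default-new eks-cluster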
Node group Labels and Tags¶
After cluster provisioning, users can update managed node group labels and tags via RCTL.
Update Node group labels
To update the node group labels in a cluster
./rctl update nodegroup <node-group-name> <cluster-name> --labels 'k1=v1,k2=v2,k3=v3'
Update Node group tags
To update the node group tags in a cluster
./rctl update nodegroup <node-group-name> <cluster-name> --tags 'k1=v1,k2=v2,k3='
Delete Node Group¶
To delete a node group from an existing cluster
./rctl delete node-group nodegroup-name cluster-name
Node Groups in Wavelength Zone¶
Users can also create a node group in a Wavelength Zone using one of the config files below.
Manual Network Configuration (security groups and subnets for the Wavelength Zone node group are specified explicitly)
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-eks-testing
  region: us-east-1
nodeGroups:
  - amiFamily: AmazonLinux2
    desiredCapacity: 4
    iam:
      withAddonPolicies:
        autoScaler: true
        imageBuilder: true
    instanceType: t3.xlarge
    maxSize: 4
    minSize: 0
    name: ng-2220fc4d
    volumeSize: 80
    volumeType: gp3
  - amiFamily: AmazonLinux2
    availabilityZones:
      - us-east-1-wl1-atl-wlz-1
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        autoScaler: true
        imageBuilder: true
    instanceType: t3.xlarge
    maxSize: 2
    minSize: 2
    name: demo-wlzone1
    privateNetworking: true
    securityGroups:
      attachIDs:
        - test-grpid
    subnets:
      - 701d1419
    volumeSize: 80
    volumeType: gp2
Automatic Network Configuration (only a subnetCidr is specified, instead of explicit subnets and security groups)
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-eks-autonode
  region: us-west-2
nodeGroups:
  - amiFamily: AmazonLinux2
    desiredCapacity: 4
    iam:
      withAddonPolicies:
        autoScaler: true
        imageBuilder: true
    instanceType: t3.xlarge
    maxSize: 4
    minSize: 0
    name: ng-2220fc4d
    volumeSize: 80
    volumeType: gp3
  - amiFamily: AmazonLinux2
    availabilityZones:
      - us-west-2-wl1-phx-wlz-1
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        autoScaler: true
        imageBuilder: true
    instanceType: t3.xlarge
    maxSize: 2
    minSize: 2
    name: ng-rctl-4-new
    privateNetworking: true
    subnetCidr: 10.51.0.0/20
    volumeSize: 80
    volumeType: gp2
Node Group in Wavelength Zone¶
To create the node group from a configuration file, use the command below
./rctl create -f nodegroup.yaml
Users who prefer to make the required changes directly in the cluster config file must use the command below to create the Wavelength Zone node group in the cluster
./rctl apply -f <configfile.yaml>
Example:
./rctl apply -f newng.yaml
Output:
{
  "taskset_id": "1ky4gkz",
  "operations": [
    {
      "operation": "NodegroupCreation",
      "resource_name": "ng-ui-new-ns",
      "status": "PROVISION_TASK_STATUS_PENDING"
    },
    {
      "operation": "NodegroupCreation",
      "resource_name": "ng-wlz-ui-222",
      "status": "PROVISION_TASK_STATUS_PENDING"
    },
    {
      "operation": "NodegroupCreation",
      "resource_name": "ng-default-thurs-222",
      "status": "PROVISION_TASK_STATUS_PENDING"
    },
    {
      "operation": "ClusterCreation",
      "resource_name": "rajat-rctl-friday-3",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}
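As the comments field indicates, the taskset_id can be used to track progress. Depending on the RCTL version, this is typically done with a status command of the following form (verify the exact syntax with ./rctl --help; the taskset_id below is the one returned above):
./rctl status apply 1ky4gkz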
Delete Node Group¶
Remove the node group in question from the config file and use the command below to apply the deletion
./rctl apply -f <configfile.yaml>
Example:
./rctl apply -f newng.yaml
Output:
{
  "taskset_id": "6kno42l",
  "operations": [
    {
      "operation": "NodegroupDeletion",
      "resource_name": "ng-wlz-thurs",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}
Delete Cluster¶
The command below deletes the EKS cluster and all associated resources in AWS.
./rctl delete cluster eks-cluster