Clusters

Clusters and workloads are deployed in the Customer's Org in the context of a Project. Users can use RCTL to fully automate the lifecycle management of clusters; specifically, the operations listed in the table below can be automated.

Resource    Create    Get    Update    Delete
Cluster     YES       YES    YES       YES

Create Cluster

Imperative

Use this command to create a cluster object in the configured project in your Organization. You can optionally also specify the cluster blueprint during this step.

./rctl create cluster imported qa-cluster -l sanjose
./rctl create cluster imported prod-cluster2 -l sanjose -b prodblueprint

Declarative

You can also import a cluster into the Project based on a version-controlled cluster spec stored in a Git repository. This enables users to develop automation for reproducible infrastructure.

./rctl create cluster -f cluster-spec.yml

An illustrative example of the cluster spec YAML file is shown below:

kind: Cluster
metadata:
  # set the name of the cluster
  name: demo-imported-cluster-01
  # specify the project in which to create the cluster
  project: defaultproject
  # cluster labels
  labels:
    env: dev
    type: ml-workloads
spec:
  # type can be "imported"
  type: imported
  # location can be custom or predefined
  location: aws/eu-central-1
  # blueprint below is optional; if not specified, the default value is "default"
  blueprint: default
  # blueprintversion below is optional; if not specified, the latest version of the blueprint will be used
  blueprintversion: v1

Unified/Split YAML

Both unified and split YAML specs are currently supported for creating clusters via RCTL.

  • Unified YAML

Below is an example of a unified cluster YAML spec:

apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: demo-cluster
  project: default
spec:
  blueprintConfig:
    name: demo-bp
    version: v1
  cloudCredentials: demo_aws
  config:
    managedNodeGroups:
    - amiFamily: AmazonLinux2
      desiredCapacity: 1
      iam:
        withAddonPolicies:
          autoScaler: true
      instanceType: t3.xlarge
      maxSize: 2
      minSize: 0
      name: managed-ng-1
      version: "1.22"
      volumeSize: 80
      volumeType: gp3
    metadata:
      name: demo-cluster
      region: us-west-2
      version: "1.22"
    network:
      cni:
        name: aws-cni
        params:
          customCniCrdSpec:
            us-west-2a:
            - securityGroups:
              - sg-09706d2348936a2b1
              subnet: subnet-0f854d90d85509df9
            us-west-2b:
            - securityGroups:
              - sg-09706d2348936a2b1
              subnet: subnet-0301d84c8b9f82fd1
    vpc:
      clusterEndpoints:
        privateAccess: false
        publicAccess: true
      nat:
        gateway: Single
      subnets:
        private:
          subnet-06e99eb57fcf4f117:
            id: subnet-06e99eb57fcf4f117
          subnet-0509b963a387f7fc7:
            id: subnet-0509b963a387f7fc7
        public:
          subnet-056b49f76124e37ec:
            id: subnet-056b49f76124e37ec
          subnet-0e8e6d17f6cb05b29:
            id: subnet-0e8e6d17f6cb05b29
  proxyConfig: {}
  type: aws-eks

  • Split YAML

Below is an example of a split cluster YAML spec:

kind: Cluster
metadata:
  name: demo-cluster
  project: defaultproject
spec:
  blueprint: default
  cloudprovider: demo-aws
  cniprovider: aws-cni
  proxyconfig: {}
  type: eks
---
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
managedNodeGroups:
- amiFamily: AmazonLinux2
  desiredCapacity: 3
  iam:
    withAddonPolicies:
      autoScaler: true
  instanceType: t3.xlarge
  labels:
    app: infra
    dedicated: "true"
  maxSize: 3
  minSize: 0
  name: ng-f813b069
  version: "1.22"
  volumeSize: 80
  volumeType: gp3
metadata:
  name: demo-cluster
  region: us-west-2
  version: "1.22"
vpc:
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true
    publicAccess: true
  nat:
    gateway: Single
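
Either form of the spec can then be used to provision the cluster. A minimal sketch, assuming the spec above is saved as demo-cluster.yaml and that your RCTL version supports the apply workflow for cluster specs:

./rctl apply -f demo-cluster.yaml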

List Clusters

Use this command to retrieve the list of clusters available in the configured project. In the example shown below, there are four clusters in this project.

./rctl get cluster

+--------------------------------+----------+
|              NAME              |   TYPE   |
+--------------------------------+----------+
| rafaypoc-eks-existing-vpc-cicd | aws-eks  |
| demo-spot-eks                  | aws-eks  |
| demo-vmware-sjc                | manual   |
| demo-aks-east                  | imported |
+--------------------------------+----------+

Get Cluster Info

Use this command to retrieve details of a specific cluster in the configured project.

./rctl get cluster <cluster-name>

Below is an illustrative example showing information for the "demo-spot-eks" cluster in the current project:

./rctl get cluster demo-spot-eks

+---------------+-----------------------------+-----------------------------+---------+--------+---------------+
|     NAME      |         CREATED AT          |         MODIFIED AT         |  TYPE   | STATUS |   BLUEPRINT   |
+---------------+-----------------------------+-----------------------------+---------+--------+---------------+
| demo-spot-eks | 2020-08-11T16:54:25.750659Z | 2020-09-23T04:05:00.720032Z | aws-eks | READY  | eks-blueprint |
+---------------+-----------------------------+-----------------------------+---------+--------+---------------+

Alternatively, you can use the commands below to retrieve additional cluster information in JSON or YAML format:

./rctl get cluster <cluster-name> -o json
./rctl get cluster <cluster-name> -o yaml
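
Since the JSON output can be verbose, it is convenient to pipe it through a tool such as jq for pretty-printing or field extraction. A minimal sketch (the jq usage is illustrative; field paths depend on the RCTL output schema):

# Pretty-print the cluster details returned as JSON
./rctl get cluster demo-spot-eks -o json | jq .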


Delete Cluster

Authorized users can automate the deletion of an existing cluster in the configured project using RCTL.

./rctl delete cluster <cluster-name>

Update Cluster Blueprint

Use this command to update the cluster blueprint associated with a given cluster.

./rctl update cluster <cluster-name> -blueprint <blueprint-name>
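
For example, to move the "demo-spot-eks" cluster shown earlier onto a different blueprint (the blueprint name below is illustrative, reusing the one from the imperative example):

./rctl update cluster demo-spot-eks -blueprint prodblueprint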

Download Kubeconfig

Users can use RCTL to download the Kubeconfig for clusters in the configured project. All access will be performed via the Controller's Zero Trust Kubectl access proxy.

./rctl download kubeconfig [flags]

By default, a unified Kubeconfig for all clusters in the project is downloaded. If required, users can download the Kubeconfig for a selected cluster.

./rctl download kubeconfig --cluster <cluster-name>
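
Once downloaded, the kubeconfig can be used with kubectl in the usual way. A minimal sketch, assuming the kubeconfig was saved to a local file (the path below is illustrative, not an RCTL default):

export KUBECONFIG=$HOME/demo-spot-eks-kubeconfig.yaml
kubectl get nodes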

Wait Flag

RCTL provides an option for users to wait and block on long-running operations. Instead of implementing custom polling logic in an automation pipeline, users can enable the --wait flag to block the command and keep polling until the cluster reaches the ready status.

Supported Operations

Resource                      Create    Upgrade    Delete
Cluster (AKS, EKS and MKS)    YES       YES        YES
Nodegroup (AKS and EKS)       YES       YES        YES

Below is an example using the --wait flag to block until the EKS cluster is in ready status:

./rctl create cluster eks eks-cluster demo-credential --region us-west-2 --node-ami-family AmazonLinux2  --wait
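
The same flag is useful for sequencing dependent steps in a pipeline. An illustrative sketch, relying on the delete and create support for --wait listed in the table above:

# Block until the old cluster is fully deleted before recreating it from spec
./rctl delete cluster demo-cluster --wait
./rctl create cluster -f cluster-spec.yml --wait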