
Part 2: Scale

This is Part 2 of a multi-part, self-paced quick start exercise. This part focuses on using the RCTL command line interface (CLI) to scale an EKS cluster.


What Will You Do

In this guide, you will:

  • Scale a managed node group
  • Add a spot instance node group to the cluster
  • Remove a spot instance node group from the cluster

Assumptions

  • You have completed Part 1 of this series and provisioned an EKS cluster
  • You have downloaded the RCTL CLI
  • You have downloaded and initialized the CLI configuration

Note

The instructions describe the process using the RCTL CLI. The same steps can be performed using the web console.


Step 1: Scale Nodes

In this step, we will scale the number of nodes within the cluster. You can scale the number of nodes up or down, depending on your needs. In this example, we will scale the number of nodes down to 1.

Download the cluster config from the existing cluster

  • Go to Infrastructure -> Clusters. Click on the settings icon of the cluster and select "Download Cluster Config"
  • Update the downloaded specification file with the new number of desired nodes
managedNodeGroups:
  - name: ng-1
    desiredCapacity: 1

The updated YAML file will look like this:

kind: Cluster
metadata:
  labels:
    env: dev
    type: eks-workloads
  name: test-eks
  project: aws
spec:
  type: eks
  cloudprovider: dev-aws
  blueprint: default
---
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test-eks
  region: us-west-1
  tags:
    'demo': 'true'
managedNodeGroups:
  - name: ng-1
    instanceType: t3.large
    desiredCapacity: 1
  • Execute the following command to scale the number of nodes within the cluster node group

./rctl apply -f eks-test-config.yaml
Expected output (with a task id):

Cluster: test-eks
{
  "taskset_id": "72d3dkg",
  "operations": [
    {
      "operation": "NodegroupScaling",
      "resource_name": "ng-1",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}
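The "taskset_id" in the response can be used to track the scaling operation. As a minimal sketch (parsing the sample response shown above rather than a live rctl call, since status-checking subcommands are outside the scope of this part), the id can be extracted with python3:

```shell
# Extract the taskset_id from the apply response. This uses the sample JSON
# shown above; in practice you would capture the output of `./rctl apply`
# and strip the leading "Cluster: test-eks" line before parsing.
response='{"taskset_id": "72d3dkg", "status": "PROVISION_TASKSET_STATUS_PENDING"}'
taskset_id=$(printf '%s' "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin)["taskset_id"])')
echo "$taskset_id"
```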

From the web console, we can see that the number of nodes in the node group has been scaled down to 1

Verify Node Count


Step 2: Add Node Group

In this step, we will add a spot instance node group to the cluster. We will modify the specification file that was applied in Step 1.

  • Add the following node group configuration code to the previously applied cluster specification file
nodeGroups:
  - name: spot-ng-1
    minSize: 2
    maxSize: 4
    instancesDistribution:
      maxPrice: 0.03
      instanceTypes: ["t3.large"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
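The instancesDistribution settings control how the group's capacity is split between on-demand and spot instances: onDemandBaseCapacity sets a fixed floor of on-demand instances, onDemandPercentageAboveBaseCapacity sets the on-demand share of capacity beyond that floor, and spotInstancePools sets how many spot capacity pools to diversify across. The split can be sketched with simple arithmetic (illustrative only, not an AWS call; the instance counts are assumed for the example):

```shell
# With onDemandBaseCapacity=0 and onDemandPercentageAboveBaseCapacity=50,
# a group running 4 instances ends up with 2 on-demand and 2 spot instances.
desired=4; base=0; pct=50
above=$(( desired - base ))                 # capacity beyond the on-demand base
on_demand=$(( base + above * pct / 100 ))   # exact for even splits; EC2 Auto Scaling defines rounding otherwise
spot=$(( desired - on_demand ))
echo "on-demand=$on_demand spot=$spot"
```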

The fully updated cluster specification file including the newly added spot instance node group code will look like this:

kind: Cluster
metadata:
  labels:
    env: dev
    type: eks-workloads
  name: test-eks
  project: aws
spec:
  blueprint: default
  cloudprovider: dev-aws
  type: eks
---
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test-eks
  region: us-west-1
  tags:
    demo: "true"
managedNodeGroups:
- desiredCapacity: 1
  instanceType: t3.large
  name: ng-1
nodeGroups:
- name: spot-ng-1
  minSize: 2
  maxSize: 4
  instancesDistribution:
    maxPrice: 0.03
    instanceTypes: ["t3.large"]
    onDemandBaseCapacity: 0
    onDemandPercentageAboveBaseCapacity: 50
    spotInstancePools: 2
  • Execute the following command to create the spot instance node group

./rctl apply -f eks-test-config.yaml
Expected output (with a task id):

Cluster: test-eks
{
  "taskset_id": "g29j3m0",
  "operations": [
    {
      "operation": "NodegroupCreation",
      "resource_name": "spot-ng-1",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}

From the web console, we can see that the new node group is being created. This could take up to 15 minutes to complete.

Verify Node Count

Monitor the web console until the node group has been created

Verify Node Count


Step 3: Remove Node Group

In this step, we will remove the spot instance node group from the cluster. To do this, we simply delete from the specification file the node group section that was added in Step 2.

  • Remove the following node group configuration code from the previously applied cluster specification file
nodeGroups:
  - name: spot-ng-1
    minSize: 2
    maxSize: 4
    instancesDistribution:
      maxPrice: 0.03
      instanceTypes: ["t3.large"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2

The updated cluster specification file with the removed spot instance node group code will look like this:

kind: Cluster
metadata:
  labels:
    env: dev
    type: eks-workloads
  name: test-eks
  project: aws
spec:
  blueprint: default
  cloudprovider: dev-aws
  type: eks
---
apiVersion: rafay.io/v1alpha5
kind: ClusterConfig
managedNodeGroups:
- desiredCapacity: 1
  instanceType: t3.large
  name: ng-1
metadata:
  name: test-eks
  region: us-west-1
  tags:
    demo: "true"
  • Execute the following command to remove the spot instance node group

./rctl apply -f eks-test-config.yaml
Expected output (with a task id):

Cluster: test-eks
{
  "taskset_id": "gkj60m0",
  "operations": [
    {
      "operation": "NodegroupDeletion",
      "resource_name": "spot-ng-1",
      "status": "PROVISION_TASK_STATUS_PENDING"
    }
  ],
  "comments": "The status of the operations can be fetched using taskset_id",
  "status": "PROVISION_TASKSET_STATUS_PENDING"
}
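Each entry in the "operations" array describes one pending change to the cluster. A small sketch that lists them from the sample response above (again using python3 on the JSON body, not a live rctl call):

```shell
# List each operation, its target resource, and its status from the sample
# apply response shown above.
response='{"operations": [{"operation": "NodegroupDeletion", "resource_name": "spot-ng-1", "status": "PROVISION_TASK_STATUS_PENDING"}]}'
printf '%s' "$response" | python3 -c '
import json, sys
for op in json.load(sys.stdin)["operations"]:
    print(op["operation"], op["resource_name"], op["status"])
'
```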

From the web console, we can see that the spot instance node group is being removed

Verify Node Count

Monitor the web console until the node group has been removed. You will only see one node group remaining.


Recap

Congratulations! At this point, you have:

  • Successfully scaled a managed node group to include the desired number of nodes
  • Successfully added a spot instance node group to the cluster to take advantage of discounted compute resources
  • Successfully removed a spot instance node group from the cluster