


25 June, 2021

Upstream Kubernetes

New Kubernetes Versions

In addition to Kubernetes 1.18.x, 1.19.x and 1.20.x, customers can now provision clusters based on Kubernetes 1.21.x. Existing upstream Kubernetes clusters running older versions of Kubernetes can be seamlessly upgraded "in-place" to 1.21.x.

k8s v1.21

Operating Systems

Customers can now provision upstream Kubernetes clusters on instances with Ubuntu 20.04 LTS operating system.

Ubuntu 20.04 LTS

Select Container Network Interface (CNI)

Users who wish to override the default CNI can select one from the list of supported CNIs during cluster provisioning.

CNI Selection

IPv4/6 Dual Stack Support

Users can enable IPv4/IPv6 dual-stack networking on Kubernetes v1.21 or higher. This allows the simultaneous assignment of both IPv4 and IPv6 addresses to Pods and Services.
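For example, a dual-stack Service on a Kubernetes v1.21 cluster can request both address families using the standard upstream fields (the service name and selector below are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc                    # hypothetical service name
spec:
  ipFamilyPolicy: PreferDualStack   # assign both IPv4 and IPv6 where available
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: demo                       # hypothetical pod label
  ports:
  - port: 80
```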

IPv6 Support

Amazon EKS

Graviton2 ARM-based Nodes

Users can now provision and manage node groups based on AWS Graviton2 ARM-based processors. Watch a video of this feature in action.

Forward Proxy

Organizations that are required to use a forward proxy for all outbound connections from their EKS clusters (e.g. to the SaaS controller) can now configure and enable this during cluster provisioning.

EKS Forward Proxy

Support for k8s 1.20

Users can now provision EKS clusters based on Kubernetes 1.20 and also seamlessly upgrade their existing fleet to Kubernetes 1.20.

Infra GitOps

Users can now upgrade their EKS clusters to a new version of Kubernetes by simply updating the cluster specification file in their Git repo.
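As a sketch of this workflow, the upgrade can be triggered by bumping the Kubernetes version field in the committed cluster specification and pushing the change; exact field names depend on the cluster spec schema, so treat this fragment as illustrative:

```yaml
kind: ClusterConfig
metadata:
  name: test-eks
  region: us-west-1
  version: "1.20"   # bumped from "1.19"; committing this change triggers the upgrade
```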

Scale to Zero

Organizations can now "scale down" their EKS clusters to "zero" worker nodes when not in use to save costs. The worker nodes can be "scaled up" anytime.

Zero Trust Kubectl (ZTKA)

Users with "Read Only" roles are now blocked from performing any operations (GET, LIST, VIEW) on Kubernetes secrets on remote clusters.

Workload Wizard

Users of the workload wizard can now specify the storage class for both "shared workload wide volumes" as well as "container volumes".

Workload Wide Shared Volumes

Container Volumes


RCTL CLI

Enhancements to the RCTL CLI focused on automation.

Project Lifecycle

Users with Org Admin roles can now use the RCTL CLI to "Create", "Read", "Update" and "Delete" Projects in their Orgs.

Credential Provider Lifecycle

Authorized users can now use the RCTL CLI to "Create", "Read", "Update" and "Delete" Cloud Credentials in their Projects.

Resource Sharing

Authorized users can now use the RCTL CLI to "Share" resources in their projects with "selected" or "all" projects in the Org.

Cluster Overrides

In addition to using the RCTL CLI, users can now use the Console to manage the lifecycle of cluster overrides for both "workloads" and "addons" in a cluster blueprint.

Cluster Overrides UI

Swagger APIs

The REST APIs have been enhanced to add support for repositories, CD agents, users, and groups.


Webhook payload details received by a pipeline as part of a trigger from an external Git repository are now available for users to view and analyze for troubleshooting purposes.

Webhook Payload

White Labeling

Enhancements to white labeling for partners and service providers

Product Docs URL

The product docs URL can now be white labeled, ensuring that only the partner's branding is presented to their end users/customers.

Bug Fixes

Bug ID Description
RC-10678 Shared volume in the workload wizard is not working as the PVC's access mode is set to RWO and multiple pods cannot mount them
RC-10621 Git repo based workload failed to deploy if the commit message has special character
RC-10485 SSO User login with remember me causes the login button to get stuck after logout
RC-10371 When adding a new blueprint version, the addon version dropdown always shows only the current selected version
RC-10276 Failed to run conjurer in Ubuntu Server 18.04.5 in Bare metal node
RC-10272 Unknown cluster health count is not shown in the Org and Project dashboard cluster card
RC-10078 Staging: gitops job UI displayed issue when 1 job took a long time to finish
RC-9691 Infrastructure -> PSP -> When we exceed the name field length, no error is raised
RC-8407 Inconsistent data shown in Rafay for cpu and memory resources of the node
RC-8350 Improve overall page loading performance for the Console

v1.5 Patch 2

21 May, 2021

Vault Namespaces

Organizations that use HashiCorp Vault Enterprise or HCP Vault and implement Vault as a "service" internally can use "vault namespaces" to provide tenant-level isolation across teams/BUs or applications. The controller's integration with HashiCorp Vault now supports the use of vault namespaces.

Vault Namespace

v1.5 Patch 1

17 May, 2021

Bug Fixes

Bug ID Description
RC-9615 White Labeling fixes for Ingress configuration in workload wizard
RC-9616 White Labeling fixes for container configuration in workload wizard
RC-9617 White Labeling fixes for container auto scaling configuration in workload wizard
RC-9618 Description for Minimal blueprint needs to be corrected
RC-10272 Unknown cluster health count is not shown in the Org and Project dashboard cluster card


07 May, 2021

Home Page

When users log in to the console, they are now presented with a new home page with navigation options to quickly reach the project(s) they want to access.

Login Home Page


Organization dashboard

Organization Administrators will have access to an organization wide dashboard that will provide a bird’s eye view of resources across all projects.

Org Dashboard

Project dashboard

Project admins will have access to a project wide dashboard providing a bird’s eye view into all resources in the project.

Project Dashboard

Upstream Kubernetes

Customizable Retry Thresholds

For cluster provisioning and node additions in remote edge environments with slow or unreliable network connectivity, administrators can now specify retry thresholds for initial cluster provisioning and addition of new nodes. This ensures that provisioning and node additions keep retrying until they succeed or the specified threshold is met.

New Kubernetes Versions

In addition to Kubernetes 1.17.x, 1.18.x and 1.19.x, customers can now provision clusters based on Kubernetes 1.20.x (based on the containerd CRI).

k8s v1.20

Existing upstream Kubernetes clusters can be seamlessly upgraded to 1.20.x.

Upgrade to k8s v1.20

Custom Pod and Service Subnets

For scenarios where the customer's internal LAN subnet is the same as the default CIDR for the CNI, administrators can now specify a custom CIDR block for pod and service subnets during cluster provisioning.
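As an illustration, a custom CIDR configuration might look like the following; the field names here are illustrative placeholders, not the product's exact schema:

```yaml
network:
  # Chosen so they do not overlap the site's internal LAN subnet
  podSubnet: 10.244.0.0/16      # CIDR block for Pod IPs
  serviceSubnet: 10.96.0.0/12   # CIDR block for Service (ClusterIP) addresses
```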

Custom CIDR

Amazon EKS

Kubernetes Versions

Amazon EKS clusters can now be provisioned based on Kubernetes 1.19. Existing EKS clusters based on older versions of Kubernetes that are managed by the controller can be seamlessly upgraded to Kubernetes 1.19.

EKS k8s 1.19

Storage Classes

Worker nodes can now be provisioned with support for Amazon's gp3 EBS volume type.

Spot Instances

Managed node groups can now be provisioned to use spot instances for significant cost savings.
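In eksctl-style ClusterConfig terms, a Spot-backed managed node group looks roughly like the following; the node group name, instance types, and sizes are placeholders:

```yaml
managedNodeGroups:
  - name: spot-ng                            # placeholder name
    spot: true                               # request Spot capacity instead of On-Demand
    instanceTypes: ["t3.large", "t3a.large"] # several types improve Spot availability
    minSize: 1
    maxSize: 5
    desiredCapacity: 2
```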

Cloud Credentials

Administrators can quickly identify the cloud credentials associated with a managed EKS Cluster on the web console. They are also provided with an intuitive workflow to replace/switch cloud credentials after a cluster has been provisioned, giving them flexibility in ongoing operations.

EKS Cloud Credentials

Windows Node group Support

Administrators can now provision and manage self-managed Windows node groups, allowing them to deploy and operate Windows-based containers on managed EKS clusters.

Windows Node Group

Advanced Customization

Administrators can also now optionally view, edit and perform advanced customization of the cluster's configuration on the controller to provision a cluster or to add a new node group. They can also programmatically download and save the cluster specification of an active cluster in a version-controlled Git repository. Examples of advanced customization options are available for Fargate profiles and user data for customization of EC2-based worker nodes.

# Usage: rctl create cluster eks -f ./test-eks-cluster.yaml
kind: Cluster
metadata:
  labels:
    env: dev
    type: eks-workloads
  name: test-eks
  project: defaultproject
spec:
  type: eks
  cloudprovider: dev-aws
  blueprint: standard-blueprint
---
kind: ClusterConfig
metadata:
  name: test-eks
  region: us-west-1
  tags:
    'app': 'demo'
    'owner': 'myowner'
vpc:
  subnets:
    private:
      us-west-1a:
        id: subnet-xxxxxxxxxxxxxxxxx
      us-west-1b:
        id: subnet-xxxxxxxxxxxxxxxxx
    public:
      us-west-1a:
        id: subnet-xxxxxxxxxxxxxxxxx
      us-west-1b:
        id: subnet-xxxxxxxxxxxxxxxxx
iam:
  serviceRoleARN: arn:aws:iam::xxxxxxxxxxxx:role/<IAM_ROLE_NAME>
nodeGroups:
  - name: nodegroup-4
    instanceType: t3.xlarge
    desiredCapacity: 1
    minSize: 1
    maxSize: 3
    iam:
      instanceProfileARN: arn:aws:iam::xxxxxxxxxxxx:instance-profile/<IAM_INSTANCE_PROFILE_NAME>
      instanceRoleARN: arn:aws:iam::xxxxxxxxxxxx:role/<IAM_ROLE_NAME>
    volumeType: gp3
    volumeSize: 50
    privateNetworking: true
    volumeEncrypted: true
    volumeKmsKeyID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    labels:
      'app': 'myapp'
      'owner': 'myowner'
    ssh:
      allow: true
      publicKeyName: demo
    securityGroups:
      attachIDs:
      - sg-abc134
      - sg-def345
secretsEncryption:
  # ARN of the KMS key
  keyARN: "arn:aws:kms:us-west-1:000000000000:key/00000000-0000-0000-0000-000000000000"

In-Place Upgrade Enhancements

Keep your worker node OS patched and up to date. Perform seamless AMI updates for both managed and self-managed node groups, covering both EKS-optimized AMIs and custom AMIs.

AMI Upgrades

Force Delete

For scenarios where the underlying infrastructure in AWS has been deleted out-of-band or if the access credentials have been revoked, the administrator can now force delete the cluster in the controller.

Backup and Restore

Turnkey workflows are now available for cluster disaster recovery use cases such as cluster migration and cluster cloning, ensuring that these can be performed by admins in a reliable and standardized manner. Users are now provided with a 1-click workflow to initiate and perform restore operations from a backup.

1-Click Restore

Cluster Blueprints

Addons from repository

In addition to uploading Helm and k8s yaml artifacts for addons to the controller, addons can now also be created by referencing artifacts in Git and Helm repositories, in conformance with the GitOps paradigm.

Addons Repository

Minimal Cluster Blueprint

In addition to the default cluster blueprint, administrators now also have the option to select a minimal cluster blueprint. This is a lightweight blueprint that does not come with addons for monitoring, logging, etc., and is well suited for resource-constrained Kubernetes deployments and for environments where organizations have existing solutions for critical capabilities such as monitoring and logging. Note that for clusters with the minimal blueprint, the cluster dashboards will provide significantly scaled-down visualization and metrics.

Minimal Blueprint

Search for Addons and Blueprints

It is common for organizations to have hundreds of addons and blueprints. Administrators can now leverage the built-in search capability to quickly find what they are looking for in the available list of Addons and Blueprints. Further, while adding specific addons to a cluster blueprint, administrators can use search to quickly find and select the relevant addons, resulting in increased productivity and a better user experience.

Addon Dependency Management

In a blueprint that comprises multiple addons, there are situations where certain addons can be applied only when other addons are already deployed and operational on the cluster. This calls for an acyclic graph execution model wherein some components in a blueprint can be created/updated in parallel while others wait on the availability of their prerequisites. Administrators can now specify dependencies while creating a blueprint, and the controller will implement dependency management across addons during cluster management operations.
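As a sketch, a blueprint with dependencies could be declared along these lines; the `dependsOn` field name and addon names are illustrative, not the exact product schema:

```yaml
blueprint:
  name: custom-blueprint
  addons:
    - name: cert-manager        # no prerequisites; applied first
    - name: ingress-nginx
      dependsOn: [cert-manager] # applied only once cert-manager is operational
    - name: monitoring          # independent; can be applied in parallel
```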

Addon Dependency

Cluster overrides for addons

When deploying addons to a fleet of clusters, there can be situations where certain resources need to be customized at the cluster level (for a single cluster or a group of clusters). With cluster overrides, the same addon can now be deployed on a fleet of clusters with configurations that differ on a cluster-to-cluster basis. Internally, this feature uses the generic capability of k8s labels and label selectors to match resources, replaceable values, and target clusters, making it flexible enough to apply to a wide range of customer scenarios.
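Conceptually, an override pairs replacement values with label selectors for the target resources and clusters; the field names and label keys below are illustrative placeholders:

```yaml
override:
  resourceSelector: addon=logging-agent   # label selector for the addon's resources
  clusterSelector: region=eu-west         # label selector for the target clusters
  overrideValues: |
    logLevel: warn                        # value customized per matching cluster
```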


Cluster overrides for workloads

When deploying workloads to a fleet of clusters, certain resources may similarly need to be customized at the cluster level. With cluster overrides, the same workload can now be deployed on a fleet of clusters with configurations that differ on a cluster-to-cluster basis, using the same k8s label and label-selector mechanism to match resources, replaceable values, and target clusters.

Debug and Troubleshooting enhancements

An intuitive and detailed debug workflow powered by the underlying zero trust control channel is now available for workloads. This provides users with end-to-end traceability and detailed visibility into all k8s resources associated with a workload. Users can also efficiently debug and troubleshoot issues using built in conveniences for viewing “k8s events”, “logs” and even perform remote kubectl exec operations on remote containers at the click of a button. In addition to current state, users are also provided insight into trends of critical k8s resources associated with their workloads.

Workload Debug

Multiple values in Helm3 workloads

Support for linking multiple values files with a single Helm chart, facilitating advanced Helm 3 chart customizations.

Multiple Values


CD Agent Lifecycle Management

Support for multiple versions of agents on the same cluster and the ability to activate/deactivate specific agents. Administrators can now see the exact version of the CD agents and manage the lifecycle of each agent individually, giving them fine-grained control.

GitOps Agent

Workload templates

In some cases, organizations need to associate the same workload with different pipelines. For example, separate pipelines for dev, staging and production. Instead of creating and maintaining separate workloads per pipeline, users can create a workload template that can be associated with one or more pipelines with customizable values. The customizable values can be either provided at configuration time or can be dynamically populated by the system based on evaluation of custom variables and expressions.

Workload Templates

Webhooks for GitLab repository

Adds first class support for managing webhooks from GitLab repositories for GitOps pipeline triggers.

GitLab Webhook

Infra Provisioning Stage

Support for a generic Terraform provisioning stage to plan and apply infrastructure changes as part of the GitOps continuous delivery pipeline. Users have the option to configure Terraform stages and link them with approval and workload deployment stages to realize a highly customizable and effective continuous delivery pipeline.

Infra Provisioner

Stage preconditions

Facilitates conditional execution of stages in a pipeline. Users can attach one or more conditions to a stage in their GitOps pipeline; the stage is executed only if the conditions are satisfied at runtime.

Stage Preconditions

Approval workflow enhancements

In the approval stage, customers can now specify one or more users as approvers. Only these users have the privilege to approve once the workflow reaches the approval stage. If more than one user is in the approvers list, approval from any one of them is sufficient. To model a workflow where multiple approvals are mandatory, customers can link multiple approval stages in sequence, each with specific users.

Users for Approval Stage

Role Based Access Control

Resource sharing and Governance

Create, manage and share organization-level objects such as cloud credentials, clusters, blueprints and addons with all or specifically identified projects in the Organization. This workflow enables organizations to implement and centralize standards across all projects in their organization, achieve governance and enforce policies.

Resource Sharing

API keys for SSO users

Support for management of API keys for Single Sign-On (SSO) users. SSO users can now use the RCTL CLI, which uses API keys to make REST API calls to the controller for day-to-day operations.

Alerts for Project Users

Users with "project admin" or "read only project admin" roles are now provided visibility into pre-filtered audit logs for the projects they have access to.

Alerts for Project Admins


Forward Proxy Support

Organizations that require the use of a forward proxy for all outbound HTTPS requests to the Internet can now explicitly specify the forward proxy details for ongoing control channel communications between the managed cluster and the SaaS controller.

Forward Proxy

Alerts & Audits

Whitelabel support for email communications

Details in email notification templates for alerts and approvals for GitOps pipelines can now be configured and customized per partner.

Pipeline agent health checks

Alerts are now automatically generated if the CD agent's health deteriorates and requires immediate attention.

Audit trail for namespaces

Audit events are generated and captured for administrative actions on namespaces.

Bug Fixes

Bug ID Description
RC-10110 API only Namespace Admin user is not able to download kubeconfig via API or RCTL
RC-10104 CPU and Memory Utilization query issue with Multi container Pods - should include the pod name along with container name
RC-9982 Pipeline approval email is not working for partners
RC-9962 For helm repo workload/addon, if there is no Chart Version provided, pull the latest version of the helm chart
RC-9955 Crictl warning for runtime with K8S 1.20.5
RC-9826 Create Project: Do not Allow use of Capitals in Name
RC-9743 When there are multiple resources in a template with helm hooks, not all the resources in the file get installed in the cluster
RC-9706 Root filled in one day by syslog
RC-9705 Opening a project shows overlapping names
RC-9703 Alert emails have typos in the Possible Impact section
RC-9702 Cluster -> Resources -> Workloads -> Published Date is not correct
RC-9700 Workload Publish Success false positive when updating workload
RC-9698 Pod email alert message for the Possible Impact section should be corrected
RC-9697 Swagger documentation update for delete cluster operation
RC-9696 Alert emails -> When cluster/pod health is restored the email has wrong
RC-9692 Infrastructure -> PSP -> Unable to delete PSP which name is more than 63 characters
RC-9691 Infrastructure -> PSP -> When we exceed the name field length, no error is raised
RC-9689 GitOps Pipelines -> Create new pipeline -> Description field doesn't seem to have a char limit
RC-9687 Integrations -> Registry -> When we exceed the name field length, incorrect error is raised
RC-9686 System -> Projects -> When we exceed the name or description field length, incorrect error is raised
RC-9685 System -> Users -> Unable to search by firstname or lastname
RC-9684 System -> Users -> When we exceed the fields length, incorrect error is raised
RC-9683 System -> Users -> Alerts Recipients -> Email field should have limit on max character
RC-9656 Search for Workload with hostname does not work if giving the whole link of the hostname
RC-9653 Workload -> Debug : Age is not shown in days
RC-9652 Placement policy using value only label does not work for workload
RC-9651 Rafay Clusters - blueprint filter is not returning the same results
RC-9647 Remove Monitors setting for the cluster from UI
RC-9646 Remove maintenance mode from UI
RC-9645 Pods, Events and Trends icons are not working
RC-9644 PV are not displayed in the cluster resources tab
RC-9641 Systems > Alerts are not filtered correctly by severity
RC-9640 Workload count from the list view is not displaying the correct values
RC-9639 System -> Audit logs -> Kubectl logs filter issues
RC-9636 Empty Error red message box displayed when creating the invalid user
RC-9635 Right after the workload is published, check the events in the Cluster Dashboard > Resource > Namespace, the "Age" show weird time "-1y-1d" for the 1st minute
RC-9591 Zero trust kubectl access to cluster lost for 15 mins
RC-9449 Managed ingress doesn't get enabled from RCTL
RC-9394 Enable ALB and EFS additional IAM roles for EKS cluster does not take effect
RC-9390 UI: Hide EKS NodeGroup Node AMI Family option depend on version selected for the EKS cluster
RC-9375 Exclude kube-system, rafay-system, rafay-infra from secret admission webhook
RC-9357 Not able to create an API key for org admin user using partner admin API Key
RC-9277 Kubectl console from the UI is overlapping with the dashboard
RC-9276 Pagination is not reset when changing filter in the cluster Resources Dashboard page
RC-9269 Swagger-API: Cleanup warnings/errors during generation of SDK
RC-9234 Workload with registry integration is failed to publish due to client pool timed out
RC-9231 Should validate addon name to under 63 characters to avoid blueprint sync status stuck in In Progress forever
RC-9192 Workload in Publish tab shows Config changed republish though no config is changed; is_dirty flag is getting set to true
RC-8935 Run 'vault-init' and 'vault-sidecar' container as non privileged, nonroot user
RC-8409 Clusters view is not displayed correctly in window mode
RC-7702 End of Life Software =IP.I.15 for openresty


05 Apr, 2021

No new features were introduced in this patch.

Bug Fixes

Bug ID Description
RC-9743 For helm3 workloads, when there are multiple resources defined in a template, helm hooks are not getting executed

QCOW and OVA Image Updates

01 Apr, 2021

QCOW Image Update

Updated qcow and ova images (v1.4) are now available to customers. This is primarily an ongoing security update that incorporates the latest OS kernel updates, container images and refreshes the OS packages.

This is a packaging-only release focused on ensuring that newly provisioned clusters based on the qcow and ova images will not require post-provisioning kernel-level security patches (and the associated reboots).

Bug Fixes

This update does not incorporate any new features or bug fixes. The exact list of file changes in the updated qcow image will be provided to customers and partners upon request.

v1.4.0 QCOW Image

24 Feb, 2021

QCOW Image Update

An updated qcow image (v1.4) is now available to customers. This is primarily an ongoing security update that incorporates the latest OS kernel updates, container images and refreshes the OS packages.

This is a packaging-only release focused on ensuring that newly provisioned qcow image based clusters will not require kernel-level security patches to be applied post cluster provisioning.

Bug Fixes

This update does not incorporate any new features or bug fixes. The exact list of file changes in the updated qcow image will be provided to customers and partners upon request.


19 Feb, 2021

Amazon EKS

The RCTL CLI based lifecycle management of Amazon EKS clusters has been enhanced to add support for "Volume Encryption", "GP3" and "Envelope Encryption of Secrets in etcd". All customers are recommended to update to the latest version of RCTL. View additional details here.

Kubernetes Patches

Support for the latest patch releases of upstream k8s: v1.19.7, 1.18.15, and 1.17.17. Customers are recommended to upgrade their managed clusters as quickly as possible to ensure they have the latest updates.

k8s Upgrades

Upgrades of managed upstream k8s clusters are performed "in-place" with "zero downtime" and complete in just a few minutes. See the screenshot below for an example.

k8s Upgrades

Bug Fixes



9 Feb, 2021

Options for Blueprints

The log aggregation addon is no longer mandatory in the default cluster blueprint. Users can optionally deselect this addon in their custom blueprints. This can be useful for deployments where organizations have standardized on an alternate log aggregation technology.

Optional Log Aggregation Addon

Defaults for OVA based Clusters

Default settings for the OVA based cluster provisioning wizard have been updated to streamline the user experience. With this update, users can provision OVA image based clusters in a single click.

Bug Fixes



27 Jan, 2021

No new features were introduced in this patch.

Bug Fixes

Bug ID Description
RC-9331 UI sets the wrong IP address format when the interface name is long causing cluster provisioning failures
RC-9233 Change the UI labels to reflect the right units for the workload custom container image
RC-9543 Blank page on session expiry at console login page