Releases - Sept 2023¶
v1.29 Preview - SaaS¶
21 Sept, 2023
Full-stack environment provisioning through templates¶
A typical operating environment for an application includes a mix of K8s and non-K8s infrastructure resources. Environment Manager allows platform teams to stitch these resources together into full-stack environment templates that contain all necessary dependencies, policies, and configuration.
Self-service for application teams¶
Platform teams can expose the environment templates and enable a one-click workflow for application teams to provision the environments required for their applications. This accelerates developer productivity and agility while ensuring that the necessary guardrails are in place.
Flexible framework to build templates¶
Through a combination of platform constructs such as contexts, static resources, resource templates, and environment templates, Environment Manager provides a flexible framework for platform teams to build ready-to-use environment templates.
Leverage existing Terraform IaC Templates¶
Environment Manager supports TF as the provider. Any existing TF IaC artefacts can be easily leveraged to build resource/environment templates.
Environment provisioning in private data centers¶
The solution also supports scenarios where TF artefacts reside in private repositories and the environments need to be provisioned in a private data center.
Learn more about this new service here.
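As a sketch of how an existing TF artefact might be wrapped into a resource template, a template could simply expose the inputs and outputs of a module already in the repository. The module path, variables, and outputs below are assumptions for illustration, not the product's actual schema:

```hcl
# Hypothetical wrapper around an existing Terraform module.
# Module source and variable names are illustrative only.
module "app_network" {
  source = "./modules/vpc"          # existing IaC artefact in the repo

  cidr_block  = var.cidr_block      # exposed as a template input
  environment = var.environment     # e.g. "dev", "staging", "prod"
}

output "vpc_id" {
  value = module.app_network.vpc_id # consumed by downstream resources
}
```

The template inputs become the knobs that platform teams expose to application teams, while the module internals stay under platform control.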
Role ARN based cloud credentials¶
To make it easier to identify role ARN-based cloud credentials, the UI has been enhanced to display the account ID within the role ARN. By default, the account ID is masked. Users can see the details by clicking "Show ARN."
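As a rough illustration of the masking behaviour (not the product's implementation), the 12-digit account ID embedded in an IAM role ARN can be masked like this:

```python
import re

def mask_account_id(role_arn: str) -> str:
    """Mask the 12-digit AWS account ID inside a role ARN.

    Illustrative sketch only -- it mirrors the masking behaviour
    described above, not the product's actual code.
    """
    # Role ARNs look like: arn:aws:iam::123456789012:role/MyRole
    return re.sub(r"(?<=::)\d{12}(?=:)", "************", role_arn)

print(mask_account_id("arn:aws:iam::123456789012:role/backup-role"))
# arn:aws:iam::************:role/backup-role
```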
Amazon EKS and Azure AKS¶
Several improvements have been added to the Fleet Operations feature with this release.
Learn more about this capability here.
Google GKE¶
New GKE clusters can now be provisioned based on Kubernetes v1.27.
Only new cluster provisioning is supported for Kubernetes v1.27.x. Support for in-place upgrades of existing clusters managed by the controller to Kubernetes v1.27 will be available in an upcoming release.
Auto upgrade of nodes¶
In this release, we have added an option for automatic upgrade of nodes as part of node pool configuration. This feature will help you keep the nodes in your cluster up-to-date with the cluster control plane version.
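For reference, the equivalent toggle in the upstream GKE Terraform provider looks like the following sketch; the resource and cluster names are placeholders:

```hcl
# Hypothetical node pool with auto-upgrade enabled; names are
# illustrative. The management block is the upstream GKE setting
# that keeps nodes in step with the control plane version.
resource "google_container_node_pool" "primary" {
  name    = "primary-pool"
  cluster = google_container_cluster.demo.name

  management {
    auto_upgrade = true
    auto_repair  = true
  }
}
```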
Upstream Kubernetes for Bare Metal and VMs¶
New upstream clusters can be provisioned based on Kubernetes v1.28.x. Existing upstream Kubernetes clusters managed by the controller can be upgraded in-place to Kubernetes v1.28.
Upstream Kubernetes clusters based on Kubernetes v1.28 (and prior Kubernetes versions) are fully CNCF conformant.
Node labels and Node taints¶
A previous release included the ability to add and delete node labels and node taints (includes Day 2 support) using RCTL. This release extends the ability to do so via the UI.
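In upstream Kubernetes terms, the labels and taints applied through the UI or RCTL correspond to a node pool fragment like the following; the keys, values, and effect are illustrative:

```yaml
# Illustrative node pool labels and taints (generic Kubernetes
# shape, not the product's exact spec).
labels:
  tier: backend
taints:
  - key: dedicated
    value: gpu
    effect: NoSchedule   # pods without a matching toleration are not scheduled
```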
Upgrade plan optimization improvements¶
A previous release added an option for users to orchestrate node upgrades in parallel. This release includes several UX improvements to this feature and adds the ability to orchestrate node groups concurrently via RCTL.
Machine Health checks (MHC)¶
The ability to configure Machine Health Checks has been added for vSphere clusters with this release. Users can set conditions for identifying unhealthy machines in the cluster and trigger automatic remediation to enhance cluster health and reliability.
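In upstream Cluster API terms, the unhealthy-machine conditions described above map to a MachineHealthCheck resource like the sketch below; the names and label selector are placeholders:

```yaml
# Illustrative Cluster API MachineHealthCheck; metadata and selector
# values are placeholders.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: vsphere-workers-mhc
spec:
  clusterName: demo-cluster
  selector:
    matchLabels:
      node-pool: workers
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s       # remediate if NotReady for 5 minutes
    - type: Ready
      status: Unknown
      timeout: 300s
```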
Backup and Restore¶
It is now possible to automate the workflow for backup/restore operations (e.g. data backup location, policies) via Swagger APIs.
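A script driving those APIs would typically assemble a backup-job request body and POST it to the controller. The field names below are illustrative; consult the Swagger documentation for the actual schema:

```python
def build_backup_job(name: str, location: str, schedule: str) -> dict:
    """Assemble a request body for a hypothetical backup-job API call.

    Field names are assumptions for illustration -- the real schema
    is defined in the Swagger documentation.
    """
    return {
        "metadata": {"name": name},
        "spec": {
            "backupLocation": location,  # e.g. an object-store target
            "schedule": schedule,        # cron expression
        },
    }

job = build_backup_job("nightly", "s3-backups", "0 2 * * *")
print(job["spec"]["backupLocation"])  # s3-backups
```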
Lineage of resources¶
Support has been added to track the lineage information for resources during the initial Git-to-system sync. This is to ensure that resources aren't created/deleted inadvertently by the user.
Example scenarios include:
- A user creates a duplicate manifest file in the Git repo. The System Sync pipeline now shows an appropriate error message.
- A user edits the object name in a resource manifest. The System Sync pipeline prevents a new resource from being created and denies the action on the affected resource.
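The duplicate-manifest check in the first scenario can be sketched as a scan for repeated (kind, name) pairs; this is a simplified illustration, not the pipeline's actual implementation (which also tracks namespaces and file paths):

```python
from collections import Counter

def find_duplicate_resources(manifests):
    """Return (kind, name) pairs that appear in more than one manifest.

    Simplified sketch of the duplicate check described above.
    """
    keys = [(m["kind"], m["metadata"]["name"]) for m in manifests]
    return [key for key, count in Counter(keys).items() if count > 1]

manifests = [
    {"kind": "ConfigMap", "metadata": {"name": "app-config"}},
    {"kind": "ConfigMap", "metadata": {"name": "app-config"}},  # duplicate
    {"kind": "Secret", "metadata": {"name": "app-secret"}},
]
print(find_duplicate_resources(manifests))  # [('ConfigMap', 'app-config')]
```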
In certain scenarios, the webhook deployed to clusters to prevent configuration drift for add-ons and workloads needs to be disabled. This release provides the ability to do so as an Org- or Project-level configuration, or more granularly as part of the Blueprint configuration. By default, the drift webhook is enabled.
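Conceptually, the Blueprint-level toggle is a single boolean switch; the fragment below is a hypothetical sketch of that shape, and the field names are not the product's actual schema:

```yaml
# Hypothetical blueprint fragment -- field names are illustrative.
spec:
  driftWebhook:
    enabled: false   # default is true; disable only when required
```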
It is now possible to delete IDP users from the controller application. This enables platform admins to clean up IDP users who no longer exist, and ensures that if a user is recreated in the IDP portal and assigned a new group, the user does not retain access to resources associated with any previous group associations.
Several improvements have been implemented with this release to aid customers with 'cluster right-sizing' and 'application right-sizing' optimization exercises.
- Inclusion of additional columns around CPU and Memory utilization metrics
- Trend for efficiency scores around Cost, CPU and Memory
Limited Access - This capability is enabled selectively for Orgs and is not available to all Preview Orgs.
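Right-sizing dashboards typically derive an efficiency score by comparing actual usage against requests. The formula below is an illustrative sketch, not the product's exact scoring:

```python
def efficiency_score(requested: float, used: float) -> float:
    """Utilization as a percentage of the requested amount.

    Illustrative formula only -- the product's actual efficiency
    scoring for Cost, CPU, and Memory may differ.
    """
    if requested <= 0:
        return 0.0
    return round(min(used / requested, 1.0) * 100, 1)

# A pod requesting 2 CPU cores but averaging 0.5 cores is 25% efficient.
print(efficiency_score(requested=2.0, used=0.5))  # 25.0
```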
Additions to System Catalog¶
The System Catalog has been updated to add support for the following repositories.
v1.28 - Bug Fixes¶
| Bug ID | Description |
|---|---|
| RC-27250 | Unable to create a Cloud Credential for 'Data Backup' of type 'Role' via RCTL v3 or TF |
| RC-22284 | Pod status in the UI does not take the pod state into consideration |
| RC-29846 | EKS: Version mismatch for the control plane when the cluster is upgraded from the EKS console |
| RC-27783 | Storage request quotas for Namespace and Project are incorrectly sent from the UI to the backend |
| RC-27253 | Creating cloud credentials using RCTL or TF without the project sharing field configured throws an error |
| RC-22348 | When namespaces are implicitly created through add-ons, they are not synced back to the controller even with namespace sync configuration enabled |
| RC-22330 | RCTL apply using the v3 spec does not upload artifacts for helm3 add-ons |
| RC-21551 | No validation for the configuration of placement as part of a cluster override |
| RC-18635 | Error when setting "Cluster Endpoint Access" to "Allowed" in the EKS cluster template |
v1.1.17 - Terraform Provider¶
11 Sep, 2023
An updated version of the Terraform provider is now available. Please refer to the documentation for additional details.
This release includes the following enhancements:
- Support for the Fleet Plan resource
- Ability to specify environment and K8s distribution details when importing a cluster through TF
| Bug ID | Description |
|---|---|
| RC-20131 | Unable to create a deactivated GitOps Agent or deactivate an active GitOps Agent |
| RC-16258 | EKS: Unable to update tags for the cluster and NG after cluster provisioning |
| RC-22343 | EKS: TF apply action incorrectly throws an error when adding a new NG |
| RC-27276 | EKS: Even when a different version is specified for a NG, it is created with the version specified for the control plane |
| RC-27108 | EKS: Re-applying TF indicates changes in the add-on section after upgrading the EKS cluster, even when there are no changes |
| RC-28441 | Unable to update cluster labels |