
Supported Environments

Please review the information listed below to understand the supported environments and operational requirements.

Operating Systems

The supported operating systems are listed below.

Master (Control Plane)

  • Ubuntu 22.04 LTS (64-bit)
  • Ubuntu 20.04 LTS (64-bit)
  • Ubuntu 18.04 LTS (64-bit)
  • CentOS 7 (64-bit)
  • RHEL 8.x (64-bit)
  • RHEL 7.x (64-bit)

Worker Nodes

  • Ubuntu 22.04 LTS (64-bit)
  • Ubuntu 20.04 LTS (64-bit)
  • Ubuntu 18.04 LTS (64-bit)
  • CentOS 7 (64-bit)
  • RHEL 8.x (64-bit)
  • RHEL 7.x (64-bit)
  • Windows Server 2019 (64-bit)


Windows worker nodes require Kubernetes v1.23.x or higher and the Calico CNI.
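The version requirement above can be verified with a simple comparison; this is a minimal sketch in which CLUSTER_VERSION is a placeholder value that you would normally read from `kubectl version`:

```shell
# Sketch: check a cluster version against the Windows-node minimum (v1.23).
# CLUSTER_VERSION is an example placeholder; query your cluster for the real value.
MIN_VERSION="1.23"
CLUSTER_VERSION="1.24"

# sort -V orders version strings; if the minimum sorts first, the cluster qualifies.
lowest=$(printf '%s\n%s\n' "$MIN_VERSION" "$CLUSTER_VERSION" | sort -V | head -n1)
if [ "$lowest" = "$MIN_VERSION" ]; then
  echo "Windows workers supported (v$CLUSTER_VERSION >= v$MIN_VERSION)"
else
  echo "Upgrade required: v$CLUSTER_VERSION < v$MIN_VERSION"
fi
```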

Kubernetes Versions

The following versions of Kubernetes are currently supported; new clusters can be provisioned with any of these versions.

  • Four versions of Kubernetes are supported at any given time.
  • Once a new version of Kubernetes is added, support for the oldest version is removed.
  • Customers are strongly encouraged to upgrade their clusters to a supported version to continue receiving patches and security updates.

Version Support Timeline
v1.26.x EOL when v1.30.x is supported
v1.25.x EOL when v1.29.x is supported
v1.24.x EOL when v1.28.x is supported
v1.23.x EOL when v1.27.x is supported
v1.22.x EOL when v1.26.x is supported

Container Networking (CNI)

The following CNIs are supported for upstream Kubernetes on bare metal and VM-based environments.

CNI Description
Calico Recommended for both Linux and Windows nodes
Canal Calico + Flannel
Flannel Deprecated; not recommended for new clusters

CPU and Memory

The minimum resource requirements for a single-node, converged cluster using the "minimal" cluster blueprint are the following:

Resource Minimum
vCPUs per Node Two (2)
Memory per Node Four (4) GB


Provision additional resources if you plan to deploy other types of blueprints that add software to the cluster, such as monitoring or storage add-ons.
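The minimums in the table above can be checked directly on a Linux node; this sketch reads the CPU count and total memory from the OS:

```shell
# Sketch: check this Linux node against the "minimal" blueprint minimums
# (2 vCPUs, 4 GB memory). Run on the node itself.
MIN_VCPUS=2
MIN_MEM_GB=4

vcpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_gb=$(( mem_kb / 1024 / 1024 ))

echo "vCPUs: $vcpus (minimum $MIN_VCPUS)"
echo "Memory: ${mem_gb} GB (minimum $MIN_MEM_GB GB)"
if [ "$vcpus" -ge "$MIN_VCPUS" ] && [ "$mem_gb" -ge "$MIN_MEM_GB" ]; then
  echo "Node meets the minimal blueprint requirements"
else
  echo "Node is below the minimal blueprint requirements"
fi
```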


Nvidia GPUs compatible with Kubernetes are supported. Follow these instructions if your workloads require GPUs.

Container Runtime

Dockershim was deprecated in Kubernetes v1.20.x and has since been removed upstream. New clusters will be provisioned with the containerd CRI. When older versions of k8s are upgraded in place, they will also be migrated to the containerd CRI; customers should therefore account for their k8s resources being restarted during the upgrade.

"containerd" is a container runtime that implements the CRI spec. It pulls images from registries, manages them, and then hands off to a lower-level runtime (such as runc), which actually creates and runs the container processes. containerd was split out of the Docker project to make Docker more modular.
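One quick way to see which runtime a node is using is to look for the CRI socket; this is a minimal sketch using the conventional default socket paths, which may differ on your distribution:

```shell
# Sketch: detect which CRI socket is present on a node.
# The paths below are the conventional defaults for containerd, dockershim,
# and CRI-O; your installation may use different locations.
runtime="unknown"
for sock in /run/containerd/containerd.sock \
            /var/run/dockershim.sock \
            /var/run/crio/crio.sock; do
  if [ -S "$sock" ]; then   # -S tests for a Unix socket
    runtime="$sock"
    break
  fi
done
echo "CRI socket: $runtime"
```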

Inter-Node Networking

For multi node clusters, ensure that the nodes are configured to communicate with each other over all UDP/TCP ports.

Network Rules: Control Plane

Ensure that network rules on the control plane (master) nodes are configured for the ports and directions described below.

Protocol Direction Port Range Purpose
TCP Inbound 6443 k8s API Server
TCP Inbound 2379-2380 etcd Client API
TCP Inbound 10250, 10255 kubelet API
TCP Inbound 10259, 10251 kube-scheduler
TCP Inbound 10257, 10252 kube-controller-manager
UDP Inbound 8285 Flannel CNI
TCP Inbound 30000-32767 If nodePort needs to be exposed on control plane
TCP Inbound 9099 Calico CNI
TCP Inbound 5656 OpenEBS Local PV
UDP Inbound 4789 vxlan
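On RHEL/CentOS nodes these rules are typically applied with firewalld. This sketch only prints the corresponding firewall-cmd commands so they can be reviewed before running; the port lists mirror the control-plane table above:

```shell
# Sketch: generate firewalld commands for the control-plane inbound rules.
# Prints the commands for review; run them as root on firewalld-based systems.
TCP_PORTS="6443 2379-2380 10250 10255 10259 10251 10257 10252 30000-32767 9099 5656"
UDP_PORTS="8285 4789"

for p in $TCP_PORTS; do
  echo "firewall-cmd --permanent --add-port=${p}/tcp"
done
for p in $UDP_PORTS; do
  echo "firewall-cmd --permanent --add-port=${p}/udp"
done
echo "firewall-cmd --reload"
```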

Network Rules: Node

Ensure that network rules on the worker nodes are configured for the ports and directions described below.

Protocol Direction Port Range Purpose
TCP Inbound 10250, 10255 Kubelet API
TCP Inbound 30000-32767 NodePort Services
UDP Inbound 8285, 8472 Flannel CNI
TCP Inbound 8500 Consul
UDP Inbound 8600 Consul
TCP/UDP Inbound 8301 Consul
TCP Inbound 9099 Calico CNI
TCP Inbound 5656 OpenEBS Local PV
UDP Inbound 4789 vxlan
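On Ubuntu workers the equivalent rules are usually applied with ufw. As with the control-plane sketch, this only prints the commands for review; note that port 8301 (Consul) appears in both lists because it uses TCP and UDP, and ufw expresses port ranges with a colon:

```shell
# Sketch: generate ufw commands for the worker-node inbound rules.
# Prints the commands for review; run them as root on ufw-based systems.
TCP_PORTS="10250 10255 30000:32767 8500 8301 9099 5656"
UDP_PORTS="8285 8472 8600 8301 4789"

for p in $TCP_PORTS; do
  echo "ufw allow ${p}/tcp"
done
for p in $UDP_PORTS; do
  echo "ufw allow ${p}/udp"
done
```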

Forward Proxy

Enable and configure this setting if your instances cannot connect directly to the controller and all requests must be forwarded through a non-transparent proxy server.
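When the node's container runtime also has to pull images through the proxy, containerd is commonly given the proxy settings via a systemd drop-in. This is a minimal sketch that prints an example drop-in; the proxy address and NO_PROXY list are placeholders for your environment:

```shell
# Sketch: an example systemd drop-in for containerd behind a non-transparent proxy.
# proxy.example.com:3128 and the NO_PROXY entries are placeholders; after writing
# the file, reload systemd and restart containerd.
conf=$(cat <<'EOF'
# /etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,.svc,.cluster.local"
EOF
)
printf '%s\n' "$conf"
```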


Storage

Multiple turnkey storage integrations are available as part of the standard cluster infrastructure blueprint. These integrations dramatically reduce the operational burden associated with provisioning and managing Persistent Volumes (PVs), especially on bare metal and VM-based environments.

We have worked to eliminate the underlying configuration and operational complexity associated with storage on Kubernetes. From a cluster administrator perspective, there is nothing to do other than "select" the required option. These turnkey storage integrations also help ensure that stateful workloads can immediately benefit from "dynamically" provisioned PVCs.

Local PV

This storage class is mandatory.

  • Based on OpenEBS for upstream Kubernetes clusters on bare metal and VM based environments.

  • Based on Amazon EBS for upstream Kubernetes clusters provisioned on Amazon EC2 environments. Requires configuration with an appropriate AWS IAM role so the controller can dynamically provision EBS-backed PVCs for workloads.

A Local PV is particularly well suited for the following use cases:

  • Stateful workloads that are already capable of performing their own replication for HA and basic data protection. This eliminates the need for the underlying storage to copy or replicate the data for these purposes. Good examples are MongoDB, Redis, Cassandra and Postgres.

  • Workloads that need very high throughput from the underlying storage (e.g. local SSDs) with guaranteed data consistency on disk

  • Single-node, converged clusters where networked, distributed storage is not available or practical (e.g. developer environments, edge deployments)
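Requesting a Local PV is just an ordinary PVC against the appropriate storage class. This sketch prints an example manifest; the storageClassName "local-pv" and the claim name are placeholders, so substitute the class name published by your cluster blueprint:

```shell
# Sketch: an example PVC that requests a Local PV.
# "local-pv" and "data-redis-0" are placeholder names for illustration only.
pvc=$(cat <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-redis-0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-pv
  resources:
    requests:
      storage: 10Gi
EOF
)
printf '%s\n' "$pvc"
```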

Distributed Storage

This is optional for customers and based on Rook-Ceph. This option is well suited for environments that need to provide a highly available, shared storage platform. This allows pods to be rescheduled on any worker node on the cluster and still be able to use the underlying PVC transparently.


The GlusterFS based managed storage option was deprecated in Q1 2022 and projected to be EOL in Q1 2023.

Storage Requirements

Use the information below to ensure you have provisioned sufficient storage for workloads on your cluster.

Root Disk

The root disk for each node is used for the following:

  • Container images (cached for performance)
  • Kubernetes data and binaries
  • etcd data
  • consul data
  • system packages
  • Logs for components listed above

Logs are automatically rotated using "logrotate". From a storage capacity planning perspective, ensure that you have provisioned sufficient storage in the root disk to accommodate your specific requirements.

  • Raw, unformatted
  • Min: 50 GB, Recommended: >100 GB


On a single-node cluster, a baseline of 30 GB of storage is required for logs, images, and other system data. The remaining 20 GB of the 50 GB minimum is available for workload PVCs. Allocate and plan additional storage appropriately for your workloads.
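Root disk capacity can be checked against the 50 GB minimum directly on the node; this sketch reports the size of the root filesystem in GB:

```shell
# Sketch: report the root filesystem size against the 50 GB minimum.
# Uses df in kilobyte mode for portability; run on the node being checked.
MIN_ROOT_GB=50
root_gb=$(df -k / | awk 'NR==2 {print int($2 / 1024 / 1024)}')

echo "Root filesystem size: ${root_gb} GB (minimum ${MIN_ROOT_GB} GB)"
if [ "$root_gb" -ge "$MIN_ROOT_GB" ]; then
  echo "Root disk meets the minimum"
else
  echo "Root disk is below the ${MIN_ROOT_GB} GB minimum"
fi
```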

Secondary Disk

OPTIONAL: required only if the GlusterFS storage class option is selected. This disk is dedicated to, and used only for, end-user workload PVCs.

  • Raw, unformatted
  • Min: 100 GB, Recommended: >500 GB per node