# Overview
This documentation provides an overview of the system template for Rafay Managed Kubernetes Clusters (MKS). These templates are designed to simplify the provisioning, configuration, and management of Kubernetes clusters.
## Initial Setup
The platform team is responsible for the initial configuration and setup of the MKS template. The sequence diagram below outlines the high-level steps: the platform team configures the template from the system catalog, shares it with the project it manages, and then shares it downstream with the end user.
```mermaid
sequenceDiagram
    participant Admin as Platform Admin
    participant Catalog as System Catalog
    participant Project as End User Project
    Admin->>Catalog: Selects MKS Template from System Catalog
    Admin->>Project: Shares Template with Predefined Controls
    Project-->>Admin: Template Available in End User's Project
```
## End User Flow
The end user launches a shared template, provides required input values, and deploys the cluster.
```mermaid
sequenceDiagram
    participant User as End User
    participant Project as Rafay Project
    participant Cluster as Rafay Managed Kubernetes Cluster
    User->>Project: Launches Shared Template for MKS
    User->>Project: Provides Required Input Values (API Key, Node Configuration, SSH Details)
    User->>Project: Clicks "Deploy"
    Project->>Cluster: Provisions a Rafay Managed Kubernetes Cluster on the specified nodes
    Cluster-->>User: Cluster Deployed Successfully
    Cluster-->>User: Provides Kubeconfig File as Output
```
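For illustration, the required inputs map onto values like the following. This is a minimal sketch rendered as YAML for readability; the actual node variables accept JSON with the same structure (see the Input Variables tables below), and the top-level key names, hostnames, IP addresses, and key path are placeholders:

```yaml
# Hypothetical launch inputs; structure mirrors the documented defaults.
api_key: <controller-api-key>        # API Key pre-requisite (placeholder)
control_plane_nodes:
  hostname-1:
    arch: amd64
    hostname: hostname-1
    private_ip: 10.1.0.67
    operating_system: Ubuntu22.04
    roles: [ControlPlane, Worker]
    ssh:
      ip_address: 129.146.178.0
      port: "22"
      private_key_path: private-key  # Private SSH Key pre-requisite (placeholder)
      username: ubuntu
worker_nodes:
  worker-1:
    arch: amd64
    hostname: worker-1
    private_ip: 10.1.0.68
```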
This system template allows you to configure, templatize, and provision a Rafay Managed Kubernetes Cluster (Rafay MKS) on any supported operating system. For more details, refer to this document.
The templates are designed to support both:
- Day 0 operations: Initial setup
- Day 2 operations: Ongoing management (for example, the Kubernetes version upgrade sketched below)
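A typical Day 2 change is driven entirely by input values. The sketch below shows a version upgrade in YAML for readability; the snake_case key names are illustrative, and the actual inputs are the Cluster Kubernetes Version and Kubernetes Upgrade variables documented below:

```yaml
# Sketch of a Day 2 Kubernetes upgrade driven by template inputs.
cluster_kubernetes_version: v1.32.0   # bumped from the default v1.31.4
kubernetes_upgrade:
  strategy: sequential                # nodes are upgraded in ordered batches
  params:
    worker_concurrency: "50%"         # upgrade half of the worker nodes at a time
```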
## Infrastructure Types
The template supports provisioning Rafay MKS clusters on various infrastructure types, including:
- Bare Metal: Users manage the lifecycle of hardware and the operating system.
- Virtual Machines: Supports Bring Your Own OS or pre-packaged images (e.g., QCOW2, OVA formats).
- Public Cloud: Flexible deployments on cloud infrastructure.
## Key Capabilities
This template enables users to:
- Provision and manage the lifecycle of Rafay Managed Kubernetes Clusters.
- Configure:
    - Container Network Interface (CNI)
    - Add-ons defined in the cluster blueprint
As part of the output, users receive a kubeconfig file with cluster-wide privileges for secure access.
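The kubeconfig follows the standard Kubernetes client configuration format. A minimal sketch of its shape, where every value is a placeholder (the actual endpoint and credential type are determined by the controller):

```yaml
# Sketch of a kubeconfig; all values are placeholders, not controller output.
apiVersion: v1
kind: Config
clusters:
  - name: mks-cluster
    cluster:
      server: https://<cluster-endpoint>       # placeholder endpoint
      certificate-authority-data: <base64-ca>  # placeholder CA bundle
users:
  - name: mks-user
    user:
      token: <access-token>                    # placeholder credential
contexts:
  - name: mks-context
    context:
      cluster: mks-cluster
      user: mks-user
current-context: mks-context
```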
## Resources
This system template will deploy the following resources:
- Upstream Kubernetes on the specified nodes.
## Pre-Requisites
- API Key: Specify the controller API key for the `API Key` input variable.
- Private SSH Key: Provide the private SSH key of the node so the installer can run the Kubernetes deployment.
- Node Information: Specify details for:
    - Control Plane Node(s)
    - Worker Node(s)
## Input Variables
### Cluster Config

| Name | Default Value | Value Type | Description |
| --- | --- | --- | --- |
| System Components Placement | `{"node_selector": {}, "tolerations": []}` | JSON | Enter node selectors and tolerations for the cluster |
| High Availability (HA) | `false` | Text | Allowed: `[true, false]`. Select if HA should be enabled |
| Cluster Name | `$(environment.name)$` | Expressions | Enter the name of the Upstream Kubernetes cluster |
| Cluster Kubernetes Version | `v1.31.4` | Text | Allowed: `[v1.32.0, v1.31.4, v1.30.8, v1.29.12]`. Select the Kubernetes version for the cluster |
| Network | `{"cni": {"name": "Calico", "version": "3.26.1"}, "pod_subnet": "10.244.0.0/16", "service_subnet": "10.96.0.0/12"}` | JSON | Enter the network information |
| Control Plane Node(s) | `{"hostname-1": {"arch": "amd64", "hostname": "hostname-1", "private_ip": "10.1.0.67", "operating_system": "Ubuntu22.04", "roles": ["ControlPlane", "Worker"], "ssh": {"ip_address": "129.146.178.0", "port": "22", "private_key_path": "private-key", "username": "ubuntu"}}}` | JSON | Provide the control plane node information. The variable should match the node's hostname (e.g., `hostname-1`) |
| Worker Node(s) | `{"worker-1": {"arch": "amd64", "hostname": "worker-1", "private_ip": "10.1.0.68", "kubelet_extra_args": {"max-pods": "400", "cpu-manager-reconcile-period": "40s"}}}` | JSON | Provide the worker node information. The variable should match the node's hostname (e.g., `worker-1`) |
| Kubernetes Upgrade | `{"strategy": "sequential", "params": {"worker_concurrency": "50%"}}` | JSON | Enter the upgrade strategy for the cluster |
| Cluster Labels | `{"env": "dev", "release": "stable"}` | JSON | Enter any labels to assign to the cluster |
| Cluster Location | `sanjose-us` | Text | Enter the location label where the cluster will be deployed |
| Cluster Project | `$(environment.project.name)$` | Expressions | Enter the project for the Upstream cluster |
| Auto Approve Nodes | `true` | Text | Allowed: `[true, false]`. Select if nodes should be auto-approved |
| Cluster Dedicated Control Planes | `false` | Text | Allowed: `[true, false]`. Select if dedicated control planes should be enabled |
| Kubelet Extra Args | `{"max-pods": "300", "cpu-manager-reconcile-period": "30s"}` | JSON | Specify additional kubelet arguments if needed |
| Installer TTL | `300` | Text | Time to live (TTL) for the installer pod, in seconds |
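As an illustration of the System Components Placement variable above, a populated value might pin system components to dedicated nodes. A sketch in YAML (the variable itself takes JSON of the same shape; the label key and toleration values are assumptions, not template defaults):

```yaml
# Hypothetical placement: steer system components onto labeled, tainted nodes.
node_selector:
  node-role.kubernetes.io/infra: "true"   # assumed node label
tolerations:
  - key: dedicated                        # assumed taint key
    operator: Equal
    value: infra
    effect: NoSchedule                    # tolerate the taint that keeps other workloads off
```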
### Blueprint Config

| Name | Default Value | Value Type | Description |
| --- | --- | --- | --- |
| Blueprint Name | `default` | Text | Enter the name of the blueprint assigned to the cluster |
| Blueprint Version | `latest` | Text | Specify the version of the blueprint for the cluster. For system blueprints, use `latest` |
### Kata and OPA Config

| Name | Default Value | Value Type | Description |
| --- | --- | --- | --- |
| Enable Kata Deployment | `false` | Text | Allowed: `[true, false]`. Enable Kata containers support |
| Enable OPA Gatekeeper Deployment | `false` | Text | Allowed: `[true, false]`. Enable Open Policy Agent Gatekeeper integration |
| OPA Excluded Namespaces | `["kube-system", "gatekeeper-system"]` | JSON | List the namespaces to exclude from OPA Gatekeeper policies |
| OPA Constraint Template YAML | See the ConstraintTemplate YAML below | Text | Provide the path or content of the OPA Gatekeeper constraint template YAML file |
| OPA Constraints YAML | See the constraints YAML below | Text | Provide the path or content of the OPA Gatekeeper constraints YAML file |

Default value for OPA Constraint Template YAML:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{"msg": msg}] {
          missing := {l | l := input.parameters.labels[_]; not input.review.object.metadata.labels[l]}
          count(missing) > 0
          msg := sprintf("missing labels: %v", [missing])
        }
```

Default value for OPA Constraints YAML:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: Host-Filesystem
metadata:
  name: host-filesystem
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    allowedHostPaths:
      - pathPrefix: "/var/lib/kubelet"
      - pathPrefix: "/var/run/secrets/kubernetes.io"
      - pathPrefix: "/etc/kubernetes"
      - pathPrefix: "/var/lib/vcluster"
      - pathPrefix: "/tmp"
```
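Note that the default constraints YAML above (Host-Filesystem) instantiates a different template than the k8srequiredlabels example. A constraint that exercises the ConstraintTemplate shown above might look like the following sketch (the constraint name, target kinds, and required label are assumptions):

```yaml
# Hypothetical constraint instantiating the k8srequiredlabels template above.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels        # kind comes from the template's spec.crd.spec.names.kind
metadata:
  name: require-app-label      # assumed name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]   # assumed target resources
  parameters:
    labels: ["app"]            # label list consumed by the template's rego
```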
### Access Config

| Name | Default Value | Value Type | Description |
| --- | --- | --- | --- |
| Cloud Credentials | `upstream-cloud-credential` | Text | Enter the cloud credentials. Leave this field empty to use the SSH key |
| Proxy Config | `{"proxy_config": {"enabled": false, "allow_insecure_bootstrap": true, "bootstrap_ca": "cert", "http_proxy": "http://proxy.example.com:8080/", "https_proxy": "https://proxy.example.com:8080/", "no_proxy": "10.96.0.0/12,10.244.0.0/16", "proxy_auth": "proxyauth"}}` | JSON | Enter the proxy configuration details, including authentication and certificate information |
| Username | `demouser` | Text | Enter the username for the installer or cluster access |
| Project | `default-project` | Text | Enter the project name if different from the environment's default |
## Launch Time
The estimated time to launch an MKS cluster using this template is approximately 15 to 20 minutes.