
CLI

RCTL can be used to manage the end-to-end lifecycle of a workload from an external CI system such as Jenkins. The table below describes the workload operations that can be automated using RCTL.

Resource   Create   Get   Update   Delete   Publish   Unpublish
Workload   YES      YES   YES      YES      YES       YES

Create Workload

Create a new workload using a definition file as the input. The workload definition is a YAML file that captures all the necessary details required for the Controller to manage the lifecycle of the workload.

Important

It is strongly recommended that customers version control their workload definition files in their Git repositories.

./rctl create workload <path to workload definition file>

Helm Chart Workloads

Both Helm v3 and v2 are supported.

  • For Helm v3 based workloads, the controller acts as the Helm client.
  • For Helm v2 based workloads, the controller translates the payload into k8s resources before applying it to the cluster.

An illustrative example for a Helm based workload definition is shown below

#Provide unique name for workload
name: voting-results
#Indicate namespace where the workload should be deployed
namespace: vote
#Indicate the name of the Project where the workload should be deployed
project: defaultproject
#Specify type of workload (Helm or Helm3)
type: Helm3
#Specify list of clusters. Use comma as delimiter
clusters: eks-oregon-dev, eks-london-dev
#Use backslash for Windows and forward slash for Linux and macOS
#For Linux and macOS
payload: "./results.tgz"
values: "./values-dev.yaml"

Helm Workloads from different Repos

Below is an example config file that creates a workload whose Helm chart and values come from different repositories

kind: workload-helm
metadata:
  name: rctl-helm-wk11
  namespace: demorctl1
  type: Helm3
  clusters: demorctlmks1
  driftaction: "BlockAndNotify"
  repository_ref: default-bitnami
  repo_artifact_meta:
    helm:
      tag: 8.5.4
      chartName: apache
  value_repository_ref: testrepo1
  additional_reference:
    git:
      repoArtifactFiles:
        - name: apache_value
          relPath: apache-values.yaml
          fileType: HelmValuesFile
      revision: main
  extra:
    helm:
      atomic: false
      cleanUpOnFail: true

k8s Yaml Workloads

An illustrative example for a k8s YAML based workload definition is shown below

#Provide unique name for workload
name: voting-results
#Indicate namespace where the workload should be deployed
namespace: vote
#Indicate the name of the Project where the workload should be deployed
project: defaultproject
#Specify type of workload
type: NativeYaml
#Specify list of clusters. Use comma as delimiter
clusters: eks-oregon-dev, eks-london-dev
#Specify the k8s YAML file
#Use backslash for Windows and forward slash for Linux and macOS
#For Linux and macOS
payload: ./results.yaml

List Workloads

Use RCTL to retrieve/list all workloads in the specified Project.

In the example below, the command will list all the workloads in the "qa" project.

./rctl get workload --project qa

NAME     NAMESPACE   TYPE         STATE   ID
apache   apache      NativeHelm   READY   2d0zjgk
redis    redis       NativeHelm   READY   k5xzdw2

The command will return all the workloads with metadata similar to that in the Web Console.

  • Name
  • Namespace
  • Type
  • State
  • ID
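Because the output is a fixed-column table, it is straightforward to consume from scripts. Below is a minimal sketch; the `ready_workloads` helper is illustrative, not part of RCTL.

```shell
# Illustrative helper: print the names of READY workloads from the
# tabular `rctl get workload` output (the header row is skipped;
# STATE is the fourth column, NAME the first).
ready_workloads() {
  awk 'NR > 1 && $4 == "READY" { print $1 }'
}

# Fed with the sample output shown above:
ready_workloads <<'EOF'
NAME     NAMESPACE   TYPE         STATE   ID
apache   apache      NativeHelm   READY   2d0zjgk
redis    redis       NativeHelm   READY   k5xzdw2
EOF
# prints "apache" and "redis", one per line
```

In a live environment the helper would instead be fed from `./rctl get workload --project qa | ready_workloads`.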

Wait Flag

RCTL provides an option for users to wait on and block long-running operations. In automation pipelines, enabling the --wait flag blocks the command and keeps polling the workload's publish status (example: success/failure) until the operation completes.

Once a workload is created or updated and has all the placement information needed to publish it to a cluster, the --wait flag surfaces the backend details of the workload's progression up to the publishing stage. The command continues to wait, printing details and status, until publishing succeeds. To view the entire operation's details, use the wait command below

./rctl publish workload <workload-name> --wait

Example Output

1.6431226870150652e+09  debug   commands/common.go:126  Prerun
1.643122687015836e+09   info    context/context.go:58   Context: {"config_dir":"/Users/gajanan/.rafay/cli","config_file":"config.json","verbose":true,"debug":true,"structured_output":false,"v3":false}
1.6431226870158641e+09  debug   config/config.go:156    Config path: /Users/gajanan/.rafay/cli/config.json
1.643122687017161e+09   info    authprofile/key_profile.go:33   creating headers
1.643122689538578e+09   debug   authprofile/key_profile.go:100  GET https://console.stage.rafay.dev/auth/v1/projects/kj3ry4m/
1.643122689538938e+09   debug   authprofile/key_profile.go:105  http response ok: {"id":"kj3ry4m","name":"test","description":"","created_at":"2021-09-01T10:19:03.070746Z","modified_at":"2021-09-01T10:19:03.070773Z","partner_id":"rx28oml","organization_id":"g29xn20","default":false}
1.6431226895396538e+09  info    config/config.go:268    Config: {"profile":"staging","skip_server_cert_check":"false","rest_endpoint":"console.stage.rafay.dev","ops_endpoint":"ops.stage.rafay.dev","project_id":"kj3ry4m","project_name":"test"}
1.643122689539732e+09   debug   commands/publish_workload.go:31 Start [rctl publish workload yamlingit]
1.643122689539839e+09   info    authprofile/key_profile.go:33   creating headers
1.643122691159372e+09   debug   authprofile/key_profile.go:100  GET https://console.stage.rafay.dev/config/v1/projects/kj3ry4m/workloads/list/basic_info/
1.643122691159722e+09   debug   authprofile/key_profile.go:105  http response ok: {"result":[{"name":"new-sclorg-nodejs","id":"koleelk","state":"READY","published":true,"namespace":"test","type":"Rafay"},{"name":"new-upgrade-helm-test","id":"29lz312","state":"READY","published":true,"namespace":"test","type":"HelmInGitRepo"},{"name":"new-helm-upgrade-test","id":"mx4prek","state":"NOT_READY","published":false,"namespace":"test","type":"Rafay"},{"name":"test-helm3","id":"k65nrpm","state":"READY","published":true,"namespace":"test","type":"HelmInGitRepo"},{"name":"workload1","id":"k0596zm","state":"READY","published":true,"namespace":"test","type":"NativeYaml"},{"name":"new-1","id":"mx4p1vk","state":"READY","published":true,"namespace":"test","type":"HelmInHelmRepo"},{"name":"helminhelm-1","id":"kedzpgm","state":"READY","published":true,"namespace":"gajanan-ns","type":"HelmInHelmRepo"},{"name":"nativeyamlupload","id":"kjn3dz2","state":"READY","published":true,"namespace":"gajanan-ns","type":"NativeYaml"},{"name":"helmingit","id":"k639vgk","state":"READY","published":true,"namespace":"gajanan-ns","type":"HelmInGitRepo"},{"name":"helm3-chart-upload-multi","id":"299wqe2","state":"READY","published":true,"namespace":"gajanan-ns","type":"NativeHelm"},{"name":"helm3-chart-upload","id":"mx76jvk","state":"READY","published":true,"namespace":"gajanan-ns","type":"NativeHelm"},{"name":"tesd","id":"kj07rlm","state":"READY","published":false,"namespace":"test","type":"NativeHelm"},{"name":"fdrts","id":"278orpm","state":"READY","published":true,"namespace":"test","type":"HelmInGitRepo"},{"name":"yamlingit","id":"ko8gzlm","state":"READY","published":false,"namespace":"gajanan-ns","type":"YamlInGitRepo"},{"name":"new-test-workload","id":"kndy59m","state":"READY","published":false,"namespace":"add-on-ns","type":"NativeHelm"},{"name":"new-bp-jhg","id":"k05go7m","state":"READY","published":false,"namespace":"new-ns","type":"NativeYaml"},{"name":"acr-native-yaml","id":"kndx08m","state":"R
EADY","published":true,"namespace":"new-ns","type":"NativeYaml"},{"name":"acr-workload-wizard","id":"mx4rv0k","state":"READY","published":true,"namespace":"new-ns","type":"Rafay"},{"name":"acr-helm-workload","id":"ke13o9m","state":"READY","published":true,"namespace":"new-ns","type":"NativeHelm"},{"name":"workload-test","id":"29lzye2","state":"NOT_READY","published":true,"namespace":"test","type":"HelmInGitRepo"},{"name":"test-app","id":"ky7w80k","state":"READY","published":false,"namespace":"test","type":"Rafay"},{"name":"test-helm","id":"28j00ok","state":"READY","published":true,"namespace":"test","type":"HelmInGitRepo"}],"cli_response":{"api_version":"1.0"}}
1.6431226911619682e+09  info    authprofile/key_profile.go:33   creating headers
1.643122692850268e+09   debug   authprofile/key_profile.go:100  POST https://console.stage.rafay.dev/config/v1/projects/kj3ry4m/workloads/ko8gzlm/publish/ ""
1.643122692850357e+09   debug   authprofile/key_profile.go:105  http response ok: {"result":"published"}
1.643122692850385e+09   debug   commands/publish_workload.go:43 End [rctl publish workload yamlingit]
1.643122697851608e+09   info    authprofile/key_profile.go:33   creating headers
1.6431226991212978e+09  debug   authprofile/key_profile.go:100  GET https://console.stage.rafay.dev/v2/config/project/kj3ry4m/aggregate/status/yamlingit
1.643122699121475e+09   debug   authprofile/key_profile.go:105  http response ok: {"workloadName":"yamlingit","namespace":"gajanan-ns","snapshotName":"yamlingit-v2","revision":2,"workloadID":"27xrwvk","conditions":[{"type":"WorkloadSnapshotValidate","status":"NotSet","lastUpdated":"2022-01-25T14:58:12.961661907Z","reason":"not set"},{"type":"WorkloadSnapshotUnschedule","status":"NotSet","lastUpdated":"2022-01-25T14:58:12.961666885Z","reason":"not set"},{"type":"WorkloadSnapshotClusterDrifted","status":"NotSet","lastUpdated":"2022-01-25T14:58:12.961672828Z","reason":"not set"},{"type":"WorkloadSnapshotUpdateRepoArtifact","status":"Success","lastUpdated":"2022-01-25T14:58:15.242419449Z","reason":"artifacts synced"},{"type":"WorkloadSnapshotSchedule","status":"Success","lastUpdated":"2022-01-25T14:58:15.42659069Z","reason":"assigned"},{"type":"WorkloadSnapshotClusterDeployed","status":"Success","lastUpdated":"2022-01-25T14:58:16.449917292Z","reason":"deployed"},{"type":"WorkloadSnapshotClusterReady","status":"Pending","lastUpdated":"2022-01-25T14:58:16.449917509Z","reason":"deployed"}],"assignedClusters":[{"clusterID":"kedy6qm","clusterName":"gajanan-test-mks-1","reason":"assigned"}],"deployedClusters":[{"clusterID":"kedy6qm","clusterName":"gajanan-test-mks-1","reason":"deployed"}],"failedClusters":null,"readyClusters":null,"driftedClusters":null,"repoSourceVersion":"ac9e0a6590ffe6618889eba34db6aa701d471720"}
1.64312270412303e+09    info    authprofile/key_profile.go:33   creating headers
1.64312270535346e+09    debug   authprofile/key_profile.go:100  GET https://console.stage.rafay.dev/v2/config/project/kj3ry4m/aggregate/status/yamlingit
1.6431227053537128e+09  debug   authprofile/key_profile.go:105  http response ok: {"workloadName":"yamlingit","namespace":"gajanan-ns","snapshotName":"yamlingit-v2","revision":2,"workloadID":"27xrwvk","conditions":[{"type":"WorkloadSnapshotValidate","status":"NotSet","lastUpdated":"2022-01-25T14:58:12.961661907Z","reason":"not set"},{"type":"WorkloadSnapshotUnschedule","status":"NotSet","lastUpdated":"2022-01-25T14:58:12.961666885Z","reason":"not set"},{"type":"WorkloadSnapshotClusterDrifted","status":"NotSet","lastUpdated":"2022-01-25T14:58:12.961672828Z","reason":"not set"},{"type":"WorkloadSnapshotUpdateRepoArtifact","status":"Success","lastUpdated":"2022-01-25T14:58:15.242419449Z","reason":"artifacts synced"},{"type":"WorkloadSnapshotSchedule","status":"Success","lastUpdated":"2022-01-25T14:58:15.42659069Z","reason":"assigned"},{"type":"WorkloadSnapshotClusterDeployed","status":"Success","lastUpdated":"2022-01-25T14:58:16.449917292Z","reason":"deployed"},{"type":"WorkloadSnapshotClusterReady","status":"Success","lastUpdated":"2022-01-25T14:58:23.315082142Z","reason":"ready"}],"assignedClusters":[{"clusterID":"kedy6qm","clusterName":"gajanan-test-mks-1","reason":"assigned"}],"deployedClusters":[{"clusterID":"kedy6qm","clusterName":"gajanan-test-mks-1","reason":"deployed"}],"failedClusters":null,"readyClusters":[{"clusterID":"kedy6qm","clusterName":"gajanan-test-mks-1","reason":"ready"}],"driftedClusters":null,"repoSourceVersion":"ac9e0a6590ffe6618889eba34db6aa701d471720"}
1.643122705353995e+09   debug   output/exit.go:23   Exit 0

Note: Use the --wait flag only when publishing the workload
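In a CI pipeline, the exit code of a --wait publish can gate subsequent stages. A minimal sketch, assuming rctl is on the PATH; the `publish_and_gate` wrapper name is illustrative.

```shell
# Illustrative CI gate: publish with --wait and branch on the exit
# code so downstream pipeline stages only run after a successful
# rollout.
publish_and_gate() {
  if rctl publish workload "$1" --wait; then
    echo "publish of $1 succeeded"
  else
    echo "publish of $1 failed" >&2
    return 1
  fi
}
```

A pipeline step would then call, for example, `publish_and_gate apache || exit 1`.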


Publish Workload

Use RCTL to publish a workload to a fleet of clusters in a specified Project.

./rctl publish workload <workload name>

In the example below, the command will attempt to publish the "apache" workload in the "qa" project.

./rctl publish workload apache --project qa

Unpublish Workload

Use RCTL to unpublish a workload.

./rctl unpublish workload <workload name>

In the example below, the "apache" workload will be unpublished in the "qa" project

./rctl unpublish workload apache --project qa

Delete Workload

Use RCTL to delete a workload identified by name. Note that a delete operation will unpublish the workload first.

./rctl delete workload <workload name>

Status

Use this command to check the status of a workload. The status can be checked on all deployed clusters with a single command.

./rctl status workload <workload name>

If the workload has not yet been published, it will return a "Status = Not Ready". If the publish is in progress, it will return a "Status = Pending". Once publish is successful, it will return a "Status = Ready". Status is presented by cluster for all configured clusters. The workload states transition as follows "Not Ready -> Pending -> Ready".
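For pipelines that publish without --wait, the state transitions above can be polled explicitly. A sketch, assuming rctl is on the PATH and that the status output contains one of the three state strings; the helper name is illustrative.

```shell
# Illustrative polling loop: re-run `rctl status workload` until the
# workload leaves the Pending state. "Not Ready" is matched before
# "Ready", since the latter is a substring of the former.
wait_for_ready() {
  while :; do
    status="$(rctl status workload "$1" --project "$2")"
    case "$status" in
      *"Not Ready"*) return 1 ;;   # never published
      *Pending*)     sleep 10 ;;   # publish in progress; poll again
      *Ready*)       return 0 ;;   # rolled out
      *)             return 1 ;;   # unexpected output
    esac
  done
}
```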

An illustrative example is shown below. In this example, the publish status of the workload is listed by cluster.

./rctl status workload apache --project qa

CLUSTER      PUBLISH STATUS   MORE INFO
qa-cluster   Published

Use the command below to fetch the real-time detailed status of a workload, such as K8s object names, each object's latest condition, and cluster events

./rctl status workload <WORKLOAD-NAME> --detailed --clusters=<CLUSTER-NAME>,<CLUSTER NAME 2>..<CLUSTER NAME N>

Note: The --clusters flag is optional. Helm2 type workloads are currently not supported

Example

./rctl status workload test-helminhelm --detailed --clusters=oci-cluster-1

Output

+---------------+----------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------+
| CLUSTER NAME  | K8S OBJECT NAME                                          | K8S OBJECT NAMESPACE | K8S OBJECT LATEST CONDITION                                                                                 |
+---------------+----------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------+
| oci-cluster-1 | test-helminhelm-nginx-ingress-controller                 | default-rafay-nikhil | -                                                                                                           |
|               | (Service)                                                |                      |                                                                                                             |
+---------------+----------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------+
| oci-cluster-1 | test-helminhelm-nginx-ingress-controller-default-backend | default-rafay-nikhil | -                                                                                                           |
|               | (Service)                                                |                      |                                                                                                             |
+---------------+----------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------+
| oci-cluster-1 | test-helminhelm-nginx-ingress-controller                 | default-rafay-nikhil | {"lastTransactionTime":"0001-01-01T00:00:00Z","lastUpdateTime":"2022-02-08T07:49:18Z","message":"ReplicaSet |
|               | (Deployment)                                             |                      | \"test-helminhelm-nginx-ingress-controller-568dd8fdb\" is                                                   |
|               |                                                          |                      | progressing.","reason":"ReplicaSetUpdated","status":"True","type":"Progressing"}                            |
+---------------+----------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------+
| oci-cluster-1 | test-helminhelm-nginx-ingress-controller-default-backend | default-rafay-nikhil | {"lastTransactionTime":"0001-01-01T00:00:00Z","lastUpdateTime":"2022-02-08T07:49:19Z","message":"Deployment |
|               | (Deployment)                                             |                      | has minimum availability.","reason":"MinimumReplicasAvailable","status":"True","type":"Available"}          |
+---------------+----------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------+

EVENTS:
+-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| CLUSTER NAME                        | EVENTS                                                                                                                                                                                                                                                                               |
+-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| oci-cluster-1(default-rafay-nikhil) | LAST SEEN   TYPE      REASON              OBJECT                                                                          MESSAGE                                                                                                                                                    |
|                                     | 66s         Normal    Killing             pod/test-helminhelm-nginx-ingress-controller-568dd8fdb-5648j                    Stopping container controller                                                                                                                              |
|                                     | 21s         Warning   Unhealthy           pod/test-helminhelm-nginx-ingress-controller-568dd8fdb-5648j                    Readiness probe failed: Get "http://10.244.0.171:10254/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)                |
|                                     | 61s         Warning   Unhealthy           pod/test-helminhelm-nginx-ingress-controller-568dd8fdb-5648j                    Liveness probe failed: Get "http://10.244.0.171:10254/healthz": dial tcp 10.244.0.171:10254: i/o timeout (Client.Timeout exceeded while awaiting headers)  |
|                                     | 41s         Warning   Unhealthy           pod/test-helminhelm-nginx-ingress-controller-568dd8fdb-5648j                    Liveness probe failed: Get "http://10.244.0.171:10254/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)                 |
|                                     | 11s         Warning   Unhealthy           pod/test-helminhelm-nginx-ingress-controller-568dd8fdb-5648j                    Readiness probe failed: Get "http://10.244.0.171:10254/healthz": dial tcp 10.244.0.171:10254: i/o timeout (Client.Timeout exceeded while awaiting headers) |
|                                     | 9s          Normal    Scheduled           pod/test-helminhelm-nginx-ingress-controller-568dd8fdb-n7flz                    Successfully assigned default-rafay-nikhil/test-helminhelm-nginx-ingress-controller-568dd8fdb-n7flz to nikhil-28-jan-02                                    |
|                                     | 8s          Normal    Pulled              pod/test-helminhelm-nginx-ingress-controller-568dd8fdb-n7flz                    Container image "docker.io/bitnami/nginx-ingress-controller:1.1.0-debian-10-r34" already present on machine                                                |
|                                     | 8s          Normal    Created             pod/test-helminhelm-nginx-ingress-controller-568dd8fdb-n7flz                    Created container controller                                                                                                                               |
|                                     | 7s          Normal    Started             pod/test-helminhelm-nginx-ingress-controller-568dd8fdb-n7flz                    Started container controller                                                                                                                               |
|                                     | 6s          Normal    RELOAD              pod/test-helminhelm-nginx-ingress-controller-568dd8fdb-n7flz                    NGINX reload triggered due to a change in configuration                                                                                                    |
|                                     | 9s          Normal    SuccessfulCreate    replicaset/test-helminhelm-nginx-ingress-controller-568dd8fdb                   Created pod: test-helminhelm-nginx-ingress-controller-568dd8fdb-n7flz                                                                                      |
|                                     | 9s          Normal    SuccessfulCreate    replicaset/test-helminhelm-nginx-ingress-controller-default-backend-c57d46575   Created pod: test-helminhelm-nginx-ingress-controller-default-backend-ckk29z                                                                               |
|                                     | 8s          Normal    Scheduled           pod/test-helminhelm-nginx-ingress-controller-default-backend-ckk29z             Successfully assigned default-rafay-nikhil/test-helminhelm-nginx-ingress-controller-default-backend-ckk29z to nikhil-28-jan-02                             |
|                                     | 8s          Normal    Pulled              pod/test-helminhelm-nginx-ingress-controller-default-backend-ckk29z             Container image "docker.io/bitnami/nginx:1.21.4-debian-10-r53" already present on machine                                                                  |
|                                     | 8s          Normal    Created             pod/test-helminhelm-nginx-ingress-controller-default-backend-ckk29z             Created container default-backend                                                                                                                          |
|                                     | 7s          Normal    Started             pod/test-helminhelm-nginx-ingress-controller-default-backend-ckk29z             Started container default-backend                                                                                                                          |
|                                     | 66s         Normal    Killing             pod/test-helminhelm-nginx-ingress-controller-default-backend-cljz9z             Stopping container default-backend                                                                                                                         |
|                                     | 9s          Normal    ScalingReplicaSet   deployment/test-helminhelm-nginx-ingress-controller-default-backend             Scaled up replica set test-helminhelm-nginx-ingress-controller-default-backend-c57d46575 to 1                                                              |
|                                     | 9s          Normal    ScalingReplicaSet   deployment/test-helminhelm-nginx-ingress-controller                             Scaled up replica set test-helminhelm-nginx-ingress-controller-568dd8fdb to 1                                                                              |
+-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Update Workload Config

Use this when you need to update the "Workload Definition" for an existing workload. For example, you may want to add a new cluster location where the workload needs to be deployed.

A workload definition can be updated even if the workload is already published and operational on clusters. Once the workload definition is updated, follow up with a "publish" operation to ensure the updated definition is applied.

./rctl update workload <path-to-workload-definition-file> [flags]
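Since an update only takes effect after a follow-up publish, automation typically chains the two. A minimal sketch, assuming rctl is on the PATH; the wrapper name and arguments are illustrative.

```shell
# Illustrative update-then-publish sequence: apply the updated
# definition file, then republish so the change reaches the clusters.
rollout_update() {
  def_file="$1"; name="$2"; project="$3"
  rctl update workload "$def_file" --project "$project" \
    && rctl publish workload "$name" --project "$project"
}
```

A pipeline would call, for example, `rollout_update ./apache.yaml apache qa`.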

Download Workload Config

Users can download an existing workload's configuration from the Controller. A common use case is to retrieve the configuration and store it in a version-controlled Git repository. Users can also create the workload using the Console and then download the workload configuration via RCTL.

In the example below, we requested the download of the config for the workload "apache". The downloaded configuration file is available in "apache.yaml"

./rctl download workload apache --project qa

Meta file: apache.yaml
Payload file: ./apache-7.5.1.tgz
Values file: N/A

k8s Yaml Workload

This will download and save two files to the same folder as RCTL

  1. Meta File: Workload's description yaml
  2. Payload: The actual k8s yaml

Helm Workload

This will download and save three files to the same folder as RCTL

  1. Meta File: Workload's description yaml
  2. Payload: The Helm chart (TGZ file)
  3. Values: The values.yaml file if configured.
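The downloaded files pair naturally with the version-control recommendation earlier on this page. A sketch of a backup step, assuming rctl and git are on the PATH and that the meta file follows the `<workload>.yaml` naming shown above; the helper name is illustrative.

```shell
# Illustrative backup step: download a workload's config, then stage
# the meta file for commit in a version-controlled repository.
backup_workload() {
  rctl download workload "$1" --project "$2" && git add "./$1.yaml"
}
```

For example, `backup_workload apache qa` followed by a `git commit` would archive the "apache" workload's definition.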

Validate Workload

Note

This is a convenience function that is supported only for workloads based on the "Workload Wizard".

Users can use this to validate their "workload definitions" before attempting to publish them, identifying misconfigurations earlier in the process.

./rctl validate workload <workload name>
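In automation, validation can act as an early gate before publish. A minimal sketch, assuming rctl is on the PATH; the wrapper is illustrative and applies only to Workload Wizard workloads, per the note above.

```shell
# Illustrative pre-publish gate: fail fast on an invalid definition
# instead of discovering the problem during publish.
validate_then_publish() {
  rctl validate workload "$1" || { echo "validation failed for $1" >&2; return 1; }
  rctl publish workload "$1"
}
```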

Templating

Users can also create multiple workloads from a set of defined configurations. The template file contains a list of objects, allowing multiple workloads to be created from a single template.

Below is an example of a workload config template

# Generated: {{now.UTC.Format "2006-01-02T15:04:05UTC"}}
#      With: {{command_line}}
{{ $envName := environment "PWD" | basename}}
{{ $glbCtx := . }}{{ range $i, $project := .ProjectNames }}
apiVersion: apps.k8smgmt.io/v3
kind: Workload
metadata:
  name: node
  project: {{$envName}}-{{$project}}
spec:
  artifact:
    artifact:
      catalog: default-bitnami
      chartName: node
      chartVersion: {{$glbCtx.NodeChartVersion}}
    options:
      maxHistory: 10
      timeout: 5m0s
    type: Helm
  drift:
    enabled: false
  namespace: ns-frontend
  placement:
    labels:{{$c := $glbCtx}}{{range $l, $cluster := $glbCtx.ClusterNames}}
      - key: rafay.dev/clusterName
        value: {{$envName}}-{{$project}}-{{ $cluster }}{{end}}

---

apiVersion: apps.k8smgmt.io/v3
kind: Workload
metadata:
  name: phpbb
  project: {{$envName}}-{{$project}}
spec:
  artifact:
    artifact:
      catalog: default-bitnami
      chartName: phpbb
      chartVersion: {{$glbCtx.PHPbbChartVersion}}
    options:
      maxHistory: 10
      timeout: 5m0s
    type: Helm
  drift:
    enabled: false
  namespace: ns-backend
  placement:
    labels:{{$c := $glbCtx}}{{range $l, $cluster := $glbCtx.ClusterNames}}
      - key: rafay.dev/clusterName
        value: {{$envName}}-{{$project}}-{{ $cluster }}{{end}}

---

apiVersion: apps.k8smgmt.io/v3
kind: Workload
metadata:
  name: mysql
  project: {{$envName}}-{{$project}}
spec:
  artifact:
    artifact:
      catalog: default-bitnami
      chartName: mysql
      chartVersion: {{$glbCtx.MySqlChartVersion}}
    options:
      maxHistory: 10
      timeout: 5m0s
    type: Helm
  drift:
    enabled: false
  namespace: ns-database
  placement:
    labels:{{$c := $glbCtx}}{{range $l, $cluster := $glbCtx.ClusterNames}}
      - key: rafay.dev/clusterName
        value: {{$envName}}-{{$project}}-{{ $cluster }}{{end}}

---
{{end}}

Users can create one or more workloads with the required configuration defined in the template file. Below is an example of a workload values file; it supplies the objects referenced by the template

NodeChartVersion: 19.0.2
PHPbbChartVersion: 12.2.16
MySqlChartVersion: 9.2.6

Important

The values files must contain only the objects defined in the template

Once the values files are prepared with the necessary objects, use the command below to create the workloads with the specified configuration

 ./rctl apply -t workload.tmpl --values values.yaml

where:

  • workload.tmpl: the template file
  • values.yaml: the values file
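The same template can be applied repeatedly with different values files, producing one rendered set of workloads per file. A sketch, assuming rctl is on the PATH; the helper and file names are illustrative.

```shell
# Illustrative fan-out: apply one template once per values file.
apply_template_for_all() {
  tmpl="$1"; shift
  for v in "$@"; do
    rctl apply -t "$tmpl" --values "$v"
  done
}
```

For example, `apply_template_for_all workload.tmpl values-dev.yaml values-prod.yaml`.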

Refer to Templating for more details on templating flags and examples