Helm uses a packaging format called charts. A chart is a collection of files that describe a set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on. Charts are created as files laid out in a particular directory tree; they can then be packaged into versioned archives to be deployed.
- Every chart must have a version number
- Kubernetes Helm uses version numbers as release markers
- Packages in repositories are identified by "name + version"
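As an illustrative sketch (chart name and versions are hypothetical), a minimal Chart.yaml carries the required name and version identifiers that repositories use to identify the package:

```yaml
# Chart.yaml — illustrative example; name and versions are hypothetical
apiVersion: v2          # chart API version used by Helm 3
name: my-web-app        # package name; repository entries are "name + version"
version: 1.2.3          # chart version, used as the release marker
appVersion: "4.0.1"     # version of the application the chart packages
```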
You can manage the lifecycle of Helm-based workloads using the Web Console, the RCTL CLI, or the REST APIs. We strongly recommend that customers automate this either
- By integrating RCTL with your existing CI system based automation pipeline, OR
- By leveraging the integrated GitOps capability.
The Controller supports both Helm 2 and Helm 3. Users are strongly advised to use Helm 3. Helm 2 support is deprecated and is only meant for legacy charts that are incompatible with Helm 3.
With Helm 3, the Controller acts as a Helm 3 client. It does not have to parse and break the chart down into its atomic k8s resources. Read more about Helm 2 End of Life.
- Log in to the Console (typically as a Project Admin)
- Navigate to Applications -> Workloads
- Click on New Workload
- Provide a name and select Helm 3 as the package type
- Select whether you would like to "upload" the artifacts or have the Controller "pull" them directly from a configured repository
- Select the namespace where the resources should be deployed
Ensure that the namespace where you would like to deploy has already been created.
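If the target namespace does not exist yet, it can be created in the usual Kubernetes way, for example with a manifest like the following (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo-apps   # illustrative name; use your target namespace
```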
Option A: Pull from Repository
This approach requires the user to first select the type of repository (Git or Helm).
Then, in the next step:
- Select the repository from the dropdown list
- Specify the name of the chart
- Optionally, specify the chart version. If not specified, the latest version will be pulled.
- Optionally, provide the override "values.yaml" file
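The override "values.yaml" only needs to contain the keys you want to change; everything else falls back to the chart's defaults. A hypothetical override might look like this (all keys and values are illustrative and depend on the chart):

```yaml
# values.yaml override — keys and values are illustrative
replicaCount: 3
image:
  tag: "1.21.6"       # pin a specific image tag instead of the chart default
resources:
  limits:
    memory: "256Mi"
```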
Option B: Upload Artifacts
This approach is suited for scenarios where the user would like to upload the artifacts to the Controller and not provide any form of access to their Git/Helm repository.
- Upload the "Helm chart" (a tgz file) and if necessary a "values.yaml" override file
Multiple Values Files
It is possible to have multiple values files for the same Helm chart. For Helm 3 workloads created either by "Upload files manually" or "Pull files from Helm Repo", all of these values files can be uploaded when creating the workload. They are processed and applied to the chart in the order they are uploaded.
- Click Add Files and upload the values files that you want to supply to the chart.
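Because the files are applied in upload order, a key set in a later file overrides the same key from an earlier one, mirroring the semantics of passing multiple `-f` flags to Helm. For example, with these two hypothetical files uploaded in this order, the effective replicaCount is 5 while image.tag remains "1.0.0":

```yaml
# values-base.yaml (uploaded first)
replicaCount: 2
image:
  tag: "1.0.0"
---
# values-prod.yaml (uploaded second) — wins for overlapping keys
replicaCount: 5
```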
Advanced Helm Options
Support for advanced Helm options is available under "Helm Options".
In this step, specify the placement policy for the workload.
In the example below, the user has selected
- "Not Set" for Drift Detection
- The "Specific Clusters" placement policy
We are now ready to publish the workload. Click on Publish to start the deployment process
Depending on the complexity of the placement policy (e.g. multiple clusters) and of the Helm chart (e.g. many readiness checks), the deployment process can take anywhere from 30 seconds to a few minutes. If the deployment was successful, the appropriate status is shown to the user.
Users are also presented with the list of workloads, their status, etc. on the main workload list page. Clicking on Deployment Status will show the current state of the workload by cluster.
To have the managed Prometheus add-on scrape pod metrics, add the following annotations to the pods with these parameters:
```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
spec:
```
Adjust prometheus.io/path based on the URL from which the pod serves metrics, and set prometheus.io/port to the port on which the pod serves them. The values for prometheus.io/scrape and prometheus.io/port must be enclosed in double quotes.
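Note that these annotations belong on the pods themselves, so in a Deployment they go under the pod template's metadata, not the Deployment's own metadata. A hypothetical Deployment fragment (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-demo              # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: metrics-demo
  template:
    metadata:
      labels:
        app: metrics-demo
      annotations:                # scrape annotations go here, on the pod template
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"
    spec:
      containers:
        - name: app
          image: example/app:latest   # illustrative image
          ports:
            - containerPort: 8080     # must match prometheus.io/port
```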
The collected metrics in Prometheus will be retained for 3 hours
If there are application-related issues during deployment, the application developer and/or the operations user will require RBAC-secured facilities to debug remotely. Built-in "secure, interactive capabilities" are provided to help streamline the debugging and troubleshooting process.
Developers that are responsible for a microservice or just the workload will not be provided privileged access to the cluster. This persona is typically provisioned with a "Project Admin" role OR a "Project Read Only" role (in production environments).
- Navigate to the workload's publish screen or, from the main workload list page, click on options
- Click on Debug
This will take the user to the "main debug page"
- For multi cluster deployments, select the cluster from the drop-down on the left
- A live channel is established to the remote cluster based on the zero trust control channel
Click on "KubeCTL" to launch a browser-based Zero Trust KubeCTL shell. Note that the KubeCTL operations this user can perform are access controlled and secured using the configured ROLE. All actions performed by the user are audited for compliance.
Click on "Show Events" to view the k8s events associated with the workload's resources.
Logs and Shell
Click on "Launch" under Logs & Shell to establish a "live, interactive" debug channel to the pod on the remote cluster.
This persona is generally an "Infrastructure Administrator" with privileged access to the Kubernetes Cluster.
Infrastructure admins are provided a bird's eye view of all workloads and their status on Kubernetes clusters. Click on the "Workloads" link on the cluster card.
In the example below, you can see the deployment status of our "apache-helm3" workload
K8s Resources By Workload
Infrastructure admins can view details about the k8s resources for a given workload on a cluster.
- Click on the Cluster Dashboard
- Click on Resources
- Select "Workloads" for "View By"
- Select the "workload name"
In the example below, you can view the details of all the k8s resources for our workload.
By Helm Release
- Click on the Cluster Dashboard
- Click on Resources
- Select "Helm Releases" for "View By"
In the example below, the Operations persona can see the Helm Chart's name, release, app version and other status information. If users have the Helm CLI configured to communicate with their cluster, they can use it to check status directly.
To unpublish the workload, click on the Unpublish button. The deployed resources on the remote clusters will be automatically removed.
If the remote cluster was offline when the unpublish operation was initiated, the Controller will send this instruction to the cluster when it reconnects.