Part 2: Deploy Workload
This is Part 2 of a multi-part, self-paced quick start exercise.
What Will You Do¶
In Part 2, we will "walk a mile" in the shoes of a developer/operator who only has access to the new project created in Part 1. You will:
- Deploy a Helm chart based workload to the Amazon EKS cluster
- Explore the integrated tooling for visibility and monitoring of the workload
- Remotely troubleshoot and diagnose the Kubernetes resources associated with the workload using the integrated, browser based Zero Trust Kubectl
This exercise assumes you have already completed Part 1 and have a functioning Amazon EKS cluster you can use to deploy the workload.
The instructions describe the process using the web console. The same steps can be performed using the RCTL CLI for automation.
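As a sketch of the automation path, the workload configuration can be captured in a declarative spec file and applied with RCTL. The file name below is a placeholder for this exercise; consult the RCTL reference for the exact spec format and flags.

```shell
# Illustrative only: apply a declarative workload spec with the RCTL CLI.
# "nginx-workload.yaml" is a placeholder name for this exercise.
rctl apply -f nginx-workload.yaml
```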
Step 1: Create Namespace¶
We plan to deploy our workload to a namespace on the EKS cluster. Since management of namespaces on clusters is a privileged, administrative operation, we need to be logged into the Org with either an Infrastructure Admin or Org Admin role.
- Log in to your Org as an Org Admin or Infra Admin
- Navigate to Infrastructure -> Namespaces
- Create a new namespace with the name "nginx"
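Once the namespace has been created and synced to the cluster, you can confirm it exists with a quick check (this assumes you have kubectl access to the EKS cluster):

```shell
# Confirm the managed namespace was created on the EKS cluster
kubectl get namespace nginx
```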
Step 2: Helm Repository¶
By default, every project in an Org comes with a number of default repositories. You can view these by following the steps below:
- Select Integrations -> Repositories
- The "default-bitnami" row is configured with details to retrieve Helm charts from Bitnami's public helm repository.
Step 3: Create Workload¶
In this step, we will create a new Nginx workload based on a Helm Chart from Bitnami.
- Navigate to Applications -> Workloads
- Click on "New Workload"
- Provide a name for the workload such as "nginx"
- Select Helm 3 for Package type
- Select "pull files from repository" for Artifact Sync
- Select "Helm" for repository type
- Select "nginx" for the namespace where the Helm chart will be deployed.
Step 4: Workload Manifest¶
We are now ready to configure the workload with the Helm chart details.
- Select "default-bitnami" for the repository
- Enter "nginx" for the Chart Name
- Enter the chart version (in our case 8.8.3)
- We will use the default "values.yaml" file. This will create a Load Balancer in AWS.
- Click on Continue
You can view the chart version in the chart's Chart.yaml file.
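If you have the Helm CLI available locally, you can also list the chart versions published in Bitnami's public repository. These are standard Helm commands; the repository URL is Bitnami's public chart repository.

```shell
# Add Bitnami's public chart repository and list available nginx chart versions
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo bitnami/nginx --versions
```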
Step 5: Workload Placement¶
We will now specify a placement policy for the workload (i.e., rules for where, when, and how it is deployed). For this exercise, we will use the defaults.
- Select "Specific Clusters" for the policy type
- Select the name of the cluster (in our case eks-dev)
- Click on "SAVE AND GO TO PUBLISH"
Step 6: Publish Workload¶
Publish the workload. The controller will then perform the following:
- Process the instructions and policy defined in the workload configuration
- Retrieve the Helm chart and values.yaml from the configured repository
- Identify the target downstream clusters and notify the Kubernetes Management Operator on those clusters to retrieve the manifests from the controller
In a few seconds, the Helm chart will be deployed to the remote Amazon EKS cluster, and the cluster will download the container images from the configured container registry as the workload is made operational. Optionally, you can then access the workload.
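Once the workload is published, you can inspect what the chart created in the namespace from any kubectl session scoped to the cluster. The deployment name below assumes the default naming that follows the Helm release name ("nginx" in this exercise).

```shell
# Wait until the chart's deployment has rolled out
# (deployment name assumed to follow the Helm release name "nginx")
kubectl rollout status deployment/nginx -n nginx

# List everything the chart created in the namespace
kubectl get all -n nginx
```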
Step 7: Debug Workload¶
The workload owner in the project is provided a number of built-in facilities to remotely troubleshoot their Kubernetes workloads.
- Click on Debug to view the k8s resources associated with the workload
- Click on Kubectl to establish a web-based, zero trust kubectl channel to the namespace on the EKS cluster where the workload is operating.
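From the kubectl channel, the usual read-only diagnostics can be run against the namespace. The pod name below is a placeholder; use the names returned by the first command.

```shell
# Typical troubleshooting commands from the zero trust kubectl channel
kubectl get pods -n nginx                 # list the workload's pods
kubectl describe pod <pod-name> -n nginx  # inspect events and status (pod name is a placeholder)
kubectl logs <pod-name> -n nginx          # view container logs
```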
Step 8: Access Workload¶
In this case, the default values.yaml file would have created a LoadBalancer service on AWS.
- Retrieve the LoadBalancer Service's URL for the workload and copy it
- Open a web browser and paste the LoadBalancer URL. You should see the Nginx welcome page.
```
$ kubectl get services
NAME    TYPE           CLUSTER-IP    EXTERNAL-IP                                                                PORT(S)        AGE
nginx   LoadBalancer   10.100.0.49   a46259a4d7036478bb7d69c2b5a95bb0-1585039972.us-west-1.elb.amazonaws.com   80:31959/TCP   110m
```
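Rather than copying the hostname out of the table by hand, the EXTERNAL-IP column can be parsed from the output, or queried directly with a jsonpath expression. The sketch below parses the sample service line from this exercise.

```shell
# The EXTERNAL-IP column (4th field) of the sample output above holds the ELB hostname.
svc_line='nginx LoadBalancer 10.100.0.49 a46259a4d7036478bb7d69c2b5a95bb0-1585039972.us-west-1.elb.amazonaws.com 80:31959/TCP 110m'
lb_host=$(echo "$svc_line" | awk '{print $4}')
echo "http://$lb_host"

# With cluster access, the same value can be read directly:
#   kubectl get service nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```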
Congratulations! You just created and deployed a Helm chart based workload to a remote Amazon EKS cluster. You also experienced what a developer would see when troubleshooting and debugging their workloads remotely.