CLI

Google Kubernetes Engine (GKE) is a fully managed Kubernetes service provided by Google Cloud. Rafay integrates with GKE so that users can provision GKE clusters in any region and Google Cloud project using the RCTL CLI.


Create Cluster Via RCTL

Step 1: Cloud Credentials

Use the command below to create a GCP credential via RCTL

./rctl create credential gcp <credentials-name> <location of credentials JSON file>

On successful creation, use this credential in the cluster config file to create a GKE cluster
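
For example, to create a credential named gke-cred (the name referenced by cloudCredentials in the spec below) from a downloaded service account key, the invocation would look like this; the key file path is illustrative:

./rctl create credential gcp gke-cred /path/to/gcp-service-account.json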


Step 2: Create Cluster

Users can create the cluster based on a version-controlled cluster spec stored in a Git repository. This enables users to develop automation for reproducible infrastructure.

./rctl apply -f cluster-spec.yml
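
As a sketch of the Git-driven workflow described above, a pipeline could check out the version-controlled spec and apply it; the repository URL and file layout are illustrative:

git clone https://github.com/example-org/cluster-specs.git
./rctl apply -f cluster-specs/gke/cluster-spec.yml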

Here's an illustrative example of a YAML file for a regional GKE cluster, featuring the following components:

  • two (2) node pools
  • auto-upgrade enabled on the default node pool
  • node pool upgrade strategy set to Surge (see the note after the spec)

apiVersion: infra.k8smgmt.io/v2
kind: Cluster
metadata:
    name: gke-cluster
    project: default-project
spec:
    blueprint:
        name: default
        version: latest
    cloudCredentials: gke-cred
    config:
        controlPlaneVersion: "1.22"
        location:
            region:
                region: us-east1
                zone: us-east1-b
            type: regional
        name: gke-cluster
        network:
            enableVPCNativeTraffic: true
            maxPodsPerNode: 75
            name: default
            networkAccess:
                privacy: public
            nodeSubnetName: default
        nodePools:
            - machineConfig:
                bootDiskSize: 100
                bootDiskType: pd-standard
                imageType: COS_CONTAINERD
                machineType: e2-medium
              name: default-nodepool
              nodeMetadata:
                gceInstanceMetadata:
                    - key: org-team
                      value: qe-cloud
                kubernetesLabels:
                    - key: nodepool-type
                      value: default-np
              nodeVersion: "1.22"
              size: 2
              management:
                autoUpgrade: true
              upgradeSettings:
                strategy: SURGE
                surgeSettings:
                  maxSurge: 0
                  maxUnavailable: 1
            - machineConfig:
                bootDiskSize: 60
                bootDiskType: pd-standard
                imageType: COS_CONTAINERD
                machineType: e2-medium
              name: pool2
              nodeMetadata:
                gceInstanceMetadata:
                    - key: org-team
                      value: qe-cloud
                kubernetesLabels:
                    - key: nodepool-type
                      value: nodepool2
              nodeVersion: "1.22"
              size: 2
        project: project1
        security:
            enableLegacyAuthorization: true
            enableWorkloadIdentity: true
    type: Gke
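
In this spec, auto-upgrade and the Surge upgrade strategy are configured on the default node pool. With maxSurge: 0 and maxUnavailable: 1, GKE upgrades the pool one node at a time in place, without creating temporary surge nodes; setting maxSurge: 1 and maxUnavailable: 0 would instead bring up a new node before draining an old one.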

On successful provisioning, you can view the cluster details as shown below.

[Image: Successfully provisioned GKE cluster]

For more GKE cluster spec examples, refer here


Cluster Sharing

For cluster sharing, add a sharing block to the cluster config (Rafay Spec) as shown in the config file below

apiVersion: infra.k8smgmt.io/v2
kind: Cluster
metadata:
  labels:
    rafay.dev/clusterName: demo-gke-cluster
    rafay.dev/clusterType: gke
  name: demo-gke-cluster
  project: defaultproject
spec:
  blueprint:
    name: minimal
    version: latest
  cloudCredentials: demo-cred
  config:
    controlPlaneVersion: "1.24"
    location:
      type: zonal
      zone: us-west1-c
    name: demo-gke-cluster
    network:
      enableVPCNativeTraffic: true
      maxPodsPerNode: 110
      name: default
      networkAccess:
        privacy: public
      nodeSubnetName: default
    nodePools:
    - machineConfig:
        bootDiskSize: 100
        bootDiskType: pd-standard
        imageType: COS_CONTAINERD
        machineType: e2-standard-4
      name: default-nodepool
      nodeMetadata:
        nodeTaints:
        - effect: NoSchedule
          key: k1
      nodeVersion: "1.24"
      size: 3
    - machineConfig:
        bootDiskSize: 100
        bootDiskType: pd-standard
        imageType: COS_CONTAINERD
        machineType: e2-standard-4
      name: pool2
      nodeVersion: "1.24"
      size: 3
    project: dev-382813
  sharing:
    enabled: true
    projects:
    - name: "demoproject1"
    - name: "demoproject2"
  type: Gke

You can also use the wildcard operator "*" to share the cluster across all projects

sharing:
    enabled: true
    projects:
    - name: "*"

Note: When using the wildcard operator, you cannot pass other project names

To remove cluster sharing from one or more projects, remove those project names from the sharing block and run the apply command, as illustrated below
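
For example, if the cluster above is currently shared with demoproject1 and demoproject2, removing demoproject2 from the block and re-applying stops sharing with that project:

sharing:
    enabled: true
    projects:
    - name: "demoproject1"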


List Clusters

To retrieve a specific GKE cluster, use the command below

./rctl get cluster <gkecluster_name>

Output

./rctl get cluster demo-gkecluster
+------------------------+-----------+-----------+---------------------------+
| NAME                   | TYPE      | OWNERSHIP | PROVISION STATUS          |
+------------------------+-----------+-----------+---------------------------+
| demo-gkecluster        | gke       | self      | INFRA_CREATION_INPROGRESS |
+------------------------+-----------+-----------+---------------------------+

To retrieve the details of a specific cluster in v3 format, use the command below

./rctl get cluster demo-gkecluster --v3

Example

./rctl get cluster demo-gkecluster --v3
+------------------------+-------------------------------+-----------+----------+-----------+---------------------------+
| NAME                   | CREATED AT                    | OWNERSHIP | TYPE     | BLUEPRINT | PROVISION STATUS          |
+------------------------+-------------------------------+-----------+----------+-----------+---------------------------+
| demo-gkecluster        | 2023-06-05 10:54:08 +0000 UTC | self      | gke      | minimal   | INFRA_CREATION_INPROGRESS |
+------------------------+-------------------------------+-----------+----------+-----------+---------------------------+

To view the entire v3 cluster config spec, use the command below

./rctl get cluster <gkecluster_name> --v3 -o json

(or)

./rctl get cluster <gkecluster_name> --v3 -o yaml

Download Cluster Config

Use the command below to download the v3 cluster config file

./rctl get cluster config <cluster-name> --v3

Important

Download the cluster configuration only after the cluster is completely provisioned
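
Assuming the command writes the configuration to stdout, you can redirect it to a file to keep it under version control; the file name is illustrative:

./rctl get cluster config demo-gke-cluster --v3 > demo-gke-cluster.yaml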


Node Pool Management

To add, edit, scale, upgrade, or delete node pool(s), make the required changes in the GKE cluster config spec and run the apply command, as sketched below
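
For example, to scale pool2 from the first spec above from 2 nodes to 3, change its size field and re-apply. Only the relevant fragment of the spec is shown:

nodePools:
    - name: pool2
      size: 3   # previously 2

./rctl apply -f cluster-spec.yml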


Delete Cluster

Deleting a cluster also cleans up the corresponding resources in Google Cloud

./rctl delete cluster <cluster_name>
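
For example, to delete the cluster provisioned from the first spec above:

./rctl delete cluster gke-cluster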