
Requirements

The Self Hosted Controller can be installed in a Google Cloud environment. This allows users to host and manage the controller in their own cloud environment.

The prerequisites for the self-hosted controller are:

  • A management host for creating the GKE cluster. For example, a Google Cloud virtual machine.
  • A GKE Kubernetes cluster.
  • A database. For example, a Google Cloud SQL for PostgreSQL database.
  • A network-attached storage system. For example, a Google Filestore instance.
  • A DNS domain for the controller.

Management VM for Installation

Set up a virtual machine in Google Cloud to use for the administrative work of installing the controller in Google GKE.

VM Prerequisites

  • Operating System: CentOS 7
  • CPU: 4 cores
  • RAM: 8 GB
  • Storage: 500 GB
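A minimal sketch of creating such a VM with the gcloud CLI, assuming the Compute Engine API is already enabled. The instance name and zone below are placeholders; the custom machine type e2-custom-4-8192 provides 4 vCPUs and 8 GB of RAM to match the prerequisites.

```shell
# Sketch only: instance name and zone are placeholders.
# e2-custom-4-8192 = 4 vCPUs, 8192 MB RAM; boot disk sized per the prerequisites.
gcloud compute instances create controller-mgmt-vm \
  --zone=us-central1-c \
  --machine-type=e2-custom-4-8192 \
  --image-family=centos-7 \
  --image-project=centos-cloud \
  --boot-disk-size=500GB
```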

VM Setup

Create a Repo file

Create a repository file on the node where you run radm commands. The tee command creates the google-cloud-sdk.repo file (appending if it already exists) and displays the contents.

  1. In the Google Cloud console, search for and click on VM Instances.
  2. Open the SSH window for the virtual machine.
  3. Copy and paste the following command into the terminal.
  4. Press Enter.
sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el8-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM
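Because tee -a appends, running the command a second time leaves duplicate stanzas in the repo file; run cat /etc/yum.repos.d/google-cloud-sdk.repo to verify the result. The tee-with-heredoc pattern itself can be exercised safely on a temporary file:

```shell
# Demonstrates the tee-with-heredoc pattern on a temp file
# (the real command targets /etc/yum.repos.d/google-cloud-sdk.repo).
tmpfile=$(mktemp)
tee -a "$tmpfile" > /dev/null << 'EOM'
[google-cloud-sdk]
name=Google Cloud SDK
EOM
grep -q '^name=Google Cloud SDK$' "$tmpfile" && echo "repo stanza written"
rm -f "$tmpfile"
```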

Install Google Cloud SDK

Run the following command to install the Google Cloud SDK.

Note

The installation may take some time.

sudo yum -y install google-cloud-sdk

Initialize GCloud

Run the following command as a normal user. The --console-only flag prevents the command from launching a web browser on the VM. Instead, the command generates a URL that you use to authorize gcloud.

  1. Run the init command.
gcloud init --console-only
  2. Select Log in with a new account. It should be option 2.

Note

The above command is for versions 379.0.0-1 and above. For earlier versions, use gcloud init.

  3. When asked if you want to continue, type y and press Enter.

  4. Copy the URL and paste it into a web browser.

  5. You will be asked to log in to a Google account. Use the Google account used for Google Cloud Platform.

  6. You will be asked to give Google Cloud SDK access to your Google account. Accept the conditions.

  7. Copy the Authorization Code, paste it into the terminal for the VM, then press Enter.

  8. Select the cloud project to use. This is the project with the Kubernetes cluster. Type in the project number, press Enter, then confirm the action.

  9. Optionally, select which Google Compute Engine zone to use. This is the zone with the Kubernetes cluster. Type in the zone number, then press Enter.

Initialize GCloud as Root

Run the following command as a root user.

  1. Switch to the root user.
sudo su
  2. Run the init command in the VM SSH window.
gcloud init --console-only
  3. Select Log in with a new account. It should be option 2.

Note

The above command is for versions 379.0.0-1 and above. For earlier versions, use gcloud init.

  4. When asked if you want to continue, type y and press Enter.

  5. Copy the URL and paste it into a web browser.

  6. Log in to the Google account used for Google Cloud Platform.

  7. Accept the conditions.

  8. Copy the Authorization Code and paste it into the terminal for the VM.

  9. Select the cloud project to use. This is the project with the Kubernetes cluster. Type in the project number, press Enter, then confirm the action.

  10. Optionally, select which Google Compute Engine zone to use. Type in the zone number, then press Enter.

  11. Run the following command to exit the root shell.

exit

Install Services

Run the following commands to install additional components and enable the required Google Cloud services.

sudo yum -y install google-cloud-sdk-app-engine-go
sudo yum -y install kubectl
gcloud services enable file.googleapis.com
gcloud services enable sqladmin.googleapis.com

Add NFS Information

Run the following commands to set shell variables with the file server information. This file server will be created during the installation process.

FS=<nfs-fileserver name>
PROJECT=<project name>
ZONE=us-west2-a

Example:

FS=projectnfs
PROJECT=controller-358320
ZONE=us-central1-c

Install Helm

If Helm is not installed, execute the following commands.

Note

OpenSSL is required to run ./get_helm.sh. Run sudo yum install openssl to install OpenSSL.

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

DNS Record Creation

Installation of the self-hosted controller requires the DNS records listed below. In these examples, replace company.example.com with the desired domain. The DNS records should point to the controller nodes’ IP addresses.

The following is an example of a wildcard record:

*.company.example.com

The following individual records should be added. For Google Cloud DNS, add these as Record Sets.

  1. api.<company.example.com>
  2. console.<company.example.com>
  3. fluentd-aggr.<company.example.com>
  4. ops-console.<company.example.com>
  5. rcr.<company.example.com>
  6. peering.<company.example.com>
  7. regauth.<company.example.com>
  8. *.core.<company.example.com>
  9. *.core-connector.<company.example.com>
  10. *.kubeapi-proxy.<company.example.com>
  11. *.user.<company.example.com>
  12. *.cdrelay.<company.example.com>
  13. ui.<company.example.com>
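To reduce typos when entering these records, the full list of names can be generated from the domain with a short shell loop. DOMAIN is a placeholder; each printed name can then be added as a Record Set in Google Cloud DNS.

```shell
# Print every required record name for a given domain.
# DOMAIN is a placeholder; replace with your actual controller domain.
DOMAIN=company.example.com
for rec in api console fluentd-aggr ops-console rcr peering regauth \
           '*.core' '*.core-connector' '*.kubeapi-proxy' '*.user' '*.cdrelay' ui; do
  echo "${rec}.${DOMAIN}"
done
```

Each of the 13 printed names corresponds to one record in the list above.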

Logo (Optional)

A company logo, in PNG format and less than 600 KB in size, for white labeling and branding purposes.
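A small helper to sanity-check the logo before upload can be sketched as follows. check_logo is a hypothetical name, and the PNG check simply looks for the standard PNG signature bytes:

```shell
# Hypothetical helper: verifies a file is a PNG under 600 KB.
check_logo() {
  local f="$1" size
  # File size in bytes (GNU stat first, BSD stat as fallback).
  size=$(stat -c%s "$f" 2>/dev/null || stat -f%z "$f")
  [ "$size" -lt 614400 ] || { echo "logo too large: ${size} bytes"; return 1; }
  # A PNG file starts with the signature bytes 89 50 4e 47.
  head -c 8 "$f" | od -An -tx1 | grep -q '89 50 4e 47' || { echo "not a PNG"; return 1; }
  echo "logo OK (${size} bytes)"
}
```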


X.509 Certificates (Optional)

The controller uses TLS for secure communication. As a result, X.509 certificates are required to secure all endpoints. Customers are expected to provide a trusted CA signed wildcard certificate for the target DNS domain (e.g. *.company.example.com).

For non-production or internal-to-organization scenarios, if signed certificates are not available, the controller can generate self-signed certificates automatically. To enable this, set the generate-self-signed-certs key to "True" in config.yaml during installation.
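As a sketch only (the exact structure of config.yaml depends on your installer version), the relevant key might appear as:

```
# config.yaml fragment - enables automatic self-signed certificate
# generation; placement within the file depends on the installer's schema.
generate-self-signed-certs: "True"
```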


Email Addresses

The installation also requires the following email addresses.

  • An email address for super user authentication to the controller’s admin console
  • An email address for receiving support emails from the controller
  • An email address for receiving alerts and notifications (Optional)

Creation of GKE Cluster

Create New Project

  • In Google Cloud, click on the project name in the top menu bar. The Select a project window opens.
  • Click New Project.
  • Enter the project details.
  • Click Create.
  • Enable the Compute Engine API.


Create K8s cluster

  • Search for and select the Kubernetes Engine.
  • Click Create.
  • Note: If the Kubernetes Engine API is not enabled, click Enable.


Select GKE Standard

  • On the Create Cluster page, under GKE Standard, click Configure.

Enter cluster details

  • Enter the basic details for the cluster.

Update Node Pool

  • Under Node Pool, click on a node pool name. The Node Pool details display.
  • Edit the Node Pool name, if necessary.


Update Node

  • Under the node pool name, click Nodes.
  • For the Image Type, select Container Optimized OS with containerd (cos_containerd). If necessary, confirm the selection.
  • For the Machine Type, select e2-standard-16.
  • For Boot Disk, set the size to 500GB.


Update Security

  • Under Node Pools, click on Security.
  • Select Allow full access to all Cloud APIs.
  • Disable Enable integrity monitoring (uncheck the box).


Update Network

  • Under Cluster, click on Networking.
  • Select Enable VPC-native traffic routing (uses alias IP) and Enable Kubernetes Network Policy.
  • Disable Enable HTTP load balancing.


Update Cluster Security

  • Under Cluster, click on Security.
  • Check the options Enable Shielded GKE Nodes and Enable Workload Identity.


Update Cluster Features

  • Under Cluster, click on Features.
  • Deselect Enable Cloud Logging and Enable Cloud Monitoring (uncheck the boxes).
  • Select Enable Compute Engine Persistent Disk CSI Driver.

Finalize and Create

  • Click the Create button to create the GKE cluster.
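The console selections above can also be approximated with a single gcloud command. This is a sketch, not a verified invocation: flag names (particularly --logging and --monitoring) vary across SDK versions, and the cluster name and zone are placeholders.

```shell
# Sketch: approximates the console choices above; verify each flag
# against your gcloud version before running.
# Listing only GcePersistentDiskCsiDriver in --addons leaves HTTP load
# balancing disabled, matching the console step.
gcloud container clusters create cluster-1 \
  --zone us-central1-c \
  --machine-type e2-standard-16 \
  --disk-size 500 \
  --image-type COS_CONTAINERD \
  --scopes cloud-platform \
  --no-shielded-integrity-monitoring \
  --enable-ip-alias \
  --enable-network-policy \
  --addons GcePersistentDiskCsiDriver \
  --enable-shielded-nodes \
  --workload-pool ${PROJECT}.svc.id.goog \
  --logging NONE --monitoring NONE
```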

Configure gcloud

Execute the below commands on the GCP VM to configure gcloud. This is the virtual machine you created at the beginning of this exercise.

sudo yum install google-cloud-sdk-gke-gcloud-auth-plugin
gcloud config set project ${PROJECT}
gcloud container clusters get-credentials <cluster name> --region <region> --project ${PROJECT}
cp ~/.kube/config gke-config.yaml

Example:

sudo yum install google-cloud-sdk-gke-gcloud-auth-plugin
gcloud config set project ${PROJECT}
gcloud container clusters get-credentials cluster-1 --region us-central1-c --project ${PROJECT}
cp ~/.kube/config gke-config.yaml


Create Postgres Database

Create Instance

  • In Google Cloud, search for and select SQL.
  • Click Create Instance.

Choose PostgreSQL

  • Click Choose PostgreSQL.

Enter instance information

  • Enter instance information. Update the settings based on your requirements.
  • (Optional) Selecting Multiple zones for high availability (HA) is recommended.


Network Connection

  • Click on SHOW CONFIGURATION OPTIONS.
  • Enable Private IP under CONNECTIONS.
  • Select default under Network.
  • Click Set Up Connection. If necessary, enable the Networking API.
  • Select Use an automatically selected IP range under the Allocate IP range and click Continue.
  • Click Create Connection. This can take a few minutes.


Authorized Network

  • Click Add Network under Authorized networks.
  • Enter a name and the public IP of the node where you run radm commands (the GCP VM). The Name field does not need to match the exact node name.
  • Click Done and Create Instance.



Create a Filestore Instance

Execute the below command on the node where you run radm commands.

gcloud filestore instances create ${FS} --project=${PROJECT} \
    --zone=${ZONE} --tier=STANDARD \
    --file-share=name="volumes",capacity=1TB \
    --network=name="default"

Show Filestore Instance

Run the following command to show metadata for a Filestore instance.

gcloud filestore instances describe ${FS} --location=${ZONE}

Set Filestore IP Address

Run the following command to store the Filestore instance IP address in FSADDR. FSADDR is used in later commands.

FSADDR=$(gcloud filestore instances describe ${FS} \
--project=${PROJECT} \
--zone=${ZONE} \
--format="value(networks.ipAddresses[0])")

To check the FSADDR value, run echo $FSADDR. The IP address of the Filestore instance is displayed.

Set the ACCOUNT Variable

Run the following command to set the ACCOUNT variable.

ACCOUNT=$(gcloud config get-value core/account)

Cluster Role

Run the following command to grant your account cluster-admin privileges on the cluster.

kubectl create clusterrolebinding core-cluster-admin-binding --user ${ACCOUNT} --clusterrole cluster-admin

Install the NFS-Client Helm chart

Run the following commands to install the NFS-client helm chart.

  • Add the Helm repo.

    helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    
  • Install the Helm chart.

    helm install nfs-cp nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=${FSADDR} --set nfs.path=/volumes --set storageClass.accessModes=ReadWriteMany -n nfs-client-provisioner --create-namespace --kubeconfig <config file from the GKE cluster>
    

    Example:

    helm install nfs-cp nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=${FSADDR} --set nfs.path=/volumes --set storageClass.accessModes=ReadWriteMany -n nfs-client-provisioner --create-namespace --kubeconfig gke-config.yaml
    


Setup Backup and Restore

Create a backup and restore process for disaster recovery of the Kubernetes cluster.

This example creates a Google Cloud Storage (GCS) bucket, a service account, and a role to use with Velero to backup the Kubernetes cluster.

Create the Bucket

Use the following commands to set the bucket name and then create the bucket.

BUCKET=<bucket_name>

gsutil mb gs://$BUCKET/

Create a Service Account

Use the following commands to create a Google Service Account for the backup.

gcloud config list

PROJECT_ID=$(gcloud config get-value project)

GSA_NAME=backup-server # Service Account name 
GSA_DISPLAY_NAME="Backup service account" # Service Account title 
SERVER_ROLE_TITLE="BackupServerRole" # Service Role Title (should not contain spaces)
SERVER_ROLE=backupServerRole # Server Role name 

gcloud iam service-accounts create $GSA_NAME --display-name "$GSA_DISPLAY_NAME" 

gcloud iam service-accounts list 

SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:$GSA_DISPLAY_NAME" \
  --format 'value(email)')

Create a Custom Role for the Backup

Use the following commands to create a custom role, with the proper permissions, for the Backup account.

ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
    storage.objects.create
    storage.objects.delete
    storage.objects.get
    storage.objects.list
    iam.serviceAccounts.signBlob
)

gcloud iam roles create $SERVER_ROLE \
    --project $PROJECT_ID \
    --title $SERVER_ROLE_TITLE \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
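The --permissions argument above is built by a common bash idiom: inside the command substitution, IFS is set to a comma so that "${ROLE_PERMISSIONS[*]}" expands to the array elements joined by commas. A self-contained illustration:

```shell
# "${arr[*]}" joins array elements using the first character of IFS;
# setting IFS="," inside the subshell yields a comma-separated string.
perms=(compute.disks.get compute.disks.create storage.objects.list)
joined=$(IFS=","; echo "${perms[*]}")
echo "$joined"   # compute.disks.get,compute.disks.create,storage.objects.list
```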

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role projects/$PROJECT_ID/roles/$SERVER_ROLE

gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}

gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:admin gs://${BUCKET}

Bind Backup Service Account

This example uses Velero for the backup process.

Get Service Account created by RADM

Use the following commands to set the name and namespace of the backup Service Account created by RADM.

KSA_NAME=rafay-velero-sa # Do not change the name

NAMESPACE=velero # Do not change the name

Add IAM Policy Binding

Use the following commands to add an IAM Policy Binding to bind the Backup Kubernetes Service Account to a Google Cloud Service Account.

gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:$PROJECT_ID.svc.id.goog[$NAMESPACE/$KSA_NAME]" \
    $GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com

Create Service Account for External DNS

Create a service account for the external DNS and bind it to the external DNS service account created by RADM in the Kubernetes cluster.

Note

After running the echo command, note the service account email address. This will be used when installing the controller.

sa_name="test-external-dns-sa"
sa_display_name="test external dns sa"
gcloud iam service-accounts create $sa_name --display-name="$sa_display_name"

sa_email=$(gcloud iam service-accounts list --format='value(email)' --filter="displayName:$sa_display_name")

echo $sa_email

PROJECT_ID=$(gcloud config get-value project)

gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$sa_email" --role=roles/dns.admin

gcloud iam service-accounts add-iam-policy-binding "$sa_email"  --member="serviceAccount:$PROJECT_ID.svc.id.goog[kube-system/rafay-external-dns-sa]" --role=roles/iam.workloadIdentityUser

Example:

sa_name=test-sa-account
sa_display_name="test external dns sa"
gcloud iam service-accounts create $sa_name --display-name="$sa_display_name"

sa_email=$(gcloud iam service-accounts list --format='value(email)' --filter="displayName:$sa_display_name")

echo $sa_email

PROJECT_ID=$(gcloud config get-value project)

gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$sa_email" --role=roles/dns.admin

gcloud iam service-accounts add-iam-policy-binding "$sa_email"  --member="serviceAccount:$PROJECT_ID.svc.id.goog[kube-system/rafay-external-dns-sa]" --role=roles/iam.workloadIdentityUser