
Installation

Here are the detailed instructions for installing the self-hosted controller on AWS EC2.


Preparation

  • Create an instance/node with the specifications described in Infrastructure Requirements above
  • Create wildcard DNS entries for the Controller domains mentioned in DNS Record Creation above, and point their A record to node/LB IP addresses
  • (Optional) Generate a wildcard certificate for the FQDN which is signed by a CA. Alternatively, configure the controller to use self-signed certificates
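For the optional CA-signed wildcard certificate, a private key and certificate signing request can be prepared with openssl ahead of time. This is a minimal sketch only; the file names and the *.controller.example.com domain are placeholders for your own FQDN.

# Generate a private key and CSR for the wildcard FQDN, to be submitted to your CA
# (file names and the domain below are illustrative)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout controller-wildcard.key -out controller-wildcard.csr \
  -subj "/CN=*.controller.example.com"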

Install RADM

  • Click here to download the controller installation package to the instance

  • From your home directory, untar the package using the command below

tar -xf rafay-controller-v*.tar.gz

  • Move the radm binary to /usr/bin, then copy and edit the config.yaml file
sudo mv ./radm /usr/bin/
cp -rp config.yaml-tmpl config.yaml
vi config.yaml
  • Customize the config.yaml (an illustrative example follows this list)
metadata.name: Name of the controller.
spec.networking.interface: Interface for controller traffic [optional].
spec.deployment.ha: True if this is an HA controller.
spec.repo.*.path: Path of the tar location.
spec.app-config.generate-self-signed-certs: Generates and uses self-signed certs for incoming core traffic.
spec.star-domain: Wildcard FQDN (*.example.com).
spec.override-config.global.enable_hosted_dns_server: True if DNS is not available.
spec.app-config.logo: Logo displayed in the UI.
spec.override-config.localprovisioner.basePath: Path for PVC volumes.
spec.override-config.core-registry-path: Path for registry images.
spec.override-config.etcd-path: Path where etcd data is saved.
spec.override-config.global.external_lb: Set to True to use an external LB.
spec.override-config.global.use_instance_role: Set to True to provision EKS clusters using the controller IAM role. Refer to the section ***Controller instance IAM role to provision EKS clusters***.
If the instance role is not used, the following parameters can be set to add a cross-account ID and credentials:
spec.override-config.global.secrets.aws_account_id
spec.override-config.global.secrets.aws_access_key_id
spec.override-config.global.secrets.aws_secret_access_key
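For illustration, a fragment of config.yaml with several of these fields filled in might look roughly like the following. This is a hedged sketch only: all values are placeholders, and the authoritative key layout is the config.yaml-tmpl shipped with the package.

metadata:
  name: rafay-controller                      # name of the controller
spec:
  deployment:
    ha: false                                 # true for an HA controller
  networking:
    interface: ens3                           # optional interface for controller traffic
  star-domain: "*.controller.example.com"     # wildcard FQDN
  app-config:
    generate-self-signed-certs: true          # self-signed certs for incoming core traffic
  override-config:
    global.enable_hosted_dns_server: false    # true if no DNS server is available
    global.external_lb: false                 # true to use an external LB
    global.use_instance_role: true            # true to provision EKS with the instance IAM role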
  • Initialize the Controller using the command shown below

sudo radm init --config config.yaml

  • Once initialization is complete, copy the admin kubeconfig file to the home directory to access the controller's Kubernetes API from the CLI.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) -R $HOME/.kube
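To confirm that CLI access to the controller's Kubernetes API works, list the nodes:

kubectl get nodes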
  • Install the dependencies required for the controller.

sudo radm dependency --config config.yaml

  • Install the controller application.

sudo radm application --config config.yaml

This will bring up all Rafay services.

Note: It will take approximately 20-30 minutes for all pods to be up and ready.

  • Before proceeding further, confirm that all pods are in the Running state using kubectl

kubectl get pods -A
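To spot anything that has not yet reached a running (or completed) state, you can also filter on pod phase:

kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded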


Accessing the Controller UI

  • Access the self-hosted controller UI at https://console.<controller.example.com> to verify that the installation was successful

  • You should see a screen similar to the image below when you access the UI.


  • Click the “Sign Up” link to create the first organization of the self-hosted controller
  • Register a new account for the organization as shown in the screenshot below


  • Log in to this organization with the newly registered account on the login screen

Upload Dependencies

Run the command below to enable support for Kubernetes cluster provisioning from the self-hosted controller and upload the dependencies required for cluster provisioning to the controller.

sudo radm cluster --config config.yaml


Controller Setup For EKS Clusters

Perform the following steps if you would like to provision and manage Amazon EKS clusters using the self-hosted controller.

Same AWS Account

Use the controller instance IAM role to provision EKS clusters in the same AWS account as the controller. The steps below configure the controller to use its instance role for this purpose.

  • Set the parameter global.use_instance_role: true in config.yaml to use this feature. If it is not already enabled, enable it in config.yaml and rerun the radm application command (a sketch of the config.yaml setting follows the command)

sudo radm application --config config.yaml
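For reference, this flag sits under the override-config section of config.yaml, mirroring the other global.* overrides shown earlier; a minimal sketch:

override-config:
  global.use_instance_role: true   # provision EKS clusters with the controller instance IAM role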

  • Create the IAM policy below for the controller EC2 instance to allow STS, PassRole, and CloudFormation access
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "sts:*",
            "Resource": "*"
        },
        {
            "Sid": "cloudformation",
            "Effect": "Allow",
            "Action": [
                "cloudformation:*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "iam",
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "*"
        }
    ]
}
  • Create a second IAM policy for provisioning EKS clusters, with the permissions described in EKS IAM Policy

  • After creating the two policies, create a new IAM role for the controller EC2 instance to use for EKS cluster provisioning, choosing EC2 as the use case


  • Under Policies, select the STS and EKS policies created above and attach them to the role as shown below:


  • Provide the role name and create the IAM Role


  • After the role is created, edit the trust relationship of the IAM role to trust the controller EC2 instance


  • Edit the trust relationship of the IAM role by replacing the Principal with the controller EC2 instance's “Account ID” and “Instance ID” in the format below.

arn:aws:sts::<accountid>:assumed-role/aws:ec2-instance/<instance id>
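For reference, the resulting trust policy might look roughly like the sketch below, which keeps the EC2 service principal from the original EC2 use case and adds the controller instance principal in the format above. This is a hedged illustration only; adjust it to your environment.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:sts::<accountid>:assumed-role/aws:ec2-instance/<instance id>"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}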


Attach the IAM role to the controller's EC2 instance. The controller is now trusted to use the instance IAM role to provision EKS clusters in the same AWS account as the controller EC2 instance.
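If you prefer to script these steps, the equivalent AWS CLI calls might look roughly like the following. This is a hedged sketch: the policy and role names (rafay-controller-*) and the JSON file names are placeholders, and the policy documents are the ones described above.

# Create the two policies from the JSON documents described above (file names are placeholders)
aws iam create-policy --policy-name rafay-controller-sts-policy --policy-document file://sts-cf-passrole-policy.json
aws iam create-policy --policy-name rafay-controller-eks-policy --policy-document file://eks-iam-policy.json

# Create the role with a trust policy and attach both policies
aws iam create-role --role-name rafay-controller-eks-role --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name rafay-controller-eks-role --policy-arn arn:aws:iam::<accountid>:policy/rafay-controller-sts-policy
aws iam attach-role-policy --role-name rafay-controller-eks-role --policy-arn arn:aws:iam::<accountid>:policy/rafay-controller-eks-policy

# Expose the role to the controller EC2 instance via an instance profile
aws iam create-instance-profile --instance-profile-name rafay-controller-eks-role
aws iam add-role-to-instance-profile --instance-profile-name rafay-controller-eks-role --role-name rafay-controller-eks-role
aws ec2 associate-iam-instance-profile --instance-id <instance id> --iam-instance-profile Name=rafay-controller-eks-role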

  • Follow the steps in Cloud Credentials to create the Cloud Credentials in the controller's console UI for EKS cluster provisioning, copying the ARN of the IAM role created above into the “Role ARN” field

  • Follow the steps mentioned in Create Cluster to provision an EKS cluster in the same AWS account as the controller using the above cloud credentials.

Different AWS Accounts

Use an AWS Account ID, Access Key, and Secret to provision EKS clusters either in the same AWS account as the controller EC2 instance or in a different one. The steps below configure the controller with these credentials.

  • Set the parameter global.use_instance_role: false in config.yaml to use this feature

If it is currently enabled, disable it in config.yaml and rerun the radm application command.

sudo radm application --config config.yaml

  • Access the operations console of the controller via https://ops-console.<controller.example.com> and use the super-user credentials in config.yaml to log in

  • After logging in, run the curl command below from your local machine to update the AWS key and secret on the controller

The CSRF token and rsid can be obtained from the browser's developer tools (inspect) after login. Replace the placeholder values in block capitals (account ID, access key, and secret key) and the token/rsid values with your own.

curl -X PUT 'https://ops-console.<controller.example.com>/edge/v1/providers/rx28oml/?partner_id=rx28oml&organization_id=rx28oml' \
 -H 'authority: ops-console.<controller.example.com>' \
 -H 'x-rafay-partner: rx28oml' \
 -H 'accept: application/json, text/plain, */*' \
 -H 'x-csrftoken: BuSAE3rVCGCwO45N8ne2nKyXiiR53ZL2xPNi6qk2MuVvKHytdH4nKGCtkZZHajN3' \
 -H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.192 Safari/537.36' \
 -H 'content-type: application/json;charset=UTF-8' \
 -H 'origin: https://ops-console.<controller.example.com>' \
 -H 'sec-fetch-site: same-origin' \
 -H 'sec-fetch-mode: cors' \
 -H 'sec-fetch-dest: empty' \
 -H 'referer: https://ops-console.<controller.example.com>/' \
 -H 'accept-language: en' \
 -H 'cookie: logo_link=/logo.png; [email protected]; csrftoken=BuSAE3rVCGCwO45N8ne2nKyXiiR53ZL2xPNi6qk2MuVvKHytdH4nKGCtkZZHajN3; rsid=9zmsei147cok9qjqq2mkyxwqpps4ixr0' \
 --data-raw '{"id":"rx28oml","partner_id":"rx28oml","credentials":"{\"account_id\":\"790000000230\",\"access_id\":\"YOUR_AWS_ACCESS_KEY_ID\",\"secret_key\":\"YOUR_AWS_ACCESS_SECRET_KEY\"}","provider":1,"credential_type":0,"name": "default-partner-credentials-1","delegate_account":true, "created_at": "2021-09-25T15:08:03.356164Z"}' \
--compressed \
 --insecure
  • Follow the steps provided in EKS Credentials to create credentials for provisioning EKS clusters through the controller console UI

  • Follow the steps mentioned in Cluster Provisioning for EKS cluster provisioning

Provision EKS cluster with custom AMI

Follow the steps mentioned in Cluster Provisioning for EKS cluster provisioning, and on the customization screen add the custom AMI ID as shown below.



Multiple Interface Support

The self-hosted controller supports multiple interfaces, which can be selected in the config.yaml file during initialization. The selected interface is used for all connections related to the controller's apps and Kubernetes. By default, the primary interface is used.

spec:
  networking:
    interface: ens3

Note that a few pods that use host networking, such as the monitoring/metrics pods, do not adhere to the interface selection at the Kubernetes layer and still use the default interface. If complete traffic isolation on the interface is required, we recommend adding the routing rules below on your controller and clusters.

ip route add 10.96.0.0/12 dev <secondary-interface>
ip route add 10.224.0.0/16 dev <secondary-interface>

Hosted DNS support

In the absence of DNS servers in the controller and cluster environments, the cluster has no way to communicate with the controller. In this case, the self-hosted controller can host its own DNS server and propagate the records to the cluster.

Hosted DNS can be enabled in the controller's config.yaml using the flag below.

override-config:
  global.enable_hosted_dns_server: true

To access the controller UI from your local machine, add an /etc/hosts entry pointing the console FQDN to your controller IP.

123.456.789.012 console.<controller.example.com>
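To verify that the hosted DNS server is answering for the controller FQDNs, a quick check with dig can help (assuming dig is installed; <controller-ip> is your controller's IP):

dig @<controller-ip> console.<controller.example.com> +short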

While provisioning clusters, pass the controller IP as the DNS server to the conjurer command (the -dns-server <Controller-IP> flag in the example below).

tar -xjf conjurer-linux-amd64.tar.bz2 && sudo ./conjurer -edge-name="test" \
  -passphrase-file="passphrase.txt" -creds-file="credentials.pem" \
  -dns-server <Controller-IP>