

Here are the detailed instructions for installing the self-hosted controller in bare metal/VM environments.


  • Create an instance/node with the specifications described in Infrastructure Requirements
  • Create wildcard DNS entries for the controller domains mentioned in DNS Record Creation above, and point their A records to the node/LB IP addresses
  • (Optional) Generate a wildcard certificate for the FQDN, signed by a CA. Alternatively, configure the controller to use self-signed certificates
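For illustration only, assuming a hypothetical controller domain rafay.example.com and node IP 203.0.113.10 (both placeholders, not values from this guide), the wildcard zone entry would look like:

```
; wildcard A record covering console.rafay.example.com and the other controller FQDNs
*.rafay.example.com.    300    IN    A    203.0.113.10
```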

Install RADM

  • Download the controller installation package to the instance

  • From your home directory, untar the package using the command below

tar -xf rafay-controller-v*.tar.gz

  • Move the radm binary into your PATH, then copy and edit the config.yaml file
sudo mv ./radm /usr/bin/
cp -rp config.yaml-tmpl config.yaml
vi config.yaml
  • Customize config.yaml for your environment. Key settings include:

Name of the controller
spec.networking.interface: Interface for controller traffic [optional]
spec.deployment.ha: True if this is an HA controller
spec.repo.*.path: Path of the tar location
Whether to generate and use self-signed certs for incoming core traffic
The wildcard FQDN for the controller
Whether DNS is unavailable (see Hosted DNS support below)
Whether to display the logo in the UI
spec.override-config.localprovisioner.basePath: Path for PVC volumes
spec.override-config.core-registry-path: Path for registry images
spec.override-config.etcd-path: Path where etcd data is saved
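As a rough sketch only, the override-related parts of config.yaml might look like the fragment below. The authoritative schema is the config.yaml-tmpl shipped in the package; the exact nesting shown here and the example paths are assumptions.

```yaml
spec:
  networking:
    interface: ens3            # optional: interface for controller traffic
  deployment:
    ha: false                  # true for an HA controller
  override-config:
    localprovisioner.basePath: /data/local-provisioner   # PVC volumes
    core-registry-path: /data/registry                   # registry images
    etcd-path: /data/etcd                                # etcd data
```

Verify each field against config.yaml-tmpl before running radm init.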
  • Initialize the controller using the command shown below

sudo radm init --config config.yaml

  • Once initialization is complete, copy the admin kubeconfig file to your home directory to access the controller's Kubernetes API from the CLI.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) -R $HOME/.kube
  • Install the dependencies required for the controller.

sudo radm dependency --config config.yaml

  • Install the controller application.

sudo radm application --config config.yaml

This will bring up all the controller services.

Note: It takes approximately 20-30 minutes for all pods to be up and ready.

  • Before proceeding further, confirm that all pods are in the Running state using kubectl.

kubectl get pods -A
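The readiness check above can be scripted. Below is a minimal sketch: the helper counts pods whose STATUS column is neither Running nor Completed, and the sample lines stand in for live `kubectl get pods -A --no-headers` output (the pod names are made up). In a real run you would pipe the kubectl output into the helper and wait for it to print 0.

```shell
#!/bin/sh
# not_ready_count: read lines in the format of `kubectl get pods -A --no-headers`
#   NAMESPACE  NAME  READY  STATUS  RESTARTS  AGE
# and print how many pods have a STATUS other than Running or Completed.
not_ready_count() {
  awk '$4 != "Running" && $4 != "Completed" { n++ } END { print n+0 }'
}

# Sample data standing in for live kubectl output (hypothetical pods):
sample='kube-system   coredns-abc12    1/1   Running   0   5m
rafay-core    core-api-xyz34   0/1   Pending   0   2m'

printf '%s\n' "$sample" | not_ready_count   # prints 1 (the Pending pod)
```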

Access Console

  • Access the Controller UI at https://console.<fqdn> to verify that the installation was successful

  • You should see the controller's sign-in screen when you access the console


  • Click the “Sign Up” link to create the first organization of the self-hosted controller
  • Register a new account for the organization


  • Log in to this organization with the newly registered account on the login screen

Upload Cluster Dependencies

Run the following command to upload dependencies for Kubernetes cluster provisioning to the controller.

sudo radm cluster --config config.yaml

Multiple Interface Support

The controller supports multiple interfaces; the interface can be set in the config.yaml file during initialization. The selected interface is used for all connections related to the controller application and Kubernetes. By default, the primary interface is used.

   interface: ens3

Note that a few pods which use host networking, such as the monitoring/metrics pods, do not adhere to the interface selection at the Kubernetes layer and still use the default interface. If complete traffic isolation on the interface is needed, we recommend adding the routing rules below on your controller and clusters.

ip route add <destination-network> dev <secondary-interface>

Hosted DNS support

In the absence of DNS servers in the infrastructure and cluster environment, the managed clusters may have no way to communicate with the self-hosted controller. In this case, the self-hosted controller can also host its own DNS server and propagate the records to the clusters.

Hosted DNS can be enabled in config.yaml using the flag below.

  global.enable_hosted_dns_server: true

To access the controller UI from your local machine, add an /etc/hosts entry pointing the console FQDN to your controller IP.

<controller-IP> console.<fqdn>

While provisioning clusters, add the “-dns-server” flag to the conjurer command.

tar -xjf conjurer-linux-amd64.tar.bz2 && sudo ./conjurer -edge-name="test" \
  -passphrase-file="passphrase.txt" -creds-file="credentials.pem" \
  -dns-server <controller-IP>