
Install a single-node self-hosted controller in bare metal/VM server environments.


  • Create an instance/node with the specifications described in Infrastructure Requirements
  • Create wildcard DNS entries for the Controller domains mentioned in DNS Record Creation above, and point their A records to the node/LB IP addresses
  • (Optional) Generate a wildcard certificate for the FQDN which is signed by a CA. Alternatively, configure the controller to use self-signed certificates

Watch a video showcasing installation of the self-hosted controller in an air-gapped environment.

Install RADM

  • Download the controller installation package to the Bare Metal/VM server.

  • From your home directory, untar the package using the command below.

tar -xf rafay-controller-v*.tar.gz

Example: tar -xf rafay-controller-v1.6-21.tar.gz

  • Move the radm binary to /usr/bin.
sudo mv ./radm /usr/bin/

Customize the config.yaml

  • Copy the config.yaml file.
cp -rp config.yaml-tmpl config.yaml
  • Edit the config.yaml file.
vi config.yaml

When modifying the config.yaml file, it is recommended to update the following settings:

spec.networking.interface: Interface for controller traffic [optional]
spec.deployment.ha: True if it is an HA controller
spec.repo.*.path: Path of the tar location. There are multiple paths to update. Example: change /home/centos/ to /home/folder_name/
spec.override-config.localprovisioner.basePath: Path for PVC volumes
spec.override-config.core-registry-path: Path for registry images
spec.override-config.etcd-path: Path where etcd data is saved

Other settings in the template control the name of the controller, whether self-signed certificates are generated and used for incoming core traffic, the wildcard FQDN, hosted DNS (set to true if DNS is not available), and whether the logo is displayed in the UI.

Note: The settings above are written in dotted notation for brevity; in the config.yaml file each dotted path corresponds to nested YAML keys.
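As an illustration of the dotted notation, a fragment of config.yaml with nested keys might look like the following. The key paths come from the settings listed above; the values shown (interface name, directory paths) are hypothetical examples only.

```yaml
# Illustrative fragment only; values are hypothetical examples.
spec:
  deployment:
    ha: false
  networking:
    interface: ens3
  override-config:
    localprovisioner:
      basePath: /data/local-provisioner
    core-registry-path: /data/registry
    etcd-path: /data/etcd
```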


Multiple Interface Support

The controller supports multiple interfaces, which can be set in the config.yaml file during initialization. The selected interface is used for all connections related to the controller application and Kubernetes. By default, the primary interface is used.

   interface: ens3
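To pick a value such as ens3 for the interface setting, you can list the interface names available on the host. This is a general Linux sketch (it assumes sysfs is mounted at /sys), not a step from the original procedure:

```shell
# List network interface names on a Linux host; choose one of these
# (e.g. ens3) as the value for spec.networking.interface.
ls /sys/class/net
```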

In cases where complete interface isolation is needed, be aware that a few pods that use host networking, such as the monitoring/metrics pods, do not adhere to the interface selection at the Kubernetes layer and still use the default interface. If complete traffic isolation on the interface is needed, we recommend adding a routing rule like the one below on your controller and clusters, where <destination-CIDR> is a placeholder for the subnet whose traffic should use the secondary interface.

ip route add <destination-CIDR> dev <secondary-interface>

Hosted DNS support

In the absence of DNS servers in the infrastructure and cluster environment, the managed clusters may not have a way to communicate with the self hosted controller. In this case, the self hosted controller can also host its own DNS server and propagate the records to the cluster.

Hosted DNS can be enabled in the config.yaml using the flag below.

  global.enable_hosted_dns_server: true

To access the controller UI from your local machine, add an /etc/hosts entry pointing the console FQDN to your controller IP. You can see the IP address after running the sudo radm init --config config.yaml command.

Example: 192.0.2.10 console.<>
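As a sketch, the /etc/hosts line can be composed from the controller IP and console FQDN like this. Both values here are hypothetical placeholders; substitute your own, and note the FQDN must match the one configured for the controller:

```shell
# Hypothetical values; replace with your controller IP and console FQDN.
CONTROLLER_IP="192.0.2.10"
CONSOLE_FQDN="console.example.com"
HOSTS_LINE="${CONTROLLER_IP} ${CONSOLE_FQDN}"
echo "${HOSTS_LINE}"
# To apply it (requires root):
#   echo "${HOSTS_LINE}" | sudo tee -a /etc/hosts
```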

While provisioning clusters, add the -dns-server flag to the conjurer command

tar -xjf conjurer-linux-amd64.tar.bz2 && sudo ./conjurer -edge-name="test"     \
-passphrase-file="passphrase.txt" -creds-file="credentials.pem" -dns-server <controller-IP>

Start the controller

  • Start initializing the controller using the command shown below.
sudo radm init --config config.yaml
  • Once initialization is complete, copy the admin config file to the home directory to access the kube controller API from CLI.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) -R $HOME/.kube
  • Install the dependencies which are required for the controller.
sudo radm dependency --config config.yaml
  • Install the controller application.
sudo radm application --config config.yaml

This will bring up all the controller services.

Note: It takes approximately 20-30 minutes for all pods to be up and ready.

  • Before proceeding further, confirm that all pods are in the Running state using kubectl.
kubectl get pods -A
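As a sketch of a quick readiness check, the following awk filter prints any pod whose STATUS column is not Running or Completed. It is demonstrated here against a small sample of `kubectl get pods -A --no-headers` output (the pod names below are hypothetical); in practice, pipe the real command's output into the same awk:

```shell
# Sample of `kubectl get pods -A --no-headers` output, for illustration only;
# columns are NAMESPACE NAME READY STATUS RESTARTS AGE.
sample_output() {
  cat <<'EOF'
rafay-core   relay-agent-0     1/1   Running   0   12m
rafay-core   core-bootstrap-0  0/1   Pending   0   12m
EOF
}
# Print the name (column 2) of pods whose STATUS (column 4) is not
# Running/Completed; an empty result means everything is ready.
sample_output | awk '$4 != "Running" && $4 != "Completed" {print $2}'
```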

Access Console

  • Try accessing the Controller UI at https://console.<> to verify that the installation was successful.

  • You should see a screen similar to the image below when you access the console.

Note: To access the controller UI from your local machine, add an /etc/hosts entry pointing the console FQDN to your controller IP. You can see the IP address after running the sudo radm init --config config.yaml command. Example: 192.0.2.10 console.<>.


  • Click the “Sign Up” link to create the first organization of the self hosted controller
  • Register a new account for the organization as shown in the screenshot below


  • Log in to this organization with the newly registered account on the login screen

Upload Cluster Dependencies

Run the following command to upload dependencies for Kubernetes cluster provisioning to the controller.

sudo radm cluster --config config.yaml