
Add Storage

GlusterFS storage can be expanded on provisioned upstream Kubernetes clusters. The expansion can be done in one of two ways:

  1. By adding a new storage device to an existing storage node, or
  2. By adding a brand new node with the Storage role.

Add New Storage Device To Node

  • Add the additional raw/unformatted volume/disk to your VMs or instances
  • Sign in to the Web Console
  • Select the cluster and click on Nodes
  • Click "Add Storage Device" on the node and wait for the newly added storage devices to be discovered by the controller
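Before adding the device in the console, you can optionally confirm on the node itself that the new disk is attached and unformatted. A minimal check (the device name sdb is only an example):

lsblk -f   # a new raw disk (e.g. sdb) should appear with an empty FSTYPE column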

Add Storage Device

  • Select the new storage device onto which the GlusterFS storage will be expanded.

Add Storage Device

  • Click on "Save" to start expanding the storage. It will take couple minutes for the additional storage to be added.

  • Once the expansion completes, the new storage device will be shown under the node
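If you have shell access to a storage node, you can optionally verify the expansion with the Gluster CLI. A minimal sketch, assuming the gluster client is available on the storage node:

gluster peer status   # all storage nodes should show as connected
gluster volume info   # the brick list should now include the new device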

Add Storage Device


Add New Node With Storage Role

  • Create the required VMs or instances with a raw/unformatted volume attached to them.
  • Sign in to the Web Console
  • Select the cluster and click on Nodes
  • Click "Add Node" and follow the Node Installation Instructions to install the bootstrap agent on the VM

Add Storage Node

  • Approve the node.

Users can optionally enable "auto approval" for new nodes joining the cluster. Enable the auto approval toggle on the cluster configuration page as shown in the screenshot below.

Auto Approve Nodes

  • Click on "Configure" to configure the node with the Storage Role and select the storage device and Save the settings.

Configure Storage Nodes

  • Click on "Provision" and confirm to start adding this node as a new node with storage role to the existing cluster

Confirm Storage Nodes

  • It will take a couple of minutes for the additional node to be provisioned.

Provision Storage Nodes

  • Once the node is provisioned, it will join the cluster with the Storage role

Complete Storage Nodes
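You can also confirm from the command line that the node has joined. A minimal sketch, assuming kubectl access to the cluster:

kubectl get nodes -o wide   # the new storage node should appear with STATUS Ready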


Add rook/ceph Storage

Optionally, users can also add rook-ceph storage to new nodes, or to an existing node by introducing a new device, on Day 2 after the upstream cluster is provisioned. rook-ceph version 1.8.1 is supported.

  • Select the required upstream cluster and click the Nodes tab
  • Select the required node and click the down arrow to expand the Nodes section
  • Click the Edit Labels button
  • Click Create Key-Value Label to add two labels for rook-ceph storage, as shown in the image below (an equivalent command-line sketch follows this list)
  • Key role and Value storage
  • Key storage and Value rook-ceph
  • Click Save
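The labels above are applied through the console. Assuming they correspond directly to Kubernetes node labels, an equivalent command-line sketch would be (the node name is a placeholder):

kubectl label node <node-name> role=storage storage=rook-ceph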

Provision Storage Nodes

  • The newly added storage labels appear as shown below

Storage Labels

Once these labels are added, the user must update the cluster blueprint to default-upstream to deploy the rook-ceph storage to the cluster. Refer to Update Blueprint to learn more about the blueprint update process.

Important

Storage expansion or update may fail when adding a new device or a new storage node. To recover from this, restart the rook-operator pod.
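A minimal sketch of the restart, assuming Rook's default namespace and operator deployment names (rook-ceph and rook-ceph-operator); substitute the names from your installation:

kubectl -n rook-ceph rollout restart deployment rook-ceph-operator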


Delete rook/ceph Storage

To delete a rook/ceph storage node from an HA or No-HA rook storage cluster, perform the following steps:

  • Run the following kubectl command
kubectl -n <ns-name> scale deployment rook-ceph-osd-<ID> --replicas=0
  • Exec into the rook-ceph-tools pod and run the following command (a combined sketch of both steps follows)
ceph osd down osd.<ID>
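Put together, and assuming Rook's default namespace and that the toolbox runs as deploy/rook-ceph-tools (both are assumptions; substitute your own names and OSD ID), the two steps might look like this for OSD ID 0:

kubectl -n rook-ceph scale deployment rook-ceph-osd-0 --replicas=0
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd down osd.0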

Caution

Deleting a storage node from a No-HA cluster could leave the Ceph cluster inoperable.

  • To re-use the same node in a different cluster, run ./conjurer -d on the node. The conjurer script removes all the encrypted LVMs on the disk, along with some Ceph directories created during provisioning.

Recommendation

Reboot the node after running the conjurer script.


Add OpenEBS Storage

OpenEBS local storage can be added using a Blueprint Add-On.

Create an OpenEBS Add-On

  • In the console, select a cluster.
  • In the menu, select Infrastructure > Add-Ons.
  • Select New Add-On, then select Create New Add-On from Catalog.
  • In the search field, enter OpenEBS, then select OpenEBS.
  • Click Create Add-On.
  • Enter a name for the Add-On, then select a Namespace.
  • Click Create.
  • Enter a version number. Optionally, enter a description.
  • Optionally, upload a values YAML file or choose to override values from a Git repository.
  • Click Save Changes.
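If you upload a values override, the available keys depend on the OpenEBS chart version in the catalog. A purely illustrative sketch (these keys are assumptions; verify them against your chart's values file):

# values.yaml (illustrative only; confirm keys against your chart version)
ndm:
  enabled: true
localprovisioner:
  enabled: true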

Add OpenEBS to a Blueprint

  • In the menu, select Infrastructure > Blueprints.
  • Create a new blueprint or select an existing blueprint. For an existing blueprint, you can create a new version.
  • Enter a version name. Optionally, enter a description.
  • To add OpenEBS, go to Add-Ons and click Add more.
  • Select the OpenEBS add-on from the drop-down list.
  • Select the OpenEBS version from the drop-down list.

OpenEBS Blueprint Add-on

  • Click Save Changes.
  • Update the cluster blueprint.
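Once the blueprint update is published to the cluster, you can verify the deployment from the command line. A minimal sketch, assuming the add-on was deployed to a namespace named openebs:

kubectl get pods -n openebs   # OpenEBS control-plane pods should be Running
kubectl get storageclass      # OpenEBS storage classes should be listed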