Part 2: Blueprint

What Will You Do

This is Part 2 of a multi-part, self-paced quick start exercise. In this part, you will configure a blueprint which contains the managed storage add-on and deploy it to a cluster.


Create Blueprint

First, you will create a blueprint which contains the managed storage add-on.

  • In the console, navigate to your project
  • Select Infrastructure -> Blueprints
  • Click "New Blueprint"
  • Enter a "Name" for the blueprint
  • Click "Save"

New Blueprint

  • Enter a "version name"
  • Select "Managed Storage" under the "Managed System Add-Ons" section
  • Click "Save Changes"

New Blueprint

Note

Admins can configure and enable storage either at cluster provisioning time or after the cluster has been provisioned and is in active use.


Apply Blueprint

Next, you will apply the blueprint to the cluster.

  • In the console, navigate to your project
  • Select Infrastructure -> Clusters
  • Click the gear icon on the cluster card
  • Select "Update Blueprint"
  • Select the newly created blueprint and version
  • Click "Save and Publish"

Apply Blueprint

The blueprint will begin to be applied to the cluster.

Apply Blueprint

After a few minutes, the cluster will be updated to the new blueprint.

Apply Blueprint

  • Click "Exit"

Validate Add-On

Now, you will validate that the Rook Ceph managed system add-on is running on the cluster.

  • In the console, navigate to your project
  • Select Infrastructure -> Clusters
  • Click the cluster name on the cluster card
  • Click the "Resources" tab
  • Select "Pods" in the left hand pane
  • Select "rafay-infra" from the "Namespace" dropdown
  • Enter "rook-ceph-tools" into the search box

Validate Add-on
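
As an alternative to searching in the console, you can locate the toolbox pod from the command line by filtering on its app label. The label value below follows the upstream Rook convention and is an assumption here; if it does not match in your environment, filter by pod name instead.

# List the Rook Ceph toolbox pod (label value assumed from upstream Rook)
kubectl get pods -n rafay-infra -l app=rook-ceph-tools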

  • Click the "Actions" button
  • Select "Shell and Logs"
  • Click the "Exec" icon to open a shell into the container
  • Enter the following command in the shell to check the status of the Ceph cluster
ceph status

Validate Add-on
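
The same check, along with a few other common Ceph health commands, can also be run without the console by exec-ing into the toolbox with kubectl. This is a minimal sketch; it assumes the toolbox runs as a Deployment named rook-ceph-tools in the rafay-infra namespace, which matches the pod you located above but may differ in your environment.

# Ceph cluster status (equivalent to running "ceph status" in the console shell)
kubectl -n rafay-infra exec -it deploy/rook-ceph-tools -- ceph status

# Optional additional checks: per-OSD status and overall capacity usage
kubectl -n rafay-infra exec -it deploy/rook-ceph-tools -- ceph osd status
kubectl -n rafay-infra exec -it deploy/rook-ceph-tools -- ceph df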

Now, you will confirm that the nodes in the cluster have their storage devices controlled by Ceph.

  • Open a shell within each node in the cluster
  • Execute the following command to list all block devices
lsblk -f

The output of the command should show that the raw devices viewed previously are now backed by Ceph LVM volumes.

NAME                                       FSTYPE        LABEL           UUID                                   FSAVAIL FSUSE% MOUNTPOINT
loop0                                      squashfs                                                                   0   100% /snap/core18/2538
loop1                                      squashfs                                                                   0   100% /snap/core20/1593
loop2                                      squashfs                                                                   0   100% /snap/lxd/22753
loop3                                      squashfs                                                                   0   100% /snap/snapd/16292
loop4                                      squashfs                                                                   0   100% /snap/oracle-cloud-a
sda
├─sda1                                     ext4          cloudimg-rootfs a51db32c-3f36-498b-aee6-e521a5e37fef     33.6G    25% /
├─sda14
└─sda15                                    vfat          UEFI            71F6-4A03                                99.2M     5% /boot/efi
sdb                                        LVM2_member                   F4lDhz-gIMt-Fedz-YNVy-sbCv-cftJ-lr0Kuz
└─ceph--2c9e9cef--dc22--4a3d--98bd--95442eb88f12-osd--block--2a43f3c0--b82a--4161--811d--c32ce977a8a4
                                           crypto_LUKS                   68011ad8-cd63-492a-9304-1e4beef5b2dc
  └─0snLMb-bagi-nXTu-19oC-wdjs-6jkI-Oq6dYr ceph_bluestor
sdc                                        LVM2_member                   HAbZsX-PaPh-Qg93-CRgK-YFuH-XKz6-cfY3Vx
└─ceph--356391aa--2432--4715--9bb8--c32b61ac2513-osd--block--155a3b0f--0a62--40b9--a26c--f0d5bb32ce1f
                                           crypto_LUKS                   8df7451a-4f76-410f-9ba1-5f228baa516e
  └─0Y8TC6-wf4Y-2tIo-N64s-vc4R-U7cT-nPA3DA ceph_bluestor
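
If you would rather check all nodes from a single terminal, a minimal sketch assuming SSH access to the nodes is shown below; the hostnames are placeholders, so replace them with your own.

# Run lsblk on each node over SSH (node1, node2, node3 are hypothetical hostnames)
for host in node1 node2 node3; do
  echo "--- $host ---"
  ssh "$host" lsblk -f
done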

Now, you will view the storageclasses created by Rook Ceph in the cluster.

  • In the console, navigate to your project
  • Select Infrastructure -> Clusters
  • Click "kubectl" on the cluster card
  • Enter the following command
kubectl get storageclasses -A

You will see the three Rook Ceph storageclasses that were created. Each class maps to a particular type of storage being exposed (block, file, and object).

Validate Add-on
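
To optionally confirm that provisioning works end to end, you can create a small test PVC against one of the storage classes. The sketch below assumes a block storage class named rook-ceph-block; substitute whichever class name the previous command actually listed. If the class uses WaitForFirstConsumer volume binding, the claim will stay Pending until a pod consumes it.

# Create a 1Gi test claim against the (assumed) block storage class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-block-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
EOF

# The claim should reach the Bound state once Ceph provisions the volume
kubectl get pvc ceph-block-test

# Clean up the test claim
kubectl delete pvc ceph-block-test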


Recap

At this point, you have configured a blueprint with the Rook Ceph managed system add-on and applied it to the cluster.