Part 4: Expand

What Will You Do

This is Part 4 of a multi-part, self-paced quick start exercise. In this part, you will attach additional raw storage devices to the storage nodes and see that the managed storage add-on automatically detects and provisions them as part of the Rook Ceph cluster.


Confirm Storage Device

First, you will confirm the storage devices attached to your storage nodes.

  • Open a shell within each storage node in the cluster
  • Execute the following command to list all block devices

lsblk -f

In the output below, you can see two 1 TB devices named 'sdb' and 'sdc'. Both already carry Ceph filesystems and are in use by the Ceph cluster.

NAME                                                  FSTYPE         LABEL           UUID                                   FSAVAIL FSUSE% MOUNTPOINT
loop0                                                 squashfs                                                                    0   100% /snap/core18/2566
loop1                                                 squashfs                                                                    0   100% /snap/core18/2620
loop2                                                 squashfs                                                                    0   100% /snap/core20/1593
loop3                                                 squashfs                                                                    0   100% /snap/core20/1634
loop4                                                 squashfs                                                                    0   100% /snap/oracle-cloud-agent/42
loop5                                                 squashfs                                                                    0   100% /snap/snapd/16292
loop6                                                 squashfs                                                                    0   100% /snap/snapd/17336
loop7                                                 squashfs                                                                    0   100% /snap/lxd/22753
loop8                                                 squashfs                                                                    0   100% /snap/oracle-cloud-agent/46
sda
├─sda1                                                ext4           cloudimg-rootfs a51db32c-3f36-498b-aee6-e521a5e37fef     31.2G    31% /
├─sda14
└─sda15                                               vfat           UEFI            71F6-4A03                                99.2M     5% /boot/efi
sdb                                                   LVM2_member                    F4lDhz-gIMt-Fedz-YNVy-sbCv-cftJ-lr0Kuz
└─ceph--2c9e9cef--dc22--4a3d--98bd--95442eb88f12-osd--block--2a43f3c0--b82a--4161--811d--c32ce977a8a4
                                                      crypto_LUKS                    68011ad8-cd63-492a-9304-1e4beef5b2dc
  └─0snLMb-bagi-nXTu-19oC-wdjs-6jkI-Oq6dYr            ceph_bluestore
sdc                                                   LVM2_member                    HAbZsX-PaPh-Qg93-CRgK-YFuH-XKz6-cfY3Vx
└─ceph--356391aa--2432--4715--9bb8--c32b61ac2513-osd--block--155a3b0f--0a62--40b9--a26c--f0d5bb32ce1f
                                                      crypto_LUKS                    8df7451a-4f76-410f-9ba1-5f228baa516e
  └─0Y8TC6-wf4Y-2tIo-N64s-vc4R-U7cT-nPA3DA            ceph_bluestore
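
If the full device tree is hard to scan, you can narrow the listing to just the disks of interest. A minimal sketch, assuming the device names from this environment ('sdb' and 'sdc'); adjust them to match your nodes:

# Show only the columns relevant to storage provisioning for the two data disks
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdb /dev/sdc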

Add Storage Device

Next, you will add a raw storage device to the storage nodes.

  • Add a raw storage device to the storage nodes
  • Open a shell within each storage node in the cluster
  • Execute the following command to list all block devices

lsblk -f

In the output below, you can see the newly added third 1 TB device, named 'sdd'. It has no filesystem yet.

NAME                                                  FSTYPE         LABEL           UUID                                   FSAVAIL FSUSE% MOUNTPOINT
loop0                                                 squashfs                                                                    0   100% /snap/core18/2566
loop1                                                 squashfs                                                                    0   100% /snap/core18/2620
loop2                                                 squashfs                                                                    0   100% /snap/core20/1593
loop3                                                 squashfs                                                                    0   100% /snap/core20/1634
loop4                                                 squashfs                                                                    0   100% /snap/oracle-cloud-agent/42
loop5                                                 squashfs                                                                    0   100% /snap/snapd/16292
loop6                                                 squashfs                                                                    0   100% /snap/snapd/17336
loop7                                                 squashfs                                                                    0   100% /snap/lxd/22753
loop8                                                 squashfs                                                                    0   100% /snap/oracle-cloud-agent/46
sda
├─sda1                                                ext4           cloudimg-rootfs a51db32c-3f36-498b-aee6-e521a5e37fef     31.3G    30% /
├─sda14
└─sda15                                               vfat           UEFI            71F6-4A03                                99.2M     5% /boot/efi
sdb                                                   LVM2_member                    F4lDhz-gIMt-Fedz-YNVy-sbCv-cftJ-lr0Kuz
└─ceph--2c9e9cef--dc22--4a3d--98bd--95442eb88f12-osd--block--2a43f3c0--b82a--4161--811d--c32ce977a8a4
                                                      crypto_LUKS                    68011ad8-cd63-492a-9304-1e4beef5b2dc
  └─0snLMb-bagi-nXTu-19oC-wdjs-6jkI-Oq6dYr            ceph_bluestore
sdc                                                   LVM2_member                    HAbZsX-PaPh-Qg93-CRgK-YFuH-XKz6-cfY3Vx
└─ceph--356391aa--2432--4715--9bb8--c32b61ac2513-osd--block--155a3b0f--0a62--40b9--a26c--f0d5bb32ce1f
                                                      crypto_LUKS                    8df7451a-4f76-410f-9ba1-5f228baa516e
  └─0Y8TC6-wf4Y-2tIo-N64s-vc4R-U7cT-nPA3DA            ceph_bluestore
sdd
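
Before the add-on consumes it, you can optionally confirm that the device is genuinely raw. Running wipefs with no options only reads the device; it prints any signatures it finds and prints nothing for a raw disk (the name 'sdd' is from this environment):

# Empty output means no filesystem, RAID, or partition-table signature is present
sudo wipefs /dev/sdd
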
  • Wait for the managed storage add-on to detect and provision the new device (this can take a few minutes), then execute the following command to list all block devices again

lsblk -f

You will see that the added device now has a Ceph filesystem, matching the layout of 'sdb' and 'sdc'.

NAME                                                  FSTYPE         LABEL           UUID                                   FSAVAIL FSUSE% MOUNTPOINT
loop0                                                 squashfs                                                                    0   100% /snap/core18/2566
loop1                                                 squashfs                                                                    0   100% /snap/core18/2620
loop2                                                 squashfs                                                                    0   100% /snap/core20/1593
loop3                                                 squashfs                                                                    0   100% /snap/core20/1634
loop4                                                 squashfs                                                                    0   100% /snap/oracle-cloud-agent/42
loop5                                                 squashfs                                                                    0   100% /snap/snapd/16292
loop6                                                 squashfs                                                                    0   100% /snap/snapd/17336
loop7                                                 squashfs                                                                    0   100% /snap/lxd/22753
loop8                                                 squashfs                                                                    0   100% /snap/oracle-cloud-agent/46
sda
├─sda1                                                ext4           cloudimg-rootfs a51db32c-3f36-498b-aee6-e521a5e37fef     31.3G    30% /
├─sda14
└─sda15                                               vfat           UEFI            71F6-4A03                                99.2M     5% /boot/efi
sdb                                                   LVM2_member                    F4lDhz-gIMt-Fedz-YNVy-sbCv-cftJ-lr0Kuz
└─ceph--2c9e9cef--dc22--4a3d--98bd--95442eb88f12-osd--block--2a43f3c0--b82a--4161--811d--c32ce977a8a4
                                                      crypto_LUKS                    68011ad8-cd63-492a-9304-1e4beef5b2dc
  └─0snLMb-bagi-nXTu-19oC-wdjs-6jkI-Oq6dYr            ceph_bluestore
sdc                                                   LVM2_member                    HAbZsX-PaPh-Qg93-CRgK-YFuH-XKz6-cfY3Vx
└─ceph--356391aa--2432--4715--9bb8--c32b61ac2513-osd--block--155a3b0f--0a62--40b9--a26c--f0d5bb32ce1f
                                                      crypto_LUKS                    8df7451a-4f76-410f-9ba1-5f228baa516e
  └─0Y8TC6-wf4Y-2tIo-N64s-vc4R-U7cT-nPA3DA            ceph_bluestore
sdd                                                   LVM2_member                    7qepv6-Clzj-23g3-ODkk-3hwA-CLip-ydgT1c
└─ceph--a9c9dfd9--1068--472d--b2e8--4948e7d35a61-osd--block--68616281--2820--4abe--8901--8dd5b42a8a49
                                                      crypto_LUKS
  └─Id2TgZ-13Eb-B9tv-syDS-Kqh7-DgAw-Unl9Tm            ceph_bluestore
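
You can also confirm that Rook created a new OSD for each added device. Assuming you have kubectl access to the cluster and that the add-on runs Rook in the 'rafay-infra' namespace (the same namespace used for the toolbox pod below), the OSD pods carry Rook's standard app=rook-ceph-osd label:

# The count of Running rook-ceph-osd pods should grow by one per added device
kubectl get pods -n rafay-infra -l app=rook-ceph-osd
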
Validate Add-on

  • In the console, navigate to your project
  • Select Infrastructure -> Clusters
  • Click the cluster name on the cluster card
  • Click the "Resources" tab
  • Select "Pods" in the left hand pane
  • Select "rafay-infra" from the "Namespace" dropdown
  • Enter "rook-ceph-tools" into the search box

Validate Add-on

  • Click the "Actions" button
  • Select "Shell and Logs"
  • Click the "Exec" icon to open a shell into the container
  • Enter the following command in the shell to check the status of the Ceph cluster
ceph status

You will see that the storage capacity of the cluster has increased to 3.0 TiB now that the Ceph cluster is using the newly added storage devices.
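
For a more detailed view, two other toolbox commands are useful: 'ceph df' breaks capacity down by pool, and 'ceph osd tree' shows where the new OSDs were placed under each storage node. If you prefer the command line to the console steps above, a rough equivalent, assuming the toolbox runs as the standard rook-ceph-tools deployment in the rafay-infra namespace, is:

# Run the same checks through the toolbox without opening an interactive shell
kubectl -n rafay-infra exec deploy/rook-ceph-tools -- ceph status
kubectl -n rafay-infra exec deploy/rook-ceph-tools -- ceph df
kubectl -n rafay-infra exec deploy/rook-ceph-tools -- ceph osd tree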

Recap

At this point, you have successfully expanded the storage capacity of your managed storage cluster.