Intel VROC RAID on Ubuntu

This document describes how to create and manage an Intel VROC RAID array on Ubuntu with the mdadm utility (it should also work on other Linux distributions).

For setting up Intel VROC RAID in the BIOS on Ubuntu, see VROC Ubuntu Setup.

For creating software RAID on Ubuntu, see Ubuntu RAID.

Background

The on-premises machine has 2 NVMe SSDs and 8 SATA hard drives (HDDs), and it also ships with an in-box hardware-assisted RAID controller (Intel VROC) on the Intel CPU, which is expected to hold overall advantages over software RAID.

I would like to use these 8 HDDs (sda, sdb, ..., sdh) for long-term data storage while balancing redundancy and performance, so RAID 5 (striping with parity) comes to mind.

To leverage the power of Intel VROC on Ubuntu (Linux), you also need the mdadm command line tool to manage Intel VROC, which supports RAID 0, RAID 1, RAID 5 and RAID 10.

note

In my understanding, Intel VROC registers itself in the system through the same interface that mdadm uses, so the mdadm software can operate it. Running the following command shows that mdadm is using Intel VROC:

$ sudo mdadm --detail-platform

Platform : Intel(R) Virtual RAID on CPU
Version : 8.0.3.1002
RAID Levels : raid0 raid1 raid10 raid5
Chunk Sizes : 4k 8k 16k 32k 64k 128k
2TB volumes : supported
2TB disks : supported
Max Disks : 8
Max Volumes : 2 per array, 8 per controller
I/O Controller : /sys/devices/pci0000:00/0000:00:17.0 (SATA)
Port7 : /dev/sdh (ZR909K07)
Port6 : /dev/sdg (ZV70BN24)
Port3 : /dev/sdd (ZV70BD3T)
Port4 : /dev/sde (ZR909Q89)
Port1 : /dev/sdb (ZRT0S2FM)
Port5 : /dev/sdf (ZR9099MM)
Port2 : /dev/sdc (ZR909AGN)
Port0 : /dev/sda (ZV70BMH9)

Platform : Intel(R) Virtual RAID on CPU
Version : 8.0.3.1002
RAID Levels : raid0 raid1 raid10
Chunk Sizes : 4k 8k 16k 32k 64k 128k
2TB volumes : supported
2TB disks : supported
Max Disks : 96
Max Volumes : 2 per array, 24 per controller
3rd party NVMe : supported
I/O Controller : /sys/devices/pci0000:8d/0000:8d:00.5 (VMD)
NVMe under VMD : /dev/nvme0n1 (633FC084FCVK)
NVMe under VMD : /dev/nvme1n1 (633FC0DEFCVK)
I/O Controller : /sys/devices/pci0000:6f/0000:6f:00.5 (VMD)
I/O Controller : /sys/devices/pci0000:51/0000:51:00.5 (VMD)
info

Install Ubuntu Server on RAID: the Ubuntu Server image ships with the mdadm utilities and VMD drivers (which enable Intel VROC functionality), so it is quite convenient to create the RAID 1 on the 2 SSDs either in the BIOS stage (for Intel VROC only) or in the storage layout step during OS installation (software RAID), and then install Ubuntu Server on the RAID 1.

After creating the RAID 1 via Intel VROC in the BIOS, the Ubuntu Server installer can detect the RAID created by VROC in the storage layout step.

If you skip creating the RAID in the BIOS and create it during OS installation instead, remember to add -e imsm when using mdadm to create the RAID (you can open a shell from the installer); otherwise the RAID is software based and does not use VROC.
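For example, a minimal sketch of creating a VROC-backed RAID 1 from the installer shell, assuming the two NVMe drives appear as /dev/nvme0n1 and /dev/nvme1n1 as on this machine:

# Create the IMSM container first, then the RAID 1 volume inside it
sudo mdadm --create /dev/md/imsm0 /dev/nvme0n1 /dev/nvme1n1 -n 2 -e imsm
sudo mdadm --create /dev/md/md0 /dev/md/imsm0 -l 1 -n 2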

Install Ubuntu Desktop on RAID: the Ubuntu Desktop image does not ship the mdadm tool, so it is nearly impossible to create a RAID array and install Ubuntu Desktop on it (however, Install Ubuntu 20.04 desktop with RAID 1 and LVM on machine with UEFI BIOS from Stack Overflow seems to have succeeded).

Set up RAID 5 array

Here, I use the 8 disks /dev/sda, /dev/sdb, ..., /dev/sdh to create a RAID 5 array and mount it for use.

Create RAID array

When creating a RAID array, Intel VROC differs from software RAID in that an additional container needs to be created first. Inside the container, metadata is written onto the drives so that the Intel VROC controller can recognize them.

  1. Create a RAID container with Intel IMSM metadata, where the total number of drives is 8 (-n 8) and the metadata format is IMSM (-e imsm):

sudo mdadm --create /dev/md/imsm0 /dev/sd[a-h] -n 8 -e imsm
  2. Then, create a RAID 5 array in the /dev/md/imsm0 container using all 8 drives:
sudo mdadm --create /dev/md/md0 /dev/md/imsm0 -l 5 -n 8
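Creating a RAID 5 array triggers an initial resynchronization, which can take hours on large HDDs; the array remains usable during this time. You can monitor the progress:

cat /proc/mdstat
# or refresh automatically every 2 seconds
watch cat /proc/mdstat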

Mount the RAID array for use

After you create the RAID array in the steps above, all partitions and data are erased from the individual disks.

The RAID array is treated as a logical drive now.

  1. Create an ext4 filesystem on the RAID array:
sudo mkfs.ext4 -F /dev/md/md0
  2. Mount the RAID array:
sudo mkdir -p /mnt/md0

sudo mount /dev/md/md0 /mnt/md0
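To confirm the array is mounted and check its usable capacity:

df -h /mnt/md0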

Save RAID array configuration

To make sure the RAID array is reassembled and mounted automatically after reboot, we have to add the necessary information to /etc/mdadm/mdadm.conf and /etc/fstab.

  1. Scan the active arrays and append them to the /etc/mdadm/mdadm.conf file:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
  2. Update the initramfs, so the array is available at early boot:
sudo update-initramfs -u
  3. Add a mount entry to /etc/fstab. Since md device names can change across reboots (e.g. to md126), you can use UUID=xxxx instead of /dev/md0 for robustness (see the lookup sketch after this list):
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
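A minimal sketch of the UUID variant; the UUID below is a placeholder, use the value blkid prints for your array:

sudo blkid /dev/md/md0
echo 'UUID=<your-uuid> /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab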

Remove RAID Array

[Optional] Unmount the array from the filesystem

Unmount the array from the filesystem if it is mounted:

sudo umount /dev/md/md0

Stop RAID array and container

# Stop the RAID array first
sudo mdadm --stop /dev/md/md0
# Then stop the RAID container
sudo mdadm --stop /dev/md/imsm0

# Stop all arrays and containers
sudo mdadm --stop --scan

Remove RAID metadata

Remove the RAID metadata from each drive to reset it to a normal disk:

# For a single drive
sudo mdadm --zero-superblock /dev/sda
# Or for all member drives at once
sudo mdadm --zero-superblock /dev/sd[a-h]
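To verify the metadata is gone, --examine should no longer report IMSM metadata on the drives:

sudo mdadm --examine /dev/sd[a-h]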

[Optional] Remove RAID configuration

Remove the mount entry for the array, if it exists, from /etc/fstab:

/etc/fstab
sudo nano /etc/fstab

Also remove the array definition, if it exists, from the /etc/mdadm/mdadm.conf file:

/etc/mdadm/mdadm.conf
sudo nano /etc/mdadm/mdadm.conf

Manage RAID Array with mdadm

Find all RAID arrays

$ cat /proc/mdstat

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 nvme0n1[1] nvme1n1[0]
3800741888 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive nvme0n1[1](S) nvme1n1[0](S)
10402 blocks super external:imsm

unused devices: <none>
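In this output, md127 is the inactive IMSM container holding the VROC metadata, and md126 is the actual RAID 1 volume built from the two NVMe drives inside that container. You can inspect the volume directly:

sudo mdadm --detail /dev/md126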

Query information on a RAID array

sudo mdadm --detail /dev/md0
sudo mdadm --query /dev/md0

Query information on a physical disk drive

sudo mdadm --query /dev/sda
sudo mdadm --examine /dev/sda

Stop a RAID array

sudo mdadm --stop /dev/md0
# Stop all arrays
sudo mdadm --stop --scan

Starting a RAID Array

# This works if the array is defined in the configuration `/etc/mdadm/mdadm.conf` file.
sudo mdadm --assemble --scan
sudo mdadm --assemble /dev/md0
# If the array is not defined in the `/etc/mdadm/mdadm.conf` file but the RAID metadata is still on the drives
sudo mdadm --assemble /dev/md0 /dev/sda /dev/sdb
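For a VROC (IMSM) array, assemble the container first and then start the member arrays inside it (a sketch, assuming the container was created as /dev/md/imsm0 from /dev/sd[a-h]):

sudo mdadm --assemble /dev/md/imsm0 /dev/sd[a-h]
# Incrementally start the arrays contained in the container
sudo mdadm -I /dev/md/imsm0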

Adding spare devices to a RAID Array

sudo mdadm /dev/md0 --add /dev/sde
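Note that for VROC (IMSM) arrays, hot spares are managed at the container level, so the disk is typically added to the container rather than to the volume (assuming the container is /dev/md/imsm0):

sudo mdadm --add /dev/md/imsm0 /dev/sde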
After disks join a VROC array, lsblk shows them with the isw_raid_member filesystem type:

$ lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
loop0 squashfs 4.0 0 100% /snap/core20/1974
loop1 squashfs 4.0 0 100% /snap/lxd/24322
loop2 squashfs 4.0 0 100% /snap/snapd/19457
sda isw_raid_member 1.3.00
sdb isw_raid_member 1.3.00
sdc isw_raid_member 1.3.00
sdd isw_raid_member 1.3.00
sde isw_raid_member 1.3.00
sdf isw_raid_member 1.3.00
sdg isw_raid_member 1.3.00
sdh isw_raid_member 1.3.00
nvme0n1 isw_raid_member 1.3.00
├─md126
│ ├─md126p1 vfat FAT32 292B-DB66 1G 1% /boot/efi
│ └─md126p2 ext4 1.0 0f58386c-334d-4877-8051-b855bae37fb0 3.3T 0% /
└─md127
nvme1n1 isw_raid_member 1.3.00
├─md126
│ ├─md126p1 vfat FAT32 292B-DB66 1G 1% /boot/efi
│ └─md126p2 ext4 1.0 0f58386c-334d-4877-8051-b855bae37fb0 3.3T 0% /
└─md127
You can also inspect the partition table of a member disk:

sudo fdisk -l /dev/sda

Delete partitions and data on a disk

# Zero the first sector (MBR and partition table) of the disk
sudo dd if=/dev/zero of=/dev/sda bs=512 count=1
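Alternatively, wipefs removes all filesystem, RAID, and partition-table signatures that the kernel knows about on the disk:

sudo wipefs -a /dev/sda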