We have a server with a bunch of hard disks that need to be combined into one large filesystem. We also want some redundancy, so we'll need RAID; in our case, RAID 6. In this walk-through we're using a CentOS 7 server with 7 physical hard disks dedicated to storage, and the end result will be roughly 50 TB of disk space minus a little overhead.
But first, a little context: the server actually has 8 physical hard disks. The first disk is allocated to the operating system (CentOS 7), while the remaining 7 are used to build an LVM RAID 6 volume (the equivalent of 5 disks for data and 2 for parity).
See the Reddit topic for a running commentary on this.
Throughout this walk-through, we’ll be using the name “storageVolumeLMV” for the logical volume, and “storage” for the volume group.
For more information about LVM and its terminology, see the Red Hat "Logical Volume Manager Administration" guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/. For more information about LVM RAID specifically, see: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/raid_volumes.
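Before creating anything, it's worth double-checking which device names actually belong to the empty data disks; /dev/sdb through /dev/sdh is simply what this particular server presented, and yours may differ. A quick way to check is:

[root@server ~]# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

Any disk that shows no partitions and no mount points is a candidate for the array; the disk carrying / and /boot (here, /dev/sda) must be left out.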
First we need to tell LVM which physical hard disks we intend to use:
[root@server ~]# pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.
  Physical volume "/dev/sdd" successfully created.
  Physical volume "/dev/sde" successfully created.
  Physical volume "/dev/sdf" successfully created.
  Physical volume "/dev/sdg" successfully created.
  Physical volume "/dev/sdh" successfully created.
Run “pvscan” to show what we have so far:
[root@server ~]# pvscan
  PV /dev/sda3   VG centos   lvm2 [94.07 GiB / 4.00 MiB free]
  PV /dev/sdc                lvm2 [<9.10 TiB]
  PV /dev/sdb                lvm2 [<9.10 TiB]
  PV /dev/sdf                lvm2 [<9.10 TiB]
  PV /dev/sde                lvm2 [<9.10 TiB]
  PV /dev/sdd                lvm2 [<9.10 TiB]
  PV /dev/sdh                lvm2 [<9.10 TiB]
  PV /dev/sdg                lvm2 [<9.10 TiB]
Notice the first entry in the list above (/dev/sda3). That's the partition holding the operating system on the primary hard disk, so we'll be leaving it alone.
Now we create the new volume group:
[root@server ~]# vgcreate storage /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
  Volume group "storage" successfully created
Again, let's see what we've got so far:
[root@server ~]# vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  centos    1   2   0 wz--n-  94.07g   4.00m
  storage   7   0   0 wz--n- <63.67t <63.67t
We can see in the above output that we have a new volume group.
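If you want more detail than the one-line summary, vgdisplay in verbose mode should show the extent size, extent counts and the physical volumes backing the new group (output omitted here):

[root@server ~]# vgdisplay -v storage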
Now we need to create the new logical volume as a RAID 6:
[root@server ~]# lvcreate --type raid6 -i 5 -l 100%FREE -n storageVolumeLMV storage
  Using default stripesize 64.00 KiB.
  Rounding size (16690681 extents) down to stripe boundary size (16690680 extents)
  Logical volume "storageVolumeLMV" created.
The above shows the result of creating a new RAID 6 logical volume: "-i 5" specifies 5 stripes (data disks), leaving the other 2 disks' worth of space for parity; "-l 100%FREE" uses all available space in the volume group; "-n storageVolumeLMV" names the new logical volume; and the final argument, "storage", is the volume group it's created in.
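If you're curious about how LVM has laid this out, listing the hidden volumes as well should reveal the per-disk sub-volumes (the rimage_N data images and rmeta_N metadata volumes) that make up the array; the exact names and layout will depend on your setup:

[root@server ~]# lvs -a -o name,segtype,devices storage

This is all handled by LVM itself through the kernel's device-mapper RAID target, so there's no separate mdadm array to manage.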
Let's have a look again at what we have so far:
[root@server ~]# lvs
  LV               VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root             centos  -wi-ao----  93.13g
  swap             centos  -wi-ao---- 956.00m
  storageVolumeLMV storage rwi-a-r--- <45.48t                                    0.00
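Note the Cpy%Sync column: the new array starts at 0.00% and builds its parity in the background. You can keep an eye on the initial synchronisation with something like the following (the sync_percent field assumes a reasonably recent lvm2, such as the one shipped with CentOS 7):

[root@server ~]# lvs -a -o name,sync_percent storage

The volume is usable while this runs, but it doesn't have full redundancy until the sync reaches 100%.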
Now we need to create a filesystem on the new logical volume. We'll use XFS:
[root@server ~]# mkfs.xfs -d su=64k,sw=5 /dev/storage/storageVolumeLMV
mkfs.xfs: Specified data stripe width 512 is not the same as the volume stripe width 640
meta-data=/dev/storage/storageVolumeLMV isize=512    agcount=46, agsize=268435440 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=12208035840, imaxpct=5
         =                       sunit=16     swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
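About the warning on the first line: it simply means the su/sw values given on the command line don't match the stripe width the device itself advertises. If you'd rather derive the geometry than guess, the stripe settings can be read back from LVM with something like the command below (note that for RAID volumes the reported stripe count may include the parity devices), or you can drop the -d option entirely and let mkfs.xfs pick the geometry up from the device:

[root@server ~]# lvs -o+stripes,stripe_size storage/storageVolumeLMV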
Then go ahead and mount it. I've added the following line to /etc/fstab:
/dev/storage/storageVolumeLMV /mnt/storage xfs defaults 0 0
And make sure the mount point exists:
[root@server ~]# mkdir /mnt/storage
You should be left with something like this:
[root@server ~]# mount /mnt/storage
[root@server ~]# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/centos-root                94G  1.1G   93G   2% /
devtmpfs                              7.5G     0  7.5G   0% /dev
tmpfs                                 7.6G     0  7.6G   0% /dev/shm
tmpfs                                 7.6G   11M  7.6G   1% /run
tmpfs                                 7.6G     0  7.6G   0% /sys/fs/cgroup
/dev/sda2                             947M  146M  802M  16% /boot
tmpfs                                 1.6G     0  1.6G   0% /run/user/0
tmpfs                                 1.6G     0  1.6G   0% /run/user/1000
/dev/mapper/storage-storageVolumeLMV   46T   35M   46T   1% /mnt/storage
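With the volume mounted, it's worth knowing how to check on the array afterwards. The following should report the sync action and health status of the RAID volume, and a scrub can be kicked off with lvchange (both again assuming the lvm2 shipped with CentOS 7; see the Red Hat LVM RAID documentation linked above for details):

[root@server ~]# lvs -o+raid_sync_action,lv_health_status storage
[root@server ~]# lvchange --syncaction check storage/storageVolumeLMV

The "check" action reads the whole array and reports any discrepancies without repairing them; use "repair" to have them corrected.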
Thanks, this helped a lot and everything seems to work. However, there doesn't seem to be anything connecting it to mdadm. Shouldn't there be? Isn't that what needs to manage the RAID arrays?