Installing Ubuntu on a system with software RAID

I thought I’d document my experiences installing Ubuntu on a software RAID with large (>2TB) disks because, as usual, I couldn’t find anything that exactly described my situation.

I have a system with an ASUS P9X79 WS motherboard and two 3TB hard disks (ST3000DM001-1CH166). The system came configured with Windows 7 and the two disks configured as RAID1 (mirrored, so that data is duplicated across both disks) using the Intel Rapid Storage Technology (RST) that is already present in the BIOS (this can be accessed by holding down Ctrl-I while booting).

I naively thought this meant I could just pop the Ubuntu CD in, it would see the RAID1 array as a single disk, and I could install as usual but with the benefits of RAID.

As ever, things are more complicated. Intel’s RST RAID is “fake RAID”, so not entirely in hardware – it needs to use the CPU and so needs a complicit operating system with the relevant drivers installed. Therefore when I tried installing from the Ubuntu CD, it saw two separate disks, not a single RAID array.

It seems that Linux does have the drivers to work with RST, but everything I’ve read suggests that it’s not advisable to go down this route and that it’s safer and better supported to use Linux’s own software RAID.

Terminology and un-discombobulating…
GPT – this is the GUID Partition Table – a way of describing how the disk is split up and where the various bits start/stop. It’s the replacement for the old MBR partition table and is what you will need if you have disks over 2TB in size. The standard is part of UEFI (see below), but GPT can be used with BIOS systems that don’t support UEFI.

UEFI (aka EFI) – this is the replacement for the legacy BIOS. In the time-honoured tradition of creating the maximum possible confusion for anyone trying to understand computer technology, these three terms (UEFI, EFI and BIOS) are used interchangeably to refer to the same and/or different things. As I understand it, UEFI is the replacement for EFI, which in turn was meant to be the replacement for the BIOS. However, as most computer users call the thing that appears before the operating system the BIOS, UEFI is called a BIOS, and UEFI seems to be frequently shortened to EFI. Clear?

To add to the confusion, it seems that many systems can be run either in UEFI or BIOS (legacy) mode, so you need to check to see which yours is doing.

In order to do this, I had to go through the following steps:

1. Disable the existing software raid

  • Hold down Ctrl-I on boot
  • Select option 3, “Reset Disks to Non-RAID”; use the space bar to select both disks
  • Hit Enter.

2. Boot from the Ubuntu install CD to provide a linux environment

  • Go into the BIOS and select the UEFI option for the CD (you can tell if you’re in EFI mode because there will be a /sys/firmware/efi directory)
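A quick way to check which mode the live environment actually booted in is the directory test mentioned above, wrapped in a one-liner (a sketch; the wording of the messages is mine):

```shell
# The /sys/firmware/efi directory only exists when the kernel
# was booted via UEFI, so its presence tells us the boot mode.
if [ -d /sys/firmware/efi ]; then
    echo "Booted in UEFI mode"
else
    echo "Booted in legacy BIOS mode"
fi
```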

3. Partition the first disk

EFI system partition (ESP)
In order to decide what to do, you will first need to determine whether your motherboard uses the older BIOS (in which case you will need a ~1MB BIOS boot partition) or UEFI. My motherboard is an ASUS P9X79 WS, which uses UEFI. I therefore don’t need a BIOS boot partition, but it seems I do need an ESP.

It seems that the Windows install I had on my disk had created an ESP starting at 1049kB and ending at 106MB. I’m guessing that this is so there is some room for a BIOS boot partition, so even though I don’t need this, I’ll adhere to that convention.

Swap
The old rule used to be that you used twice as much swap space as you had RAM. For modern systems with oodles of RAM (I have 16GB), the recommendation seems to be RAM/2, so I’m going for an 8GB swap partition.

root /data
In order to make backups easier, I want to separate my data from the OS, so I’m therefore going to create a root (/) partition to hold all the OS files, and a data partition to hold everything that needs to be backed up externally.

It seems that 100GB should be ample for the basic OS stuff, if all data is kept on a separate partition.

The disk utility program (or cat /proc/partitions, lshw -class disk, or parted -l) tells me that my disks are /dev/sda and /dev/sdb.

The commands I used to create the partitions on the first disk were:

# start parted as root
sudo parted /dev/sda
# create GPT partition table
mklabel gpt
# create ESP partition
mkpart primary fat32 1049kB 106MB
# set its boot flag (marks it as the ESP on a GPT disk)
set 1 boot on
# create swap partition (a little under 8GB, as the first 106MB is taken by the ESP)
mkpart primary linux-swap 106MB 8GB
# create 100GB ext4 system partition
mkpart primary ext4 8GB 108GB
# create data partition (get the disk size by hitting "p" in parted to print disk info)
mkpart primary ext4 108GB 3001GB

With this thinking and a 3TB disk, I’ve ended up with the following partition strategy:

  • sda1: ESP (fat32), 1049kB to 106MB
  • sda2: swap, 106MB to 8GB
  • sda3: root (ext4), 8GB to 108GB
  • sda4: data (ext4), 108GB to 3001GB

4. Copy the partition table from the first disk (sda) to the second (sdb)

In a sane world, you could use sgdisk to do this. However, it isn’t on the Ubuntu installation CD, and “apt-get install gdisk” (the package that provides sgdisk) couldn’t find it, so I just repeated the commands I’d run on sda for sdb.

NB: In a truly sane world, you would be able to use parted to do that, but let’s not go completely crazy.
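For reference, if sgdisk is available, the copy can be done in two commands (a sketch; note sgdisk’s counter-intuitive argument order, where the destination disk goes after -R and the source disk comes last):

```shell
# Replicate the GPT partition table of /dev/sda onto /dev/sdb.
# -R takes the DESTINATION; the SOURCE is the final argument,
# so double-check the order before running this.
sudo sgdisk -R /dev/sdb /dev/sda
# Give the copied table fresh random GUIDs so the two disks
# don't end up with identical identifiers.
sudo sgdisk -G /dev/sdb
```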

5. Set up the software raid


To do this I used mdadm, which I installed with:
sudo apt-get install mdadm
(just select the local option when you get the postfix stuff)
To make sure you’ve got the md drivers installed, check you get some output from “cat /proc/mdstat”, e.g.:

unused devices: <none>

The next step is to create the volumes. It doesn’t make sense to have swap on RAID (in a mirrored configuration), nor does it seem that putting the boot partition on RAID is possible, so I’ll just create volumes for the root (md0) and data (md1) partitions.

The following commands should do that, using partition 3 on each disk for md0 and partition 4 for md1 (NB: cat /proc/mdstat will give you information about the progress of creating the arrays):
sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda3 /dev/sdb3
sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sda4 /dev/sdb4

(This last step will take several hours…).

Not sure if this step is required, but add the information about the arrays to the mdadm.conf file:
sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf
(The above wouldn’t work due to permission errors – the >> redirection runs as the ordinary user, not as root – so I had to edit the file manually with vi.)
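An alternative that avoids editing the file by hand is to pipe through tee (a sketch; the trick is that tee, unlike the shell’s redirection, actually runs under sudo):

```shell
# The ">>" redirection is performed by the unprivileged shell, so it
# fails on a root-owned file. Piping into "sudo tee -a" appends the
# scan output with root privileges instead.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```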

6. Create the filesystems
With the RAID working, now is the time to create the filesystems on the disks and set up the swap partitions:

sudo mkfs.vfat -F 32 /dev/sda1
sudo mkfs.vfat -F 32 /dev/sdb1
sudo mkfs.ext4 /dev/md0
sudo mkfs.ext4 /dev/md1

# Set the swap partitions on the two disks
sudo mkswap /dev/sda2
sudo mkswap /dev/sdb2

7. Run the installer and select the devices
I ran the installer from the desktop, chose the “something else” option when it asked about partitioning, and then “Changed” things to use md0 as / and md1 as /home.
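For reference, the installed system’s /etc/fstab should end up with entries along these lines (a sketch only; the installer normally writes UUID= identifiers from blkid rather than the device names shown here):

```
# <file system>  <mount point>  <type>  <options>          <dump> <pass>
/dev/md0         /              ext4    errors=remount-ro  0      1
/dev/md1         /home          ext4    defaults           0      2
/dev/sda1        /boot/efi      vfat    umask=0077         0      1
/dev/sda2        none           swap    sw                 0      0
/dev/sdb2        none           swap    sw                 0      0
```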

The installer ran, but failed with:
Executing ‘grub-install dummy’ failed.
This is a fatal error.

The installer then crashed trying to send a bug report. Typical.

I therefore needed to install grub manually.

I tried with the default grub that came with Ubuntu but couldn’t get it to work (e.g. errors like “grub-mkimage: error: invalid ELF header.”).

I therefore had to build grub from source and install it myself.

# Set things up on the disk
sudo mount -t vfat -o rw,users /dev/sda1 /mnt
# load dm-mod module – not sure if required but…
sudo modprobe dm-mod
sudo mkdir -p /mnt/efi/grub
#sudo mkdir -p /mnt/EFI/ubuntu

Download the grub source code.

Download all packages required for the build:
sudo apt-get install bison libopts25 libselinux1-dev autogen m4 autoconf help2man libopts25-dev flex libfont-freetype-perl automake autotools-dev libfreetype6-dev texinfo

(NB: This gave errors, e.g. “debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable”.)

The grub build failed initially, so I ran the apt-get command a couple of times. This fixed some problems, but it seems it also requires libtool, so I installed that too.

unpack grub, cd into it, then:
./configure --with-platform=efi --target=x86_64 --program-prefix=""
make > make.log 2>&1 &

The modules then reside in the grub-core directory, so we cd into that and run:
../grub-mkimage -d . -o grub.efi -O x86_64-efi -p "" `find *.mod | xargs | sed -e 's/\.mod//g'`

The find command includes all the modules we need. This creates the grub.efi executable, so we copy this and all the module/lst files to the disk:

sudo cp grub.efi *.mod *.lst /mnt/efi/grub

Now we need to copy the original Ubuntu grub config file to that directory, so mount the root filesystem and copy the file from there:
sudo mount /dev/md0 /media
sudo cp /media/boot/grub/grub.cfg /mnt/efi/grub

Finally we need to tell the UEFI to use our new grub as the boot loader:

# unmount directory
sudo umount /mnt

# install efibootmgr
sudo apt-get install efibootmgr

# load efivars module
sudo modprobe efivars

sudo efibootmgr --create --gpt --disk /dev/sda --part 1 --write-signature --label "GRUB2" --loader "\\efi\\grub\\grub.efi"

This printed that the BootOrder was 0000,0001 and that Boot0000* was grub2.
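To double-check what was written, the firmware’s boot entries can be listed (a sketch; -v is efibootmgr’s verbose flag, which shows the device and loader path behind each entry):

```shell
# List all UEFI boot entries, the current BootOrder, and the
# disk/partition/loader each entry points at.
sudo efibootmgr -v
```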

Rebooted and prayed…

At this point the system rebooted, but just kept going to the UEFI (BIOS) screen. Pressing F8 only gave an option to enter setup.

I therefore rebooted from the installation disk and installed boot-repair. This fixed the problem, but I still got no boot menu, so I had to enable CSM compatibility mode to get the machine to boot.

