Tag Archives: raid

RAID mania in my workstation

Recently I got a hold of 6 x Intel DC S3610 SSD drives that I wanted to play around with, and see what performance I could get out of them on my workstation PC at home.

To give you a little background, I decided to write a bit about my setup and use case. My workstation is mainly used for graphics work and map making for a project of mine, www.iskort.is. Creating those maps takes a lot of resources in every category; I always need more RAM, more CPU power, and especially more disk performance and space. For example, I was recently working on new 3D map data files for Iceland: the uncompressed dataset was 6TB of 3D files, and the final 3D dataset I saved out of this raw data was approx. 400GB of Lidar-like data. My workstation is pretty beefy, but some projects I have to run on a VM on my server, which has dual 6-core CPUs, 288GB of RAM, and around 20TB of storage.

My workstation is based on a Gigabyte GA-X99-UD4 motherboard, with 64GB of RAM and an Intel i7 5930K CPU (3.5 GHz, overclocked to 4.4 GHz). I have a GeForce 1080 graphics card and an older GeForce 980 card in the system as well, since some of the workflows I use utilize the CUDA cores on both GPUs. For the OS and temp files I have been using a 512GB Samsung 950 Pro NVMe M.2 drive. I had a Mushkin Scorpion Deluxe 480GB PCIe drive for my working dataset, but that good old card recently died on me.

So back to what I have now: 6 x Intel DC S3610 SSDs! I wanted to find the best configuration for those drives, figure out which RAID levels would get me the best performance, and also see whether I should try out the Storage Spaces feature in Windows 10. I don't have a dedicated hardware RAID card in my setup, and for a long time I have used the Intel Rapid Storage Technology chipset/software RAID to do 2-drive stripes or mirrors. So far that has worked great for me. But now, with 6 drives, I needed to see where my bottleneck would be: would it be the drives, the chipset, or even my old (but pretty powerful when overclocked) CPU?

Initially I decided to test RAID 0 and see how linearly performance would scale when adding more than 2 drives. I used fio with multiple files and threads to make sure I wouldn't cap my results on a single file or a single CPU thread.
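To give an idea of the kind of job I mean, here is a rough sketch of a random read run; the drive letter, folder, job count and queue depth below are just illustrative placeholders, not the exact parameters from my runs:

    :: Random read test spread over 8 jobs/files on the test volume (R:) - note the escaped colon fio needs on Windows
    fio --name=randread --ioengine=windowsaio --direct=1 --thread ^
        --rw=randread --bs=4k --iodepth=32 --numjobs=8 --size=4G ^
        --runtime=60 --time_based --group_reporting --directory=R\:\fio

Swapping --rw=randread for randwrite and stepping --bs through 4k-128k reproduces the kind of sweep behind the graphs below.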


(click on images for larger version)

The graph shows random read IOPS. Here I saw a clear benefit from having 2 drives in RAID 0, and a little more from 3 drives. Four or more drives resulted in worse performance, except at 4k and 8k block sizes.

The same trend showed when doing random writes: 3 drives gave some performance boost over 2 drives, but 4-6 drives resulted in worse performance. I then went into the BIOS, changed the SATA ports to AHCI mode instead of RAID mode, and tested software RAID in Windows Disk Management. I tested the "Simple" Storage Spaces profile as well, as it is also based on striping, like RAID 0.
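For reference, the striped Disk Management volume can also be built from the command line with diskpart; a rough sketch for three of the SSDs (the disk numbers and drive letter are just examples, not taken from my actual system):

    diskpart
    DISKPART> select disk 1
    DISKPART> convert dynamic
    DISKPART> select disk 2
    DISKPART> convert dynamic
    DISKPART> select disk 3
    DISKPART> convert dynamic
    DISKPART> create volume stripe disk=1,2,3
    DISKPART> format fs=ntfs quick
    DISKPART> assign letter=R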

Here I got more performance and more linear scaling. Scaling up to 6 drives showed at block sizes up to 16k, but little was gained at 32k beyond 4 drives, and at 128k block size 3-6 drives gave the same performance. Storage Spaces with the "Simple" profile and 6 drives was a little behind the Windows software RAID at 4-16k block sizes, but the difference was minimal at larger block sizes.

The same goes for random writes: more linear scaling at 4-8k, but at 32k there was almost no gain beyond 3 drives.

Looking at the MB/sec graphs, I noticed there was an obvious bottleneck in the system at approx. 1600MB/sec read and 1250MB/sec write. No matter the block size or number of drives, I could not get more throughput out of the system. My conclusion is that the X99 chipset is at its maximum there; with SSDs like the Intel S3610, more than 3 drives simply saturate the chipset's maximum throughput.

When looking at the average numbers below, it's clear that software RAID in Windows outperforms the Intel chipset RAID, especially at the lower block sizes, where my system could deliver approx. 230,000 IOPS at 4k. At that rate my CPU was 100% busy doing those IOs, and a more powerful CPU would probably have gotten me some more IOPS.

Since I didn't want to run RAID 0 in production, I tested different RAID options, as well as the resiliency options in Windows 10 Storage Spaces, to see what would give me the best performance along with a reasonable level of protection in case of a drive failure.

When using the X99 chipset RAID levels, I was able to do RAID 5 with 6 drives, but with RAID 10 I was only able to use 4 drives. I also tested a mirror with 2 drives, and 3 sets of mirrors striped with RAID 0 in Windows Disk Management to emulate RAID 10 with 6 drives. With Windows Storage Spaces I created a 6-disk "Parity" drive and a 3-disk Parity drive, a 2-way mirror and a 3-way mirror on 6 drives, a 2-way mirror on 2 drives, and a 3-way mirror on 5 drives.

The usable space varies greatly between those options (listed below), and of course when using fewer drives I would have the option of creating additional volumes.

Storage Spaces options:

  Simple (6 drives)            2.61 TB
  Parity (3 drives)            888 GB
  Parity (6 drives)            1.73 TB
  2-way mirror (6 drives)      1.29 TB
  3-way mirror (6 drives)      886 GB
  2-way mirror (2 drives)      447 GB
  3-way mirror (5 drives)      741 GB

Chipset options:

  RAID 5 (6 drives)            2.2 TB
  RAID 10 (4 drives)           894 GB
  RAID 1 (2 drives)            447 GB
  RAID 10 (3 x 2-drive mirrors, striped in Windows)   1.3 TB

On those different RAID levels I got pretty good read performance overall, but as expected the 2-drive mirror did not hold up against the RAID 5 or RAID 10 options.

When looking at writes, RAID 5 took a huge write penalty, so that option did not look very promising.

Turning to Windows Storage Spaces, I had more options to test.

Read performance was better than with the Intel RAID options on the same number of drives.

On write performance, both Parity options were terrible, approx. 10 times worse than the Intel RAID 5 option. The 2-way mirror with 6 drives looked good; it delivered less than half the 4k write performance of the Simple profile with no redundancy, but was still better than the 6-drive RAID 0 option on the Intel chipset RAID.

Looking at MB/sec values, most options capped out at around 1550MB/sec, just as the different RAID 0 options did before.

Looking at write MB/sec, it's obvious that the RAID 5 options had the worst performance, and the 6-drive 2-way mirror came in just above the mixed RAID 1/0 mode, where Intel RAID 1 mirrors were bundled together with a Windows RAID 0 stripe.

Looking again at the average numbers:

Windows Storage Spaces showed better performance when using all 6 drives.

Again, the 6-drive 2-way mirror in Storage Spaces gave me the best average write performance.

Overall, the performance of Windows Storage Spaces was a nice surprise. Previously I hadn't given it much thought, as I had been using the Intel chipset RAID for 2-disk configurations with almost a 2x performance boost over a single drive. Storage Spaces also allowed me to bundle all 6 disks into one pool and then carve out volumes with different protection, or no protection at all, and that is exactly what I ended up doing.

The OS stays on the Samsung 950 Pro NVMe M.2 drive. I then created a 1.3TB volume with 2-way mirror protection for my workflow data, where I work on the mapping data files, and a 500GB volume with the "Simple" profile, where I placed all temp files, the Photoshop scratch disk and my mapping software's temp folder. I also have a 4TB Seagate SSHD "hybrid" drive (ST4000DX002) for archives and other stuff, like my drone flight videos and images from my DSLR camera.
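For anyone who wants to script the same layout, here is a rough PowerShell sketch of one pool with a mirrored volume and a simple volume. The pool name, friendly names and drive letters are made up for the example; the sizes match what I described above:

    # Pool all SSDs that are available for pooling (names and drive letters below are examples)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "SSDPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

    # 1.3TB two-way mirror for the working dataset
    New-Volume -StoragePoolFriendlyName "SSDPool" -FriendlyName "Work" -FileSystem NTFS `
        -ResiliencySettingName Mirror -Size 1.3TB -DriveLetter W

    # 500GB simple (striped, no redundancy) volume for temp and scratch files
    New-Volume -StoragePoolFriendlyName "SSDPool" -FriendlyName "Scratch" -FileSystem NTFS `
        -ResiliencySettingName Simple -Size 500GB -DriveLetter S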

I hope this blog post was useful, especially if you were planning to do RAID on the Intel chipset with more than 2 SSDs. My suggestion is to use Windows Storage Spaces instead.

Share if you like this blogpost or write a comment below.

FreeNAS 10.1 as a VM in vSphere 6.0

I wanted to write a blog post about my FreeNAS installation. I've been testing FreeNAS 9.3 and find it well suited for my home lab.

Having read several posts saying not to run FreeNAS as a VM, other blogs saying "you can, but shouldn't", and some saying "yes you can, just make sure…", I wanted to try it for myself and find out whether I could make a stable setup in my home lab.

To start with, I had been running FreeNAS 9.3 on one of my physical hosts, booting from a USB stick. The setup was pretty stable, but I believe the cheap USB stick I used for booting died, as after a reboot yesterday the BIOS could not find the USB drive to boot from.

That gave me a reason to make some changes. I wanted to test FreeNAS 10.1, which is available as a nightly build. I also didn't want to run FreeNAS on one of my physical hosts, as the host is a Dell R710 server with dual Xeon X5675 3GHz 6-core CPUs, 288GB of RAM, 6 x 2TB disks and a Dell H700 controller. The machine would be total overkill just to serve FreeNAS storage to my other Dell R710 VMware host with the same specs.

So, the main issue with running FreeNAS, which uses ZFS, is the fact that my server has the Dell H700 controller (LSI 2108 based), and that controller cannot run in IT mode (IT mode is a "non-RAID" mode, where each HDD is visible to the OS without creating disk volumes on the RAID controller).

ZFS wants to see the raw disks without a RAID controller in between, and some controllers can be flashed with an IT-mode firmware or support it natively, like the Dell H200. I didn't want to experiment with cross-flashing my controller with an original LSI firmware, and I'm not sure that would enable IT mode on the LSI 2108 chip anyway.

I decided to go ahead and find a solution I could use. The recommended approach I found was to present the SAS controller via PCI pass-through to the FreeNAS VM. But since I wanted to use all 6 of my 2TB drives for the ZFS system, and my R710 server only has 6 3.5″ HDD bays, I had to find a way to create a datastore for the FreeNAS VM configuration files and boot disk. Pass-through turned out to be a no-go for me, as I only have one controller in the server and can't both pass it through and keep a datastore for the FreeNAS VM on it. I carried on, though, and went with the option of using RDMs to present the disks to the FreeNAS VM.

What I did was carve out a 50GB volume from one of the 2TB disks when I created the virtual disks on the H700 controller.


I then created a secondary volume from the remaining space on that disk. Note that you have to create the other virtual disks with the same size as the first one, as FreeNAS won't be able to put different-sized disks into the same ZFS RAID volume. For each virtual disk, I set the read policy to "No Read Ahead" and the write policy to "Write Through".

I still boot ESXi from a USB stick, so this 50GB volume is free for running the FreeNAS VM and hosting the ESXi system logs.

The next step is to install ESXi 6.0 on the new USB stick. That process is straightforward and I don't want to spend this blog post on the whole ESXi installation, but I wanted to share a screenshot showing how ESXi sees the volumes and the USB stick.

When the ESXi installation is finished and the basic settings have been set for the host, I create a datastore on the 50GB volume and name it "FreeNAS-Boot". I install FreeNAS 10.1 on this datastore like a normal VM.
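Since this datastore is also meant to hold the ESXi system logs, as mentioned earlier, the log directory can be pointed at it from the ESXi shell; a quick sketch (the folder name is just an example):

    # Redirect ESXi system logs to a folder on the new datastore, then reload the syslog service
    esxcli system syslog config set --logdir=/vmfs/volumes/FreeNAS-Boot/logs
    esxcli system syslog reload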

I give this VM 2 vCPUs, 64GB of RAM and a 20GB HDD. I choose to reserve all guest memory for this VM, as the datastore does not have space to hold the VM swap file.

At this stage, the FreeNAS VM has only the boot disk, and I install the system on this device.

When the initial installation is done and network settings, DNS and such have been configured, I shut down the FreeNAS VM and add the storage network adapters.

For the storage network, I have prepared 4 iSCSI-enabled VMkernel ports on the host.

Each vSwitch has a VMkernel port and a standard VM port group.

In my hosts I have 4 x 1Gbit onboard Broadcom/QLogic 5709 based NICs for iSCSI, and a PCI Express dual-port Intel I350 based adapter for management and VM traffic.
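For one of the four paths, the host-side setup looks roughly like this from the ESXi shell; the switch, port group, VMkernel and uplink names plus the IP address are placeholders for the example, and the same pattern repeats for the other three NICs:

    # One vSwitch per iSCSI path, with an uplink, a VMkernel port group and a VM port group (names/IP are examples)
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-VMK-1
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-VM-1
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-VMK-1
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.1.11 --netmask=255.255.255.0 --type=static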

Now I add 4 network adapters to the VM, each on a separate iSCSI port group.

And on the FreeNAS side I set the IP address and subnet mask accordingly.

Now I have the network connections set up, and the next step is to get those virtual disks from the H700 controller up to the FreeNAS VM. Before I start, I shut down the FreeNAS VM.

There are 3 ways to go about this:

  1. Create a datastore on each of the volumes, and present a virtual HDD to the FreeNAS VM
  2. Raw device map (RDM) each volume up to the FreeNAS VM
  3. Use DirectPath I/O and present the H700 controller up to the FreeNAS VM

I would have liked to go with option 3, but as I have the 50GB datastore on the controller, it's not free for DirectPath. To see how this option looks, I went into the DirectPath I/O settings.

When I select OK, I get a warning:

And even though I select "Yes" and reboot, the setting reverts to not having the H700 controller in pass-through mode. If I wanted to experiment with this option, I would need a separate controller for the FreeNAS VM, but the R710 server I have has a fixed backplane for 6 drives, so I would have to add a SAS controller with external connections and attach some external SATA or SAS disks.

The next option I wanted to try was adding the volumes as raw device mapped disks to the VM. This is disabled by default for local disks.

VMware’s KB 1017530 shows how you prepare the disks so they can be added as RDM disks using vmkfstools.

In my case this was the list of devices and commands to create the correct .vmdk pointer files.

The command "ls -l /vmfs/devices/disks/" gave me a list of devices:

This resulted in the following commands to create the .vmdk pointer files.
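The device IDs are of course specific to my system, but the pointer files are created in this general form; the naa.* ID and folder below are placeholders, and -z maps the device in physical compatibility mode as described in the KB article (-r would create a virtual compatibility RDM instead):

    # Placeholder device ID and paths - repeat the vmkfstools command for each of the volumes listed above
    mkdir /vmfs/volumes/FreeNAS-Boot/RDMs
    vmkfstools -z /vmfs/devices/disks/naa.6d4ae520XXXXXXXXXXXXXXXXXXXXXXXX \
        /vmfs/volumes/FreeNAS-Boot/RDMs/2TB-disk1-rdm.vmdk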

Now I could add the disks to the FreeNAS VM as an "Existing Hard Disk".

Now a big portion of my coworkers will yell at me: "you told us never to use RDMs!" And from what I read on the FreeNAS forums, I would be hanged for this. But remember, this is a lab installation in my home, and for the most part I want to try things here and see whether they work or give me trouble…

Anyhow, the RDMs are set up and FreeNAS now has 6 drives to play with.

I decided to create a RAID 10 style volume. The FreeNAS GUI is not very clear about how to create RAID 10, but the process is to select "Mirror" and then lay out 3 striped sets of 2-disk mirrors.

This gives me the best performance and 5.3TB of disk space for my VMs.
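For reference, the same layout built from the command line would look something like the sketch below; the pool and device names are just examples, since FreeNAS builds the identical structure (a stripe of three 2-disk mirrors) through the GUI:

    # Example pool/device names - a stripe of three mirrored pairs, i.e. RAID 10 style
    zpool create tank mirror da1 da2 mirror da3 da4 mirror da5 da6
    zpool status tank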

The next thing is to set up iSCSI and have it listen on the 4 interfaces I have set up. I'll write a Part 2 of this entry covering that at a later time.

Hope you find this post useful and if so, share.