
Storage in the home lab.

Home Labs in general

When asking my colleagues what to run as a storage platform in my home lab, I got an honest question from fellow blogger and vExpert Rasmus Haslund (@haslund):

What are your requirements, challenges and constraints?
My answer: “Well, I want all the features and the best performance, but I have little to no budget!”

This could easily apply to a production setup where you face the challenge of providing a stable service level on a limited budget for external storage. So if you work for a small or medium company looking for a storage solution for your virtual workloads, read on, and hopefully you can apply the solution described in this blog post to your own installation.

The challenge

As a vExpert, blogger and enthusiast for all sorts of storage and virtualization solutions, I find it necessary to have a lab at home to test and evaluate different solutions. I also run several VMs for my home network that I have to take care of, and I have to answer to my son and wife if I screw up!

For quite some time the lab gave me limited flexibility, and to maintain some level of service for my home network I had to find a better solution.

My son has a Minecraft server that needs to be up, especially in the evenings, and my wife's idea of an SLA for her e-mail and picture library is that 100% uptime is “normal”! So it's tough ground: maintain that level of service while keeping the flexibility to test and run ad-hoc workloads.

In my basement there is a storage space, and after I ran a network cable down there from my apartment on the 2nd floor, I could start up more hardware without my family being disturbed by noise and cables running all over my desk. Down there I can maintain a stable setup for my home network and have some extra hardware to play around with when I need to try something out.

When I got the chance to repurpose some servers from work, I decided to redesign the home lab. It had been running on a single ESXi white-box host with 1 x Intel i7 3770K CPU and 32GB RAM, and could surely benefit from more CPU and RAM resources.

Time to set out some requirements and figure out the challenges.

The goals

  1. Maintain a reasonable level of uptime and performance for my home network.
  2. Have available disk space and resources to set up a nested ESXi environment for testing different setups and solutions without exposing the home network to risk.
  3. Have a storage solution accessible by my 2 ESXi hosts.
  4. Minimize heat generation and electricity costs for running the home network, but still have the ability to spin up more workloads for testing in the lab when needed.

The hardware

The servers I got for the lab are pretty massive!

3 x Dell PowerEdge R710, each with dual X5675 3.0GHz CPUs and 288GB of RAM. Each server has 4 x 1Gbit onboard network ports, 1 x dual-port 1Gbit NIC, and a Dell H700 SAS controller (LSI-based).

The solution

When looking for a storage solution I decided to use one of the R710 machines as an iSCSI target device, as it had 6 x 3.5” drive bays. There I could place the 6 x 2TB SATA drives I previously had in my white-box server. This R710 would become the shared storage for the 2 ESXi hosts, as well as a proxy server for my Veeam Backup installation, a Minecraft server for my son and a PLEX media server for my home entertainment system. (All workloads that had been running on my wife's desktop for some time, much to her enjoyment, as you can imagine.) One ESXi host would run my home network workloads, with the option to power on the second ESXi host for lab testing.

I looked at several options, both Linux- and Windows-based, virtual and non-virtual, that would let me run the iSCSI NAS workload alongside the Veeam proxy, PLEX and Minecraft services. The setup I found most appealing for testing the different RAID levels was a non-virtualized, Windows-based Starwind Virtual SAN solution.

The main reason for running the workload in a non-virtualized Windows installation was that it enabled me to test different IO and cache policies on the physical volume used as an iSCSI target. On native Windows I could use the LSI MegaRAID Storage Manager to create and destroy volumes without having to reboot the server.

At a later stage I might run ESXi on this host as well, reducing the footprint to 2 physical R710 machines using Starwind's 2-node cluster setup.

Features of the Starwind SAN solution that I found interesting

Main Product page and Free Product Page

There are several features in the Starwind software that I found extremely cool, and the simple setup and configuration process is truly remarkable. It makes testing different configurations fast and easy.

To name a few features that got my attention while testing the software, which other users could benefit from both in lab testing and in production workloads:

  • Use a defined amount of RAM as cache for each defined iSCSI device.

This allows me to define the amount of RAM assigned to the NAS storage role, keeping the rest of the RAM available to other workloads on the server. It also lets me give different devices and iSCSI targets different amounts of cache depending on the workload type. Keep in mind that if you assign many GBs of RAM as write cache in a production setup, make sure you have a UPS so that all cached writes can be committed to disk on power failure!
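That UPS warning can be made concrete with a back-of-envelope calculation: how long the target machine must stay powered to destage a full write cache. The numbers below are illustrative assumptions, not Starwind specifics.

```python
# Rough sketch: seconds of UPS runtime needed to flush a full write-back
# cache to disk. Cache size and disk throughput are example assumptions.

def destage_seconds(cache_gb: float, disk_write_mb_s: float) -> float:
    """Seconds needed to commit cache_gb of dirty data at disk_write_mb_s."""
    return cache_gb * 1024 / disk_write_mb_s

# e.g. a 10 GB write cache in front of a RAID set sustaining ~400 MB/s
print(round(destage_seconds(10, 400), 1))  # 25.6 seconds of runtime needed
```

In other words, even a modest cache needs the UPS to carry the server for half a minute or more, on top of the time the OS itself needs to shut down cleanly.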

  • Create a RAM based disk device.

Using this super-fast iSCSI target is great for testing and for deploying temporary workloads in the lab. I plan to experiment with this feature more, but keep in mind this is in memory, so data is not written to any persistent storage! Non-persistent VDI disks (linked clones) come to mind, or classroom VMs could use this feature to give a great end-user experience.

  • Log-structured file system when thin-provisioning the storage device.
    This feature turns the “all writes are random” situation of mixed virtual workloads into sequential writes on the underlying storage. A whitepaper (https://www.starwindsoftware.com/whitepapers/eliminating-the-io-blender-by-jon-toigo.pdf) by Jon Toigo explains this in great detail, but this feature boosts the benefits of thin provisioning to a whole new level!
  • Publish a physical disk directly as an iSCSI target.
    This feature caught my eye, and I still have to investigate its pros and cons.
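The log-structured idea from the list above can be sketched in a few lines: random block writes are appended sequentially to a log, while an index remembers where the latest copy of each logical block lives. This is purely conceptual, not Starwind's actual on-disk format.

```python
# Toy illustration of a log-structured store: random writes from mixed
# VM workloads become sequential appends on the underlying disk.

class LogStructuredStore:
    def __init__(self):
        self.log = []        # append-only log of (lba, data), physically sequential
        self.index = {}      # lba -> position of the latest copy in the log

    def write(self, lba: int, data: bytes):
        self.index[lba] = len(self.log)   # new version supersedes the old one
        self.log.append((lba, data))      # sequential append, never in-place

    def read(self, lba: int) -> bytes:
        return self.log[self.index[lba]][1]

store = LogStructuredStore()
for lba in (7, 2, 7, 9):                  # "random" writes from the VMs
    store.write(lba, f"v{len(store.log)}".encode())
print(store.read(7))  # latest copy of block 7
```

The trade-off, hinted at later in this post, is that the log must be scanned or verified on startup, which is why log-structured targets can take longer to come online after a reboot.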

The network design

To give out a clear picture of my setup, I made the following diagrams.

Layer 1 Diagram

Picture 1: Cabling layout

  • 2 x 1Gbit network interfaces are connected from each ESXi host to the iSCSI NAS host.
  • 2 x 1Gbit network interfaces are used for vMotion and replication.

Layer 2-3 Diagram

Picture 2: Layer 2-3 diagram

The diagram shows the networking layout of the 2 iSCSI networks. A different subnet is used for each physical adapter assigned to iSCSI, providing active-active paths to the iSCSI target machine.
The path selection policy is set to “Round Robin” for link load balancing.
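For reference, the Round Robin policy can be set per device from the ESXi shell. This is a sketch: the naa device ID below is a placeholder for your own iSCSI LUN, which you can look up first with the list command.

```shell
# ESXi shell only -- these commands configure the host itself and are
# not runnable on a regular workstation. naa.xxxxxxxxxxxxxxxx is a
# placeholder for your own iSCSI device ID.

# List devices and their current path selection policy
esxcli storage nmp device list

# Set the Round Robin PSP on one iSCSI device
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Optional: switch paths every IO instead of the default 1000 IOs,
# which often balances 2 x 1Gbit links more evenly
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1
```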

The vMotion network between the hosts is bound to 2 physical network adapters on a single subnet.

Storage design

For testing purposes, I decided to install Windows 2012 directly on a 2-disk mirror, leaving the 4 extra drive slots free to test different RAID levels and drive types. This let me run the LSI MegaRAID Manager software and change settings on the volumes, saving the reboot time when changing RAID levels or drive types.

I had 4 x 2TB 7.2K SATA drives and 4 x 600GB 15K SAS drives to test.

Different RAID levels and drive types

First I tested the different RAID levels on both types of drives, and ran FIO tests locally on the created volume.

Different RAID levels

It caught my eye that with the SATA drives, the performance gain from RAID 10 to RAID 0 was minimal, while the SAS drives gained hugely running RAID 0 vs RAID 10. Later I plan to test a 6 x 2TB SATA drive RAID 10, and that's most likely the configuration I'll end up using for my lab setup.
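A quick theoretical model helps frame these results. The per-drive IOPS figures below (~175 for a 15K SAS drive, ~80 for a 7.2K SATA drive) are common rule-of-thumb assumptions, not measurements from this lab, and the model only gives ceilings; it does not explain the measured asymmetry by itself.

```python
# Back-of-envelope random-IOPS ceilings: RAID 0 stripes across all
# spindles, while RAID 10 halves write IOPS because every write lands
# on two mirrored drives. Per-drive figures are assumptions.

def raid_iops(n_drives: int, drive_iops: int, level: str) -> dict:
    """Theoretical random read/write IOPS for a RAID 0 or RAID 10 set."""
    read = n_drives * drive_iops
    write = read if level == "raid0" else read // 2
    return {"read": read, "write": write}

print(raid_iops(4, 175, "raid0"))   # 4 x 15K SAS, RAID 0
print(raid_iops(4, 175, "raid10"))  # 4 x 15K SAS, RAID 10
print(raid_iops(4, 80, "raid10"))   # 4 x 7.2K SATA, RAID 10
```

With slow SATA spindles, the per-drive ceiling is low enough that other factors (controller cache, access patterns) mask the RAID-level difference, which would be consistent with the small gain observed there.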

For the remainder of the performance tests, I ran the RAID 10 setup on the 4 x 15K drives. The main goal was to find out whether the different deployment options in the Starwind SAN software made any measurable difference, and to see how it performed against the native Windows 2012R2 iSCSI target.

CrystalDiskMark tests

The first test was done with CrystalDiskMark, measuring MB/sec.

CrystalDiskMark MB/sec
CrystalDiskMark IOPS

The tests show that in every configuration, the Starwind SAN software outperforms the Windows 2012R2 built-in iSCSI target by far. The only tests where the Windows iSCSI target came close were sequential reads and writes, and I believe the limiting factors were the single-threaded process and the use of a single network connection between the 2 physical machines.

All the random read and write tests showed huge benefits with the Starwind solution. CrystalDiskMark is a simple disk benchmark, and it does not allow you to change the fixed 4K block size or go beyond a queue depth of 32.
The H700 controller on the iSCSI target machine has a queue depth of 975, so to utilize the 2 x 1Gbit network connections I moved from CrystalDiskMark to a more customizable test tool, FIO.

To create a baseline and get maximum performance figures without the limitation of my 2 x 1Gbit network connections between hosts, I ran all tests both locally on the iSCSI target machine and from a remote VM. For the local runs, I mapped a set of iSCSI targets as drives on the Windows iSCSI target machine, and an identical set of targets on my ESXi host.

The FIO test setup.

Each Starwind iSCSI target configured with 10GB Memory Cache

The VM runs on an ESXi 6.0 host, connected by 2 x 1Gbit network cards, each configured on a separate subnet, with the Round Robin PSP selected.

FIO Windows IO engine settings:
  • Random read/write: 33/66
  • Block size: 64K
  • Queue depth: 975
  • 4 x 15GB jobs, 4 files each
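The settings above translate roughly into a fio job file like the following. This is a sketch rather than the exact file used in the tests; the drive letter and directory are placeholders for the iSCSI-mapped volumes.

```ini
; Approximate fio job file for the test above (sketch, not the original).
; Requires fio with the Windows AIO engine.
[global]
ioengine=windowsaio
rw=randrw
rwmixread=33          ; ~33% reads / ~66% writes
bs=64k
iodepth=975           ; match the H700 controller queue depth
direct=1
size=15g              ; 15GB per job
nrfiles=4             ; 4 files each
directory=E\:\fio     ; placeholder: iSCSI target mapped as drive E:

[iscsi-mixed]
numjobs=4             ; 4 concurrent jobs
```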

FIO MB/sec

FIO IOPS

Direct = FIO Run directly on iSCSI target machine disk volume
Flat = Starwind iSCSI Target with Flat provisioned Image file
LSFS = Starwind iSCSI Target with Thin provisioned disk using LSFS
LSFS Dedup = Starwind iSCSI Target with Thin provisioned disk using LSFS and Deduplication enabled
Physical Disk = Starwind iSCSI Target from physical disk

The direct tests showed how much performance I could get from raw disk access. As I ran the tests, I got a clear picture of the different deployment options in the Starwind SAN software, and my findings showed that the thin-provisioned disk using LSFS was the fastest option.

While testing deduplication, IOPS dropped somewhat compared to the plain LSFS option. I also noticed a slight (5-7%) increase in CPU load on the iSCSI target machine while running the tests. Also keep in mind that each 1TB of deduplicated storage requires 3.5GB of RAM. In my setup this was not an issue, but if you have a limited amount of RAM you should take note of this fact.
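That 3.5GB-per-TB ratio is easy to turn into a sizing rule for your own targets:

```python
# RAM cost of deduplication at the quoted ratio of 3.5 GB RAM
# per 1 TB of deduplicated storage.

def dedup_ram_gb(storage_tb: float, gb_per_tb: float = 3.5) -> float:
    """RAM in GB needed to deduplicate storage_tb of target space."""
    return storage_tb * gb_per_tb

print(dedup_ram_gb(3))   # a 3 TB deduplicated target needs 10.5 GB of RAM
```

On a 288GB host this is negligible, but on a small box it can easily compete with the RAM you wanted to spend on write cache.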

Future plans and a few points

Later, when I have finished the performance tests, I plan to create a deduplicated target device for the system drives of my home network VMs to save space there, but I'll leave deduplication disabled for the PLEX media library and the photo library, as those media files are unlikely to be good candidates for deduplication.

When rebooting the iSCSI target machine, I noticed that the FLAT file and physical disk targets were active soon after boot, but the thin-provisioned LSFS and LSFS dedup targets took some time to become active. After some investigation I saw that the LSFS files were all read through, most likely for file checking and verification. My test targets were all 100GB in size and took 5-10 minutes to become active. When weighing the benefits of FLAT or physical targets, I'd guess that with large targets (3TB in my case for the PLEX media library) you would prefer the FLAT file option, to have the targets online soon after a reboot.

Conclusion

For a 2-3 host setup like mine, or even a 1-host installation, it is clearly beneficial to use the Starwind SAN iSCSI software rather than direct disk access or the native Windows iSCSI target software.

My findings on different deployment options will hopefully help you decide on what to go with both in your lab or production installations.

A colleague of mine pointed out that my home lab had more performance than many of his client’s production setups, and told me that if I was happy with the performance of the Starwind SAN software, he could recommend it to his clients for production!