I wanted to test VSAN in my lab without having to go out and buy SSDs or invest in more hardware.
The obvious path was to spin up several ESXi VMs, configure the networking accordingly, and mark a normal HDD-backed disk as an SSD. To make things easy for you, I took screenshots and wrote down every setting and step I made in this blog post.
To prepare networking for the ESXi VMs, you have to set Promiscuous mode to “Accept” in the security settings for the portgroup you place your ESXi VMs on. You should not do this on your whole vSwitch, and certainly not in a production installation. In my lab I created a “NestedESXi” portgroup, where I enabled promiscuous mode by overriding the vSwitch default setting of “Reject”; a VMware KB article explains this in more detail.
This allows packets to travel from the physical NIC on your ESXi host, up to the virtual NIC of your virtual ESXi host, and on to the virtual NICs of its nested VMs. Think Inception, but with network traffic between each layer.
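If you prefer the command line over the web client, something like this should do it from the ESXi shell of the physical host. This is just a sketch: it assumes a standard vSwitch and uses the portgroup name from my lab.

```
# Allow promiscuous mode on the nested-ESXi portgroup only,
# leaving the vSwitch default of "Reject" in place
esxcli network vswitch standard portgroup policy security set \
    --portgroup-name=NestedESXi --allow-promiscuous=true

# Verify that the override took effect
esxcli network vswitch standard portgroup policy security get \
    --portgroup-name=NestedESXi
```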
The next thing to do is to create the ESXi VMs. Select “Other” under “Guest OS Family”, and “VMware ESXi 6.x” under “Guest OS Version”.
This is pretty straightforward, but there is one setting to remember in the “Customize hardware” tab: the “Expose hardware assisted virtualization to the guest OS” option under the CPU section. This needs to be enabled for the nested hosts to be able to run VMs of their own.
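For reference, that checkbox maps to a single parameter in the VM’s .vmx file, so you can also set it there if you’re scripting the VM creation:

```
# Set by the "Expose hardware assisted virtualization to the
# guest OS" checkbox in the VM's CPU settings
vhv.enable = "TRUE"
```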
Other settings on the VMs are 2 vCPUs and 16GB of RAM. The VSAN 6.0 memory requirements state that each host should have a minimum of 32GB of memory to accommodate the maximum of 5 disk groups and a maximum of 7 capacity devices per disk group, but in this lab test, where I will only present 1 SSD and 1 HDD to the VSAN cluster, 16GB per ESXi VM should work fine.
For the disks, I add one 4GB disk for the ESXi installation, one 50GB disk to act as a simulated SSD, and one 150GB disk to act as a capacity device.
In this step I also select the “NestedESXi” network portgroup I prepared earlier.
I created 3 identical ESXi VMs like this, and one more without the extra hard disks, to test out remote storage access to the VSAN cluster. VSAN requires a minimum of 3 hosts contributing storage, each with at least 1 flash device and 1 spinning disk.
The next thing is to add the VMs to vCenter as ESXi hosts.
I had earlier assigned IP addresses and created DNS records for the ESXi VMs, and I added the new hosts into a folder just for housekeeping reasons.
Before I create the VSAN cluster, I have to prepare the 50GB hard disks by marking them as flash disks. In vSphere 6.0 this is really simple: just select the disk device and click the “F” button.
This gives me a confirmation dialog to mark the selected disk as a flash disk, and there you hit “Yes”.
This will set the drive type to Flash.
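If you’d rather do this from the ESXi shell (or you’re on an older vSphere version without the “F” button), the classic alternative is a SATP claim rule that tags the device as SSD. A sketch; the device ID below is just an example, so find your own 50GB device first with `esxcli storage core device list`:

```
# Tag the simulated SSD device via a SATP claim rule
# (replace mpx.vmhba1:C0:T1:L0 with your own device ID)
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL \
    --device=mpx.vmhba1:C0:T1:L0 --option="enable_ssd"

# Reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim --device=mpx.vmhba1:C0:T1:L0

# Verify: the device details should now show "Is SSD: true"
esxcli storage core device list --device=mpx.vmhba1:C0:T1:L0
```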
I also have to prepare a VMkernel port for VSAN traffic. In this lab I’ll use the default vmk0 adapter for management, vMotion, and VSAN traffic. In production you should separate these, though.
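The shell equivalent of ticking “Virtual SAN traffic” on the VMkernel adapter looks like this (a sketch, assuming vmk0 as above; run it on each nested host):

```
# Tag vmk0 for VSAN traffic
esxcli vsan network ipv4 add -i vmk0

# List the interfaces that now carry VSAN traffic
esxcli vsan network list
```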
I do this for the other 2 ESXi VMs, and now everything is set up to create a VSAN cluster.
To enable VSAN, select the “Turn on” checkbox under “Virtual SAN”.
And then add your nested ESXi hosts to the cluster.
After a minute or two, all the disks from the nested ESXi hosts were automatically claimed by VSAN, and a vsanDatastore was created.
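You can sanity-check the result from the shell of any of the nested hosts:

```
# Confirm the host has joined the VSAN cluster and see the member UUIDs
esxcli vsan cluster get

# List the disks this host contributed (1 SSD and 1 HDD per host here)
esxcli vsan storage list

# The vsanDatastore shows up as a "vsan" filesystem
esxcli storage filesystem list
```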
And that’s it! Now I have a VSAN datastore in my nested ESXi cluster.
As this is nested, using “fake” flash devices, I don’t expect to get much performance out of this, but for testing the process of creating a VSAN cluster, this setup works great.
I hope you liked this post. Send me your thoughts in the comments or on Twitter.