
VSAN 6.0 in a nested ESXi 6.0 lab

I wanted to test VSAN in my lab without having to go out and buy SSDs or invest in more hardware.

The obvious path was to spin up several ESXi VMs, adjust the networking settings, and mark normal HDD-based volumes as SSDs. To make things easy for you, I took screenshots and wrote down every setting and step I made in this blog post.


To prepare networking for the ESXi VMs, you have to set Promiscuous mode to “Accept” in the security settings of the portgroup you place your ESXi VMs on. You should not do this on the whole vSwitch in a production installation. In my lab I created a “NestedESXi” portgroup, where I enabled promiscuous mode by overriding the vSwitch default setting of “Reject”. A VMware KB article explains this in more detail.

This allows packets to travel from the physical NIC on your ESXi host, up to the virtual NIC of your virtual ESXi host, and on up to the virtual NICs of its nested VMs. Think Inception, with communication between each layer.
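For reference, here is roughly how the same portgroup change looks through the API. This is a minimal pyVmomi sketch, assuming the physical host is managed by a vCenter at “vcenter.lab.local” and that the hostname, credentials and the “NestedESXi” portgroup name match my lab; adjust them to yours.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only connection details (placeholders); self-signed certs are not verified.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)

# Look up the physical ESXi host that runs the nested ESXi VMs.
host = si.content.searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                            vmSearch=False)
net_sys = host.configManager.networkSystem

# Override the security policy on the "NestedESXi" portgroup only,
# leaving the vSwitch default of "Reject" untouched.
for pg in net_sys.networkInfo.portgroup:
    if pg.spec.name == "NestedESXi":
        spec = pg.spec
        spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy(
            allowPromiscuous=True)
        net_sys.UpdatePortGroup(pgName="NestedESXi", portgrp=spec)

Disconnect(si)
```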

The next thing to do is to create the ESXi VMs. Select “Other” under “Guest OS Family”, and select “VMware ESXi 6.x” under “Guest OS Version”.

This is pretty straightforward, but there is one setting to watch for in the “Customize hardware” tab: the “Expose hardware assisted virtualization to the guest OS” option under the CPU section.

Other settings on the VMs are 2 vCPUs and 16GB of RAM. The VSAN 6.0 memory requirements state that each host should have a minimum of 32GB of memory to accommodate the maximum of 5 disk groups and 7 capacity devices per disk group, but in this lab, where I will only present 1 SSD and 1 HDD to the VSAN cluster, 16GB per ESXi VM works fine.
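For completeness, here is a hedged pyVmomi sketch of those VM settings applied through the API: 2 vCPUs, 16GB of RAM and the nested HV flag on an already-created, powered-off VM. The VM name “nested-esxi01” and the reuse of the `si` connection from the earlier snippet are my lab assumptions.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim


def configure_nested_esxi_vm(si, vm_name="nested-esxi01"):
    """Set CPU, memory and nested hardware virtualization on a powered-off VM."""
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.Destroy()

    spec = vim.vm.ConfigSpec(
        numCPUs=2,
        memoryMB=16 * 1024,
        # The "Expose hardware assisted virtualization to the guest OS" checkbox.
        nestedHVEnabled=True)
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
```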

For the disks, I add one 4GB disk for the ESXi installation, one 50GB disk to act as a simulated SSD, and one 150GB disk to act as a capacity device.

In this step I also select the “NestedESXi” network portgroup I prepared earlier.

I created 3 identical ESXi VMs like this, and one more without the extra hard disks, to test remote storage access to the VSAN cluster. VSAN requires a minimum of 3 hosts, each contributing at least 1 flash device and 1 spinning disk.

The next thing is to add the VMs to vCenter as ESXi hosts.

I had earlier assigned IP addresses and created DNS records for the ESXi VMs, and I added the new hosts into a folder just for housekeeping reasons.
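Adding the hosts can also be scripted. Below is a hedged pyVmomi sketch that adds one nested host into an existing vCenter host folder; the folder object, hostnames and root password are placeholders, and the retry simply trusts the SSL thumbprint the host reports back on the first attempt.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim


def add_host_to_folder(folder, fqdn, user="root", password="VMware1!"):
    """Add a standalone ESXi host into an existing vCenter host folder."""
    spec = vim.host.ConnectSpec(hostName=fqdn, userName=user, password=password)
    try:
        WaitForTask(folder.AddStandaloneHost_Task(spec=spec, addConnected=True))
    except vim.fault.SSLVerifyFault as fault:
        # The first attempt fails because vCenter does not know the host's
        # certificate; retry while trusting the thumbprint it reported.
        spec.sslThumbprint = fault.thumbprint
        WaitForTask(folder.AddStandaloneHost_Task(spec=spec, addConnected=True))
```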

Before I create the VSAN cluster, I have to prepare the 50GB hard disks and mark them as flash disks. In vSphere 6.0 this is really simple: just select the disk device and click the “F” button.

This gives me a confirmation dialog to mark the selected disk as a flash disk, and there you hit “Yes”.

This marks the drive type as Flash.
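The same action is available through the API. This is a hedged sketch that marks the 50GB device as flash on one nested host, with `host` being a HostSystem object looked up as in the first snippet; the size check is just how I tell the simulated SSD apart from the other disks.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim


def mark_simulated_ssd(host):
    """Mark the ~50GB non-SSD local disk on the host as a flash device."""
    storage = host.configManager.storageSystem
    for lun in storage.storageDeviceInfo.scsiLun:
        if not isinstance(lun, vim.host.ScsiDisk) or lun.ssd:
            continue
        size_gb = lun.capacity.block * lun.capacity.blockSize / (1024 ** 3)
        if 45 < size_gb < 55:
            WaitForTask(storage.MarkAsSsd_Task(scsiDiskUuid=lun.uuid))
```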

I also have to prepare a VMkernel port for VSAN traffic. In this lab I’ll use the default vmk0 adapter for management, vMotion and VSAN traffic. In production you should separate these, though.
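Tagging the VMkernel adapter for VSAN traffic can be sketched like this with pyVmomi, again assuming `host` is the nested host’s HostSystem object; in production you would point this at a dedicated VMkernel adapter rather than vmk0.

```python
def tag_vmk_for_vsan(host, device="vmk0"):
    """Enable VSAN traffic on the given VMkernel adapter."""
    vnic_mgr = host.configManager.virtualNicManager
    vnic_mgr.SelectVnicForNicType(nicType="vsan", device=device)
```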

I do this for the other 2 ESXi VMs, and now everything is set up to create a VSAN cluster.

To enable VSAN, select the “Turn on” checkbox under “Virtual SAN”.

And then add your nested ESXi Hosts to the cluster.

After a minute or two, all the disks from the nested ESXi hosts were automatically claimed by VSAN and a vsanDatastore was created.
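That “Turn on” checkbox with automatic disk claiming corresponds roughly to the following pyVmomi sketch: enable VSAN on the cluster with autoClaimStorage so the flash and capacity devices are claimed for you. The cluster name “VSAN-Lab” and the reuse of `si` from the first snippet are my lab assumptions.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim


def enable_vsan_on_cluster(si, cluster_name="VSAN-Lab"):
    """Turn on Virtual SAN in automatic claim mode for an existing cluster."""
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == cluster_name)
    view.Destroy()

    spec = vim.cluster.ConfigSpecEx(
        vsanConfig=vim.vsan.cluster.ConfigInfo(
            enabled=True,
            defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
                autoClaimStorage=True)))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
```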

And that’s it! Now I have a VSAN datastore in my nested ESXi cluster.

As this is nested, using “fake” flash devices, I don’t expect to get much performance out of this, but for testing the process of creating a VSAN cluster this setup works great.

I hope you like this post. Send me your thoughts in the comments or on Twitter.


Regarding home labs

I wanted to share my experience from two years ago, when I decided to make use of some old servers from work to build a home lab.
The short story, and my advice: don’t do it!
Why?

Then the long story…

I decided to bring home from work 2 old Dell PowerEdge 6950s that had been decommissioned and not used for quite some time. Those 6950 servers are huge rack-mounted machines, really heavy 4U units. Each server had 32 GB of RAM and 4 dual-core AMD CPUs, so there were plenty of cores and RAM to play around with. Somehow I managed to fit the units into the trunk of my Volvo S60 and get them home to my basement. I live in a small apartment building where each apartment has a small private storage room, and there are also shared rooms for washing machines and a drying room. To prepare, I set up a small table to put the servers on and installed some power sockets fed from the light switch socket. I made two 4-inch holes at the top of the wall out to the hallway in front of the drying room. I then built a funnel with 2 outlets at the back of the servers, and from the outlets I installed two 4-inch dryer hoses that went up to the 2 holes in the wall. I also had an old UPS from work installed on top of the 2 servers, with its heat fed into the funnel as well.

Before I started this project, I had one old home PC with some 2TB SATA drives installed, and I decided to use that one as an iSCSI storage box for the 2 ESXi hosts. To create the iSCSI network I installed a 2-port Intel NIC in all 3 servers and connected one port on each of the two hosts directly to the NIC on the storage server. The other port on each ESXi host was connected to a small 5-port gigabit home switch I had at hand. From the switch I ran a Cat5e cable up to the study in my apartment on the second floor, where I have my workstation and an additional ESXi host running my monowall router VM, a Symantec NetBackup VM, an AD server VM and the vCenter VM.

Everything was awesome at this point. I installed Windows 2012 on the old PC in the basement and set up Storage Spaces on those 2TB disks to provide iSCSI to the 2 ESXi hosts. I created some VMs to serve my home domain: a secondary AD server, a web server, an Exchange 2010 VM, a pair of Windows 2012 file servers with DFS, an Observium monitoring server on a Linux VM for performance and traffic logging, a Xymon Linux machine for monitoring and alerting, a Citrix NetScaler VPX VM, and so on. The 2 hosts could easily handle the load, and I played around with nested ESXi as well.

When everything was ready, I wrote down the reading on my apartment’s electricity meter and reported it back to the power company. My plan was to get an accurate picture of the usage before and after 1 month of running this setup.

After a few days I got some questions from people in the building about what was making all the noise down in the basement. When I told them what I was doing, they didn’t mind the noise so much; they were actually happy that the heat from the servers blew directly into the drying room, and clothes were drying twice as fast as before… I went down to investigate, and sure enough there was some heat being blown from the servers out into the hallway, but nothing was overheating. Still, I got worried that those old servers were generating much more heat than they should, and that it might hurt to see the next electricity bill. I decided to let the system run until the beginning of the next month, though. I had pretty good monitoring on the setup and had set up some alerts, and just in case I installed a smoke detector in the storage room. I continued to play around with some VMs, and I was quite happy with the setup in terms of performance. As this was basically a free installation for me, I thought that even if I had to pay a little extra for electricity, this could work out OK.

After a month I reported the meter reading back to the power company, and I saw right away that the usage had doubled compared to the month before the installation! That was much more than I had imagined, or was willing to pay, to have a home lab running.
I calculated, based on the usage, that after 6 months I would have spent more money on electricity than the cost of a new setup consisting of a new motherboard, 32GB of DDR3 RAM and a new Intel i7-3770K 4-core CPU. I quickly decided to scrap the lab setup with those 2 old hosts and upgrade the old Windows machine I had used as an iSCSI box instead. I went out and bought the new CPU, RAM and motherboard, salvaged an old Dell PERC 5 RAID controller from work, and installed ESXi on the new box. (I had to modify the PERC 5 to run it on a normal desktop Intel chipset PC, though.) I put a large CPU fan in the box, overclocked the 3.5 GHz CPU to 4.5 GHz, and the setup has been running my home lab ever since. I did cut down the number of VMs, as I now have only one 32GB host instead of 2 x 32GB hosts, but this single 4-core CPU with hyper-threading is so much faster than before.
The power usage of the new host ended up at about 1/7 of the old lab setup, adding only a reasonable amount to the household’s electricity bill.

Later I bought a hardware MikroTik router, moved the vCenter VM down to the lab ESXi host, installed a pair of 2 TB disks in my wife’s PC, and now I run backups to those drives. I also got rid of the ESXi host that was running in my study and sold off its motherboard, CPU and memory. So after almost two years, I think I’m pretty well off in regard to the total cost of ownership of my home lab.

Hopefully this has been an interesting blog post for you, and a warning to those who plan to bring old servers back to life for a home lab project.