
VSAN 6.0 in a nested ESXi 6.0 lab

I wanted to test VSAN without having to go out and buy SSDs or invest in more hardware for my lab.

The obvious path was to spin up several ESXi VMs, configure the networking accordingly, and mark normal HDD-based volumes as SSDs. To make things easy for you, I took screenshots and wrote down every setting and step I made in this blog post.


To prepare networking for the ESXi VMs you have to set Promiscuous mode to "Accept" in the security settings for the portgroup you place your ESXi VMs on. You should not do this on your whole vSwitch in a production installation. In my lab I created a "NestedESXi" portgroup, where I enabled promiscuous mode by overriding the vSwitch default setting of "Reject". A VMware KB article explains this in more detail.

This allows packets to travel from the physical NIC on your ESXi host, up to the virtual NIC of your virtual ESXi host, and on to the virtual NICs of its own VMs. Think Inception, plus communication between each layer.
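If you prefer to script this, the same port group override can be done with pyVmomi. This is a minimal sketch under a few assumptions: "host" is a vim.HostSystem object from an existing vCenter connection, and the port group and vSwitch names ("NestedESXi", "vSwitch0") are just my lab values.

```python
from pyVmomi import vim

def allow_promiscuous(host, pg_name="NestedESXi", vswitch="vSwitch0", vlan=0):
    """Override the vSwitch 'Reject' default on a single port group."""
    netsys = host.configManager.networkSystem
    spec = vim.host.PortGroup.Specification(
        name=pg_name,
        vlanId=vlan,
        vswitchName=vswitch,
        policy=vim.host.NetworkPolicy(
            security=vim.host.NetworkPolicy.SecurityPolicy(allowPromiscuous=True)
        ),
    )
    # Reconfigures the existing port group; only this port group accepts
    # promiscuous traffic, the vSwitch itself keeps its "Reject" default.
    netsys.UpdatePortGroup(pgName=pg_name, portgrp=spec)
```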

The next thing to do is to create the ESXi VMs. Select "Other" under "Guest OS Family", and select "VMware ESXi 6.x" under "Guest OS Version".

This is pretty straightforward, but there is one setting to remember in the "Customize hardware" tab: the option "Expose hardware assisted virtualization to the guest OS" under the CPU section.

The other settings on the VMs are 2 vCPUs and 16GB of RAM. The VSAN 6.0 memory requirements state that each host should have a minimum of 32GB of memory to accommodate the maximum of 5 disk groups and 7 capacity devices per disk group, but in this lab test, where I will only present 1 SSD and 1 HDD to the VSAN cluster, 16GB per ESXi VM works fine.
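For reference, the two VM-level settings that matter here (the nested HV flag and the CPU/memory sizing) map to a simple pyVmomi ConfigSpec. A minimal sketch, assuming "esxi_vm" is an already created, powered-off VM object for one of the nested hosts:

```python
from pyVmomi import vim

# Hypothetical: esxi_vm is the vim.VirtualMachine object for one nested ESXi guest.
spec = vim.vm.ConfigSpec(
    numCPUs=2,
    memoryMB=16 * 1024,
    nestedHVEnabled=True,  # "Expose hardware assisted virtualization to the guest OS"
)
task = esxi_vm.ReconfigVM_Task(spec=spec)  # apply while the VM is powered off
```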

For the disks, I add one 4GB disk for the ESXi installation, one 50GB disk to act as a simulated SSD, and one 150GB disk to act as a capacity device.

In this step I also select the "NestedESXi" network port group I prepared earlier.

I created 3 identical ESXi VMs like this, and one more without the extra hard disks, to test remote storage access to the VSAN cluster. VSAN requires a minimum of 3 hosts, each with at least 1 flash device and 1 spinning disk.

The next step is to add the VMs to vCenter as ESXi hosts.

I had earlier assigned IP addresses and created DNS records for the ESXi VMs, and I added the new hosts into a folder just for housekeeping reasons.
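Adding the hosts can also be scripted with pyVmomi. A rough sketch, where "host_folder" is the vCenter folder I created and the host names and credentials are placeholders:

```python
from pyVmomi import vim

def add_nested_host(host_folder, fqdn, user="root", password="password"):
    spec = vim.host.ConnectSpec(
        hostName=fqdn,
        userName=user,
        password=password,
        force=True,
        # vCenter normally expects sslThumbprint here as well; the first attempt
        # fails with an SSLVerifyFault that contains the thumbprint to use.
    )
    return host_folder.AddStandaloneHost_Task(spec=spec, addConnected=True)

# Hypothetical lab FQDNs
for fqdn in ["esxi-n1.lab.local", "esxi-n2.lab.local", "esxi-n3.lab.local"]:
    add_nested_host(host_folder, fqdn)
```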

Before I create the VSAN cluster, I have to prepare the 50GB hard disks and mark them as flash disks. In vSphere 6.0 this is really simple: just select the disk device and click the "F" button.

This brings up a confirmation dialog to mark the selected disk as a flash disk; hit "Yes" to confirm.

This sets the drive type to Flash.
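The same "mark as flash" operation is exposed in the vSphere 6.0 API as MarkAsSsd on the host's storage system. A sketch, where the device name is a placeholder for the 50GB disk:

```python
from pyVmomi import vim

def mark_as_flash(host, canonical_name="mpx.vmhba1:C0:T1:L0"):
    """Mark one local disk as a flash device (same effect as the "F" button)."""
    storage = host.configManager.storageSystem
    for lun in storage.storageDeviceInfo.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk) and lun.canonicalName == canonical_name:
            return storage.MarkAsSsd_Task(scsiDiskUuid=lun.uuid)
    raise ValueError("disk not found: " + canonical_name)
```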

I also have to prepare a VMkernel port for VSAN traffic. In this lab I'll use the default vmk0 adapter for management, vMotion and VSAN traffic. In production you should separate these, though.
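Tagging the adapter for Virtual SAN traffic can also be done through the host's virtual NIC manager; a minimal sketch using vmk0 as in this lab:

```python
def enable_vsan_traffic(host, vmk="vmk0"):
    """Tag an existing VMkernel adapter for Virtual SAN traffic."""
    vnic_mgr = host.configManager.virtualNicManager
    # Same as ticking "Virtual SAN traffic" on the VMkernel adapter in the web client
    vnic_mgr.SelectVnicForNicType(nicType="vsan", device=vmk)
```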

I do this for the other 2 ESXi VMs, and now everything is set up to create a VSAN cluster.

To enable VSAN, select the "Turn on" checkbox under "Virtual SAN".

Then add your nested ESXi hosts to the cluster.

After a minute or two, all the disks from the nested ESXi hosts were automatically claimed by VSAN and a vsanDatastore was created.
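For the scripted route, the equivalent of the "Turn on" checkbox, including the automatic disk claiming that builds the vsanDatastore, is a single cluster reconfigure call in pyVmomi. A sketch, assuming "cluster" is the vim.ClusterComputeResource object for the nested lab cluster:

```python
from pyVmomi import vim

# Assumed: "cluster" is the vim.ClusterComputeResource for the nested lab cluster.
vsan_config = vim.vsan.cluster.ConfigInfo(
    enabled=True,
    defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
        autoClaimStorage=True  # let VSAN claim the flash and capacity disks automatically
    ),
)
spec = vim.cluster.ConfigSpecEx(vsanConfig=vsan_config)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```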

And that’s it! – Now I have a VSAN datastore in my nested ESXi cluster.

As this is nested, using “fake” flash devices, I don’t expect to get much performance out of this, but for testing the process of creating a VSAN cluster this setup works great.

I hope you like this post. Send me your thoughts in the comments or on Twitter.


PernixData FVP and Citrix Netscaler, a killer combo.


These two technologies play in different playgrounds, but they have a lot in common when it comes to purpose and ideology. Both use smart software to save you money by offloading work from the classic components of your datacenter: storage, network and compute are all relieved of load and operational risk by the two products. Here is an overview of my thoughts on the matter.

First I want to give a brief overview of PernixData FVP: the idea, the installation and the configuration options.

PernixData FVP is a software-based storage acceleration platform that can use both flash media and RAM to cache read and write IO to the storage underneath your VMware ESXi hosts. The FVP software is a three-part system: a kernel module installed on each ESXi host, a management service, and a vCenter plugin (available for both the legacy client and the web client). The software has a small footprint, and you can easily install the FVP management service on your vCenter Windows server or, if you prefer, on a separate Windows machine. The installation is plain and simple: you need a database for the configuration and performance graphs, but otherwise the install is straightforward. You connect it to your vCenter and the installation is pretty much done. Configuration is done through the vCenter client, and through it you install the license; note that you have to do this on the system FVP is installed on, as the license is host based. It's recommended to install a valid SSL certificate for the FVP service, either from your domain's certificate store or from your public SSL vendor. How to do this is explained here: https://pernixdatainc.force.com/articles/KB_Article/Creating-custom-SSL-certificate-for-FVP

When that is done you can connect as usual from your workstation or terminal service and create your FVP cluster.

The software then allows you to create an accelerated cluster (FVP cluster), to which you assign either RAM, flash or both. With the Standard license you can choose either one, but with the Enterprise license you can mix hosts that use RAM as the cache medium with hosts that use flash. You can mix those in the same FVP cluster, but only one media type per host per cluster. You can, however, create another FVP cluster with the same hosts, one using RAM and the other using flash, and then move VMs between the FVP clusters to use either option. (The same VM can't use both types at the same time, but a host can service multiple FVP clusters.) Please note that an FVP cluster is not the same as your normal ESXi host cluster.

When you have created the FVP cluster, assigned acceleration media to it and moved some VMs onto the hosts, you can either select the individual VMs to accelerate on that FVP cluster or, to make things easy, select the datastore, so that all VMs on that datastore get the acceleration method you choose.

You can choose between "write through" and "write back"; in functional terms, you either accelerate read requests only, or you accelerate both read and write requests. A good read on the subject is found here: http://frankdenneman.nl/2013/07/19/write-back-and-write-through-policies-in-fvp/

Once things are set up, you can start looking at the performance data, and soon after you see the software accelerating your storage IO you can look at your storage system and watch its IO load drop significantly.

Give it a few hours to warm up the cache media, and on day 2 I can promise you that you will want to license more hosts!

The idea and business case for this is of course better performance, but no less important is the fact that you offload IOPS from the storage array and therefore save money on expensive SAN upgrades.


And where does Citrix Netscaler fit into all this? Surely it doesn't play in the same space as ESXi hosts and SAN storage. Netscaler gives you a lot of networking and application features: load balancing, content switching and application firewall, to name a few.

What I'm going to write about in the context of this blog post are the acceleration features.

There are four functions to mention in this regard: SSL offloading, integrated caching, compression and TCP optimization.

All of them offload work from your backend services and thereby save IO on your datacenter network, compute and storage.

SSL offloading works by installing the SSL certificate you would normally install on your webserver onto your Netscaler appliance instead (Netscaler comes as a physical appliance or as a VM). The physical appliance has dedicated SSL cards that take care of the otherwise CPU-intensive encryption and decryption, and if you have a high SSL traffic load on your services, this offload function can save you a lot of CPU power on the backend; up to 30% of a webserver's CPU workload can be SSL related, so there is a lot to save here. It also gives you a single point for managing your SSL certificates, where you can see their expiry dates, and your webservers no longer need a separate IP address for each SSL service, since they are not serving SSL content any more.
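To give a feel for how little configuration this takes, here is a rough Python sketch that sets up an SSL offloading virtual server through the Netscaler NITRO REST API. All names, addresses and file paths are made up, and the exact resource and field names should be checked against the NITRO documentation for your firmware version:

```python
import requests

NSIP = "https://192.0.2.10"   # made-up Netscaler management address
HEADERS = {
    "X-NITRO-USER": "nsroot",
    "X-NITRO-PASS": "password",
    "Content-Type": "application/json",
}

def nitro_post(resource, payload):
    """Create a NITRO config resource on the appliance."""
    url = f"{NSIP}/nitro/v1/config/{resource}"
    r = requests.post(url, json=payload, headers=HEADERS, verify=False)
    r.raise_for_status()
    return r

# 1. Install the certificate/key pair that used to live on the webservers
nitro_post("sslcertkey", {"sslcertkey": {
    "certkey": "web_cert",
    "cert": "/nsconfig/ssl/web.example.com.pem",
    "key": "/nsconfig/ssl/web.example.com.key",
}})

# 2. Create an SSL virtual server that terminates HTTPS on the appliance
nitro_post("lbvserver", {"lbvserver": {
    "name": "vs_web_ssl", "servicetype": "SSL",
    "ipv46": "192.0.2.20", "port": 443,
}})

# 3. Bind the certificate to the virtual server
nitro_post("sslvserver_sslcertkey_binding", {"sslvserver_sslcertkey_binding": {
    "vservername": "vs_web_ssl", "certkeyname": "web_cert",
}})
```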

Integrated caching is my favorite function. It uses the cache-control headers from your web service to determine whether the objects requested by the client may be stored in cache, and if so, it uses RAM and (optionally) flash to store the content for the next request. You can also set up your own caching rules if the webserver/application admin is unable to control the cache-control headers on the backend. Once the content is in the Netscaler cache, connecting clients get the content served from the cache store. When you have a high-traffic website, this can save you enormous amounts of network, CPU and storage load on the backend. You can have the Netscaler cache objects for a very short time, for example ticketing system data, or for a longer time for static content.

Compression can also be moved from the web service to the Netscaler appliance, so your webserver's CPU is relieved of that workload. This feature also saves you outbound network traffic, as your clients receive more of the content compressed than your web service might be configured to compress.

TCP optimization also saves resources. It works by having your clients connect to the Netscaler appliance, while the Netscaler maintains its own TCP sessions to your backend. Let's say you have 10,000 concurrent client connections to your website. Without a Netscaler, your webserver would be overwhelmed by the number of TCP sessions: its CPU would be busy just handling the sessions, and the actual data traffic from the web service would be suffocated by TCP control packets. This can easily bring a well-performing webserver to its knees even though the actual data served is minimal. What the Netscaler does here is multiplex the client traffic onto a few TCP sessions to the backend services. With this, the backend server's CPU can serve actual content instead of spending its time on session handling.

Those four functions of the Netscaler appliance all reduce the load on your backend. One of my customers at work moved their website from a 13-server physical web farm load balanced by DNS round robin to a 3-server web farm load balanced and accelerated by our Netscaler appliance. To test the system after installation, I at one point had only 1 backend server active, and the website performance was still fine for normal daily operation.

So with these two technologies, Citrix Netscaler at the frontend and PernixData FVP at the ESXi level, you can save huge amounts of money on both CapEx and OpEx throughout your datacenter.

I hope this was a useful and interesting read.
Cheers.