
Improvements in Network I/O Control for vSphere 6

Network I/O Control (NetIOC) in VMware vSphere 6 has been enhanced to support a number of exciting new features such as bandwidth reservations. A new paper published by the Performance Engineering team shows the performance of these new features. The paper also explores the performance impact of the new NetIOC algorithm. Later tests show that NetIOC offers […]


VMware Advocacy

PernixData FVP and Citrix Netscaler, a killer combo.

These two technologies play in different playgrounds, but they have a lot in common in terms of purpose and ideology. Both use smart software to save you money by moving workload off the classic components of your datacenter: storage, network and compute are all relieved of load and operational risk by the two products. Here is an overview of my thoughts on the matter.

First, a brief overview of PernixData FVP: the idea, the installation and the settings options.

PernixData FVP is a software-based storage acceleration platform that can use both flash media and RAM to cache read and write IO to the storage sitting under your VMware ESXi hosts. FVP is a three-part system: a kernel module installed on each ESXi host, a management service, and a vCenter plugin (available for both the legacy client and the web client). The software has a small footprint, and you can easily install the FVP management service on your vCenter Windows server or, if you prefer, on a separate Windows machine. The installation is straightforward: you need a database for the configuration and performance graphs, but otherwise you simply connect it to your vCenter and the installation is pretty much done. Configuration is done through the vCenter client. Through it you install the license, but you have to run the client on the system where FVP is installed to activate it, as the license is host-based. It is recommended to install a valid SSL certificate for the FVP service, either from your domain's certificate store or from your public SSL vendor. How to do this is explained here: https://pernixdatainc.force.com/articles/KB_Article/Creating-custom-SSL-certificate-for-FVP

When that is done, you can connect as usual from your workstation or terminal server and create your FVP cluster.

What the software then allows you to do is create an accelerated cluster (FVP cluster), to which you assign either RAM, flash, or both. With the Standard license you can choose one or the other; with the Enterprise license you can mix hosts using RAM as the cache medium with hosts using flash. You can mix those within the cluster, but only one type per host per cluster. You can, however, create a second FVP cluster with the same hosts, one using RAM and the other using flash, and then move VMs between FVP clusters to use either option. (The same VM can't use both types at the same time, but a host can service multiple FVP clusters.) Please note that an FVP cluster is not the same as your normal ESXi host cluster.

When you have created the FVP cluster, assigned acceleration media to it and moved some VMs over to the hosts, you can either select the individual VMs to accelerate on that FVP cluster or, to make things easy, select the datastore, in which case all VMs on that datastore get the acceleration method you select.

You can choose between "write-through" and "write-back"; functionally, that means you either accelerate read requests only, or you accelerate both read and write requests. A good read on the subject is found here: http://frankdenneman.nl/2013/07/19/write-back-and-write-through-policies-in-fvp/ .
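
To make the distinction concrete, here is a minimal toy sketch in Python of the two policies. It is only an illustration of the idea, not FVP code, and every name in it is made up: in write-through a write is acknowledged only once the backend datastore has it, while in write-back it is acknowledged from the cache and destaged later.

class Cache:
    def __init__(self, backend, write_back=False):
        self.backend = backend          # a dict standing in for the datastore
        self.write_back = write_back
        self.store = {}                 # stand-in for the RAM/flash cache medium
        self.dirty = set()              # blocks not yet destaged to the backend

    def read(self, key):
        if key not in self.store:       # cache miss: fetch from the datastore
            self.store[key] = self.backend[key]
        return self.store[key]          # cache hit path

    def write(self, key, value):
        self.store[key] = value
        if self.write_back:
            self.dirty.add(key)         # acknowledge now, destage later
        else:
            self.backend[key] = value   # write-through: datastore sees it immediately

    def destage(self):
        for key in list(self.dirty):    # background flush used in write-back mode
            self.backend[key] = self.store[key]
            self.dirty.discard(key)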

Once things are set up, you can start looking at the performance data. Soon after, you will see the software accelerating your storage IO, and if you then look at your storage system, you will see a huge reduction in the IO hitting the array.

Give it a few hours to warm up the cache media, and on day two I can promise you that you will want to license more hosts!

The idea and business case is of course better performance, but no less important is the fact that you offload IOPS from the storage array and therefore save money on expensive SAN upgrades.

 

And where does Citrix Netscaler fit into all this? Surely it has no role to play at the level of ESXi hosts and SAN storage. Netscaler gives you a lot of networking and application features: load balancing, content switching and application firewall, to name a few.

What I'm going to write about in the context of this blog post are the acceleration features.

There are four functions to mention here: SSL offloading, integrated caching, compression and TCP optimization.

All of them offload work from your backend services and therefore save IO on your datacenter network, compute and storage.

SSL offloading works by installing the SSL certificate you would normally install on your webserver onto your Netscaler appliance instead (Netscaler comes as a physical appliance or as a VM). The appliance has dedicated SSL cards that take care of the otherwise CPU-intensive encryption and decryption, and if you have a high SSL traffic load on your services, this offloading can save you a lot of CPU power on the backend; up to 30% of a webserver's CPU load can be SSL-related, so there is a lot to save here. It also gives you a single point for managing your SSL certificates, where you can see their expiry dates, and you no longer need multiple IP addresses on your webservers for each SSL service, as the webservers are not serving SSL content any more.
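
As a rough illustration of what terminating SSL in front of the backend means, here is a minimal sketch in Python of a TLS-terminating proxy. This is not how the Netscaler is implemented, just the concept; the backend address and certificate file names are made-up examples.

import socket
import ssl
import threading

BACKEND = ("10.0.0.10", 80)          # plaintext backend webserver (made-up address)
CERT, KEY = "site.pem", "site.key"   # the certificate now lives on the proxy, not the webserver

def pump(src, dst):
    # copy bytes one way until either side closes
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()

def handle(tls_client):
    backend = socket.create_connection(BACKEND)
    threading.Thread(target=pump, args=(tls_client, backend), daemon=True).start()
    pump(backend, tls_client)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(CERT, KEY)

with socket.create_server(("0.0.0.0", 443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            client, _ = tls_listener.accept()   # TLS handshake happens here, on the proxy's CPU
            threading.Thread(target=handle, args=(client,), daemon=True).start()

The point is simply that the expensive handshake and crypto work happen on the box holding the certificate, while the backend only ever sees plain HTTP.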

Integrated caching is my favorite function. It uses the cache-control headers from your web service to determine whether the objects requested by a client may be stored in cache, and if so, it uses RAM and (optionally) flash to store the content for the next request. You can also set up your own caching rules if the webserver/application admin is unable to control the cache-control headers on the backend. Once the content is in the Netscaler cache, connecting clients get the content served from the cache store. When you have a high-traffic website, this can save you enormous amounts of network, CPU and storage load on the backend. You can have the Netscaler cache objects for a very short time, for example for ticketing-system data, or for a longer time for static content.
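
To show the mechanism, here is a small Python sketch of a cache driven by the Cache-Control header. It is a simplified illustration, not Netscaler code, and all names in it are invented.

import time

class ObjectCache:
    def __init__(self):
        self.entries = {}  # url -> (body, expires_at)

    def cacheable_for(self, headers):
        # return the max-age in seconds if the response may be cached, else None
        cc = headers.get("Cache-Control", "")
        if "no-store" in cc or "private" in cc:
            return None
        for directive in cc.split(","):
            directive = directive.strip()
            if directive.startswith("max-age="):
                return int(directive.split("=", 1)[1])
        return None

    def get(self, url):
        entry = self.entries.get(url)
        if entry and entry[1] > time.time():
            return entry[0]              # cache hit: the backend never sees this request
        return None

    def put(self, url, body, headers):
        ttl = self.cacheable_for(headers)
        if ttl:                          # honour the backend's cache-control header
            self.entries[url] = (body, time.time() + ttl)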

Compression can also be moved from the web service to the Netscaler appliance, so your webserver's CPU is relieved of that workload. This feature also saves you outbound network traffic, as clients receive more of the content compressed than your web service might be configured to compress itself.

TCP optimization also saves resources. It works by having your clients connect to the Netscaler appliance, while the Netscaler opens its own TCP sessions to your backend. Let's say you have 10,000 concurrent client connections to your website. Without the Netscaler, your webserver would be overwhelmed by the number of TCP sessions: its CPU would be busy just handling sessions, and the actual data traffic from the web service would be suffocated by TCP control packets. This can easily bring a high-performance webserver to its knees even though the actual data served is minimal. What the Netscaler does here is multiplex the data traffic into a few TCP sessions to the backend services. With this, the backend server can use its CPU to serve actual content instead of spending its time on session handling.
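
The same principle can be illustrated in a few lines of Python: a requests.Session keeps a small pool of persistent connections alive and reuses them, instead of paying the connection setup cost for every request. The Netscaler does this between thousands of client connections and a handful of backend connections rather than inside a client library, and the URL below is just a placeholder.

import requests

URL = "http://backend.example.local/"   # made-up backend service

def fetch_without_reuse(n):
    # naive approach: a brand new TCP connection (and handshake) per request
    for _ in range(n):
        requests.get(URL)

def fetch_with_reuse(n):
    # multiplexed approach: many requests share a few pooled keep-alive connections
    with requests.Session() as session:
        for _ in range(n):
            session.get(URL)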

These four Netscaler functions all reduce load on your backend. One of my customers at work moved their website from a 13-server physical web farm, load balanced by DNS round robin, to a 3-server web farm load balanced and accelerated by our Netscaler appliance. To test the system after installation, I even ran it at one point with a single backend server active, and website performance was still fine for a normal day's operation.

So with these two technologies, Citrix Netscaler on the frontend and PernixData FVP at the ESXi level, you can save huge amounts of money on both CapEx and OpEx throughout your datacenter.

I hope this was a useful read and interesting for you.
Cheers.

Regarding the Cloud thing…

A short blog this week as I‘ve been sick in bed for most of the week and my brain has been flushed.

Today I wanted to tell you about the Cloud… – Yes, you read correctly, – A short blog about the Cloud thing…

I'm not going to write about what's possible to do or what the benefits are for companies that move their services out to the public cloud. Instead I'm going to write a little bit about the state of mind people have when they talk about the cloud, specifically companies and people here in Iceland.

People here see the cloud as something new and exciting, just like everyone else, but they also believe the cloud must be located out in the big world, and therefore cannot be something hosted on a computer system in Iceland.

This seems strange to me, as the IT hosting business in Iceland is focused on marketing Iceland as a good location for datacenters, mostly because of cheap electricity but also because of the cold air that provides "free cooling", as the marketing people call it. And as far as I know, we are doing a pretty good job of getting foreign companies to run their services here in Iceland.

But why are Icelandic businesses looking the other way when they host their applications in the cloud, thinking it must be somewhere out there in the big world? My guess is that this is mainly a marketing issue. We, the IT hosting people in Iceland, need to market ourselves not as on-premises service and support companies, but as cloud service providers. Surely we will still have some on-premises hybrid setups, but we need to change our mindset and start believing in our small country a little more.

I think we made some small progress in 2014, but I look forward to seeing how 2015 turns out for us here in Iceland. I believe that if we play our cards right, we can win over the Icelandic companies that are looking to the cloud.

Regarding home labs

I wanted to share my experience from two years ago, when I decided to make use of some old servers from work to build a home lab.
The short story, and my conclusion: don't do it!
Why?

Then the long story…

I decided to bring home from work two old Dell PowerEdge 6950s that had been decommissioned and unused for quite some time. Those 6950s are huge rack-mounted servers, really heavy 4U units. Each server had 32 GB of RAM and four dual-core AMD CPUs, so there were plenty of cores and plenty of RAM to play around with. Somehow I managed to fit the units into the trunk of my Volvo S60 and get them home to my basement. I live in a small apartment building where each apartment has a small private storage room, and there is also a shared room for washing machines and a drying room. To prepare, I set up a small table to put the servers on and installed some power sockets fed from the light-switch circuit. I made two 4-inch holes at the top of the wall out to the hallway in front of the drying room, then built a funnel on the back of the servers with two outlets. From the outlets I ran two 4-inch dryer hoses up to the holes in the wall. I also had an old UPS from work installed on top of the two servers and routed its heat into the funnel as well.

Before starting this project, I had one old home PC with some 2 TB SATA drives installed, and I decided to use it as an iSCSI storage box for the two ESXi hosts. To create the iSCSI network I installed a 2-port Intel NIC in all three servers and connected one port on each host directly to the NIC on the storage server. The other NIC on each ESXi host was connected to a small 5-port gigabit home switch I had at hand. From the switch I ran a Cat5e cable up to my study on the second floor, where I have my workstation and an additional ESXi host running my m0n0wall router VM, a Symantec NetBackup VM, an AD server VM and the vCenter VM.

Everything was awesome at this point. I installed Windows Server 2012 on the old PC in the basement and set up Storage Spaces on the 2 TB disks to provide iSCSI to the two ESXi hosts. I created some VMs to serve my home domain: a secondary AD server, a web server, an Exchange 2010 VM, a pair of Windows 2012 file servers with DFS, an Observium monitoring server on Linux for performance and traffic logging, an Xymon Linux VM for monitoring and alerting, a Citrix Netscaler VPX VM, and so on. The two hosts could easily handle the load, and I also played around with nested ESXi.

When everything was ready, I wrote down the reading on my apartment's electricity meter and reported it to the power company. The idea was to get an accurate picture of the usage before and after one month with this setup running.

After a few days I got questions from people in the building about what was making all the noise down in the basement. When I told them what I was doing, they didn't mind the noise so much; in fact they were happy that the heat from the servers blew directly into the drying room, and clothes were drying twice as fast as before. I went down to investigate, and sure enough there was some heat blowing from the servers out into the hallway, but nothing was overheating. Then again, I started to worry that those old servers were generating more heat than they should, and that it might hurt to see the next electricity bill. I decided to let the system run until the beginning of the next month anyway. I had pretty good monitoring on the setup, set up some alerts, and just in case installed a smoke detector in the storage room. I continued to play around with some VMs, and I was quite happy with the setup in terms of performance. As this was basically a free installation for me, I thought that even if I had to pay a little extra for electricity, it could still work out fine.

After a month I reported the meter reading back to the power company. I saw right away that the bill had doubled compared to the month before the installation! That was much more than I had imagined, or was willing to pay, to have a home lab running.
Based on the usage, I calculated that after six months I would have spent more on electricity than the cost of a new setup made up of a new motherboard, 32 GB of DDR3 RAM and a new Intel i7-3770K quad-core CPU (a rough back-of-the-envelope version of that calculation is sketched below). I quickly decided to scrap the lab setup with the two old hosts and instead upgrade the old Windows machine I had been using as an iSCSI box. I went out and bought the new CPU, motherboard and memory, refurbished an old Dell PERC 5 RAID controller from work, and installed ESXi on the new box. (I had to modify the PERC 5 to make it run on a normal desktop Intel chipset PC.) I put a large CPU fan in the box, overclocked the 3.5 GHz CPU to 4.5 GHz, and that setup has been running my home lab ever since. I did cut down the number of VMs, since I now have one 32 GB host instead of two, but the single quad-core CPU with hyper-threading is so much faster than before.
The power usage of the new host ended up at about one seventh of the old lab setup, and it adds only a reasonable amount to the household's electricity bill.
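
For the curious, here is a rough back-of-the-envelope version of that break-even calculation in Python. Every figure in it is made up for illustration; the post does not give the actual wattages, electricity price or hardware cost, only that the bill doubled and the new host drew roughly a seventh of the power.

OLD_LAB_WATTS = 1400                 # assumed draw of two PowerEdge 6950s plus UPS and switch
NEW_LAB_WATTS = OLD_LAB_WATTS / 7    # the new host used roughly 1/7 of the old lab's power
PRICE_PER_KWH = 0.15                 # assumed electricity price
NEW_HARDWARE_COST = 600              # assumed cost of motherboard + 32 GB DDR3 + i7-3770K

def monthly_cost(watts, hours=24 * 30):
    return watts / 1000 * hours * PRICE_PER_KWH

savings_per_month = monthly_cost(OLD_LAB_WATTS) - monthly_cost(NEW_LAB_WATTS)
print(f"Old lab costs {monthly_cost(OLD_LAB_WATTS):.0f} per month to run")
print(f"New hardware pays for itself in {NEW_HARDWARE_COST / savings_per_month:.1f} months")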

Later I bought a hardware MikroTik router, moved the vCenter VM down to the lab ESXi host, installed a pair of 2 TB disks in my wife's PC, and now run my backups to those drives. I also got rid of the ESXi host that was running in my study and sold off its motherboard, CPU and memory. So after almost two years, I think I'm pretty well off in terms of total cost of ownership on my home lab.

Hopefully this has been an interesting blog post for you, and a warning for those who plan to bring old servers to life for a home lab project.

vSphere performance tuning

When you have set up your vSphere services and your sysadmins start using the web client to manage their VMs, you might get reports that the web client is slow compared to the old legacy client. This might happen after some time, depending on the size of your setup, but at some point you might have to scale out the initial vSphere installation. Once you google the subject of slow web client performance, you'll find out that this is a pretty common topic. But what can you do about it?

Here I have listed several changes that have helped me provide a better experience for the sysadmins who manage their VMs. The list is not ordered by "success rate", and your environment might benefit more from some points than others. All of this is overkill for a small deployment, but hopefully it can help you provide a better user experience for the web client.

  1. Separate the services. Split the installation into four separate VMs: an SSO and integration service VM, a vCenter Server VM, a web client service VM and finally an Update Manager VM. This lets you serve the different types of services better and splits the installation into four failure zones. The separation will also make life easier when you upgrade to newer versions of vSphere. On top of those four VMs, you have the database server VM, the vMA appliance, the replication appliance, and so on.
  2. Assign the right number of vCPUs to each VM depending on its role.
  • The SSO and integration service VM has two primary Java processes, but as user-facing performance is not tightly bound to this VM, one vCPU should be just fine. If your installation is distributed and you have multiple vCenter installations, you might want to look more closely at this VM, separate the integration service out of it and run that on the vCenter service VM instead.
  • The web client service runs one primary Java process; two vCPUs are good for this machine, and its performance is vital for your sysadmins.
  • The vCenter service VM has two primary Java processes running, and a 4-vCPU VM should be able to serve them well.
  • The Update Manager VM can have one vCPU, as its workload does not affect your web client users.
  3. Assign more memory to the Java processes than the default settings, and based on my experience, considerably more than the guidelines you will find from VMware on this subject. I saw huge improvements when I did this. First assign 6 GB of RAM to the web client VM and 24 GB to the vCenter VM, then adjust the Java memory settings.
    There are two settings to take note of, the initial size and the maximum size; set both to the same value. For the web client service, set them to 2 GB in the file "C:\Program Files\VMware\Infrastructure\vSphereWebClient\server\bin\service\conf\wrapper.conf":
    wrapper.java.initmemory=2048m
    wrapper.java.maxmemory=2048m

On the vCenter service VM, set the memory for the main Java processes as well. In "C:\Program Files\VMware\Infrastructure\tomcat\conf\wrapper.conf", under "Java Additional Parameters", set:
wrapper.java.additional.9=-Xmx16024M
wrapper.java.additional.10=-Xss16024K

It might also be beneficial to change the heap settings in the file "C:\Program Files\VMware\Infrastructure\Profile-Driven Storage\conf\wrapper.conf":

# Initial Java Heap Size (in MB)
wrapper.java.initmemory=1024

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=1024

  4. Run the vCenter service VM and the database server VM on the same host, and the other three main VMs on a different host. The database VM and the vCenter service VM have a lot to talk about, so to speak, so placing them on the same host helps with both network traffic and latency.
  5. Publish the web client service to your admins via a Citrix Netscaler appliance. You get a lot of benefits from this. Just remember to publish both port 9443 for the web service and port 7331 (or 7343 if you are on vSphere 5.5 Update 2) for console access.
    To name a few of the benefits of using Netscaler in front of the web client service VM:
  • SSL offloading from the web service VM. Even if you keep the default SSL port 9443 towards the web client service, you terminate and multiplex the TCP sessions on the Netscaler, so you get only a few sessions to the web service. The SSL load on the web service VM is moved to the SSL chips on the Netscaler, which means less CPU load on the VM.
  • HTTP-to-HTTPS redirection. You can tell your sysadmins to browse directly to the DNS name you set, for example http://vsphere.company.local, and they don't have to remember port 9443 or type https, as you set up the redirection and port translation on the Netscaler virtual server and service (a toy sketch of this redirect idea follows this list).
  • Use the in-memory cache on the Netscaler to offload the web service VM.
  • Use ACLs to control which subnets or IPs can access the web client service.
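
Here is a toy sketch of the redirect-and-port-translation idea in Python, not Netscaler configuration: plain http://vsphere.company.local is answered with a redirect to the HTTPS URL on port 9443, so nobody has to remember the port. The hostname is the example from the text; the /vsphere-client/ path and everything else are illustrative.

from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = "https://vsphere.company.local:9443/vsphere-client/"

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)                 # permanent redirect to the real web client URL
        self.send_header("Location", TARGET)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 80), RedirectHandler).serve_forever()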

I hope this guide helps your vSphere Web Client run faster. Bear in mind that these Java memory sizes might not be the best options for your setup, but this is what I have done to tune my installation to run more smoothly.

 

I use Veeam for backups of my environment, and it generates quite a lot of load on the vCenter server VM, so give it plenty of resources.

Some projects that I’m working on

I have a set of projects going on this week. To list a few 🙂

Veeam Cloud Connect – Testing with @olafurh in our work lab. It's so easy to be a Veeam client and point to a cloud repository.

Netscaler 10.5 SPDY – Functionality with Exchange 2013 OWA seems to be broken, or maybe it's just not supported?
PernixData FVP – RAM acceleration, testing at work. It's soooooo crazy fast!

PernixData Case Study – Read it on the PernixData website.

VMware Partner University – Getting some new online tests done. I have done like a million of those already; have a look at my LinkedIn profile to see the list. I was at #4 on the leaderboard at one point, then the website broke and my score doesn't get updated… (I have to be #1 🙂)

Veeam Certified Engineer (VMCE) – Took the 3-day course this week; @haslund was a great teacher! Waiting for the v8 test to become available.

VMUG Iceland is in the registration process. I'm waiting to get a VMware SE assigned. The website is not ready yet, but the domain vmug.is has been registered and a Facebook page has been created.