Monthly Archives: December 2014

Regarding the Cloud thing…

A short blog this week, as I've been sick in bed for most of the week and my brain has been flushed.

Today I want to tell you about the Cloud… Yes, you read that correctly: a short blog about the Cloud thing…

I'm not going to write about what's possible to do, or what the benefits are for companies that move their services out to the public cloud. Instead I'm going to write a little bit about the state of mind people have when they talk about the cloud, specifically companies and people here in Iceland.

People here see the cloud as something new and exciting, just like everyone else, but they also believe the cloud must be something located out in the big world, and hence not something hosted on a computer system located in Iceland.

This seems strange to me, as the IT hosting business in Iceland is focused on marketing Iceland as a good location for datacenters, mostly because of cheap electricity, but also because of the cold air that provides "free cooling", as the marketing people call it. And as far as I know we are doing a pretty good job of getting foreign companies to run their services here in Iceland.

But why are Icelandic businesses busy looking the other way to host their applications in the cloud, thinking it must be somewhere out there in the big world? My guess is that this is only a marketing issue. We, the IT hosting people in Iceland, need to market ourselves not as on-premises service and support companies, but as cloud service providers. Surely we will have some on-premises hybrid setups, but we need to turn our minds around and start to believe in our small country a little more.

I think we made some small progress in 2014, but I look forward to seeing how 2015 will turn out for us here in Iceland. I believe that if we play our cards right, we can win the cloud-seeking Icelandic companies over.

Regarding home labs


I wanted to share my experience from two years ago, when I decided to make use of some old servers from work to build a home lab.
The short story, and my conclusion: Don't do it!

Then the long story…

I decided to bring home from work two old Dell PowerEdge 6950 servers that had been decommissioned and not used for quite some time. These 6950s are huge rack-mount servers, really heavy 4U units. Each server had 32 GB of RAM and 4 dual-core AMD CPUs, so there were plenty of cores and plenty of RAM to play around with. Somehow I managed to fit the units into the trunk of my Volvo S60 and get them home to my basement. I live in a small apartment building where each apartment has a small private storage room, and there is also a shared room for washing machines and a drying room. To prepare, I set up a small table to put the servers on and installed some power sockets fed from the light switch socket. I made two 4-inch holes at the top of the wall out to the hallway in front of the drying room, then built a funnel with two outlets at the back of the servers. From the outlets I ran two 4-inch dryer hoses up to the two holes in the wall. I also had an old UPS from work installed on top of the two servers, with its heat going into the funnel as well.

Before I started this project, I had one old home PC with some 2 TB SATA drives installed, and I decided to use that one as an iSCSI storage box for the two ESXi hosts. To create the iSCSI network I installed a two-port Intel NIC in all three servers and connected one port from each of the two hosts directly to the NIC on the storage server. On the ESXi hosts the other port was connected to a small 5-port gigabit home switch I had at hand. From the switch I ran a Cat5e cable up to my study in my apartment on the second floor, where I have my workstation and an additional ESXi host running my m0n0wall router VM, a Symantec NetBackup VM, an AD server VM and the vCenter VM.

Everything was awesome at this point. I installed Windows Server 2012 on the old PC in the basement and set up Storage Spaces on the 2 TB disks to provide iSCSI to the two ESXi hosts. I created some VMs to serve my home domain: a secondary AD server, a web server, an Exchange 2010 VM, a pair of Windows Server 2012 file servers with DFS, an Observium server on a Linux VM for performance and traffic logging, a Xymon Linux VM for monitoring and alerting, a Citrix NetScaler VPX VM and so on. The two hosts could easily handle the load, and I played around with nested ESXi as well.

When everything was ready, I decided to write down the reading of the electricity meter for my apartment and report it back to the power company. My plan was to get an accurate picture of the usage before and after one month of running this setup.

After a few days I got questions from people in the building about what was making all the noise down in the basement. When I told them what I was doing, they didn't mind the noise so much; they were actually happy that the heat from the servers blew directly into the drying room, and clothes were drying twice as fast as before… I went down to investigate, and sure enough, some heat was blowing from the servers out into the hallway, but nothing was overheating. Still, I got worried that those old servers were generating much more heat than they should, and that it might hurt to see the electricity bill the next month. I decided to let the system run until the beginning of the next month, though. I had pretty good monitoring of the setup and had set up some alerts, and just in case I installed a smoke detector in the storage room. I continued to play around with some VMs, and I was quite happy with the setup in terms of performance. As this was basically a free installation for me, I thought that even if I had to pay a little extra for electricity, this could work out OK.

After a month I reported the electricity meter reading back to the power company. I saw right away that the bill had doubled compared to the month before the installation! That was much more than I had imagined, or was willing to pay, to have a home lab running.
Based on that usage I calculated that after six months I would have spent more on electricity than the cost of a whole new setup: a new motherboard, 32 GB of DDR3 RAM and a new Intel i7-3770K 4-core CPU. I quickly decided to scrap the lab setup with the two old hosts and upgrade the old Windows machine I had been using as an iSCSI box instead. I went out and bought the new CPU, motherboard and RAM, salvaged an old Dell PERC 5 RAID controller from work, and installed ESXi on the new box. (I had to modify the PERC 5, though, to run it on a normal Intel desktop chipset.) I put a large CPU fan in the box, overclocked the 3.5 GHz CPU to 4.5 GHz, and that setup has been running my home lab since. I did cut down the number of VMs, as I now only have one 32 GB host instead of 2 x 32 GB hosts, but this single 4-core CPU with hyper-threading is so much faster than before.
The power usage of the new host ended up at 1/7 of the old lab setup, and it adds a reasonable amount to the household's electricity bill.
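For anyone weighing the same trade-off, the payback math is simple enough to sketch. All the figures below (wattages, price per kWh and hardware cost) are made-up assumptions for the example, not my actual numbers:

```python
# Back-of-the-envelope payback calculation for replacing power-hungry
# lab hardware with something more efficient. All figures below are
# illustrative assumptions for the example, not my actual numbers.

def monthly_energy_cost(watts, price_per_kwh):
    """Cost of running a constant load 24/7 for a 30-day month."""
    kwh = watts * 24 * 30 / 1000.0
    return kwh * price_per_kwh

def payback_months(old_watts, new_watts, price_per_kwh, hw_cost):
    """Months until the electricity savings pay for the new hardware."""
    savings = (monthly_energy_cost(old_watts, price_per_kwh)
               - monthly_energy_cost(new_watts, price_per_kwh))
    return hw_cost / savings

# Assumptions: two old 4U hosts drawing ~900 W combined, a replacement
# box drawing 1/7 of that, 0.12 per kWh, and 500 for the new parts.
old_watts = 900.0
new_watts = old_watts / 7
print(monthly_energy_cost(old_watts, 0.12))            # old setup, per month
print(payback_months(old_watts, new_watts, 0.12, 500))
```

With numbers like these the new box pays for itself in well under a year, which matches the lesson of the story: measure the draw of old iron before committing to it.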

Later I bought a hardware MikroTik router, moved the vCenter VM down to the lab ESXi host, installed a pair of 2 TB disks in my wife's PC, and now I run the backups to those drives. I also got rid of the ESXi host that was running in my study and sold off its motherboard, CPU and memory. So after almost two years I think I'm pretty well off in regard to the total cost of ownership of my home lab.

Hopefully this has been an interesting blog post for you, and a warning for those who plan to bring old servers back to life for a home lab project.

vSphere performance tuning

When you have set up your vSphere services and your sys-admins start to use the Web Client to manage their VMs, you might get reports back that the Web Client is slow compared to the old legacy client. This might happen after some time, depending on the size of your setup, but at some point you might have to scale out the initial vSphere installation. Once you google the subject of a slow Web Client experience, you'll find out that this is a pretty common topic. But what can you do about it?

Here I have listed several solutions that have helped me provide a better experience for the sys-admins who manage their VMs. The list is not ordered by "success rate", and your environment might benefit more from different points on the list. This is also overkill if you have a small deployment. But hopefully it can help you provide a better user experience for the Web Client.

  1. Separate the services. Split the installation into 4 separate VMs: an SSO and integration service VM, a vCenter Server VM, a Web Client service VM, and finally an Update Manager VM. This lets you serve the different types of services better and splits the installation into 4 failure zones. The separation will also make life easier when you upgrade to newer versions of vSphere. On top of those 4 VMs, you have the database service VM, the vMA appliance, the replication appliance, etc.
  2. Assign the right number of vCPUs to the VMs depending on their roles.
  • The SSO and integration service VM has 2 primary Java processes, but as user performance is not really bound to this VM's performance, 1 vCPU should be just fine. If your installation is distributed and you have multiple vCenter installations, you might want to look at this VM more closely, separate the integration service from it, and run that on the vCenter service VM.
  • The Web Client service runs one primary Java process; 2 vCPUs are good for this machine, and its performance is vital for your sys-admins.
  • The vCenter service VM has 2 primary Java processes running, and a 4-vCPU VM should be able to serve those processes well.
  • The Update Manager VM can have 1 vCPU, as its workload does not affect your Web Client users.
  3. Assign more memory to the Java processes than the default settings, and based on my experience, much more than the guidelines you find on this subject from VMware… I saw huge improvements when I did this. First assign 6 GB of RAM to the Web Client VM and 24 GB to the vCenter VM. Then adjust the Java memory settings.
    There are 2 settings to take note of: initial size and max size. Set both values to the same amount. For the Web Client service, set it to 2 GB in the file "C:\Program Files\VMware\Infrastructure\vSphereWebClient\server\bin\service\conf\wrapper.conf".

On the vCenter service VM, set the 2 main Java process services in
"C:\Program Files\VMware\Infrastructure\tomcat\conf\wrapper.conf",
under "Java Additional Parameters": -Xmx16024M and -Xss16024K.

And it might be beneficial for you to also change this in the file "C:\Program Files\VMware\Infrastructure\Profile-Driven Storage\conf\wrapper.conf":

# Initial Java Heap Size (in MB)

# Maximum Java Heap Size (in MB)
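As a concrete sketch of what such a change looks like: these wrapper.conf files use the Java Service Wrapper format, so setting both heap values to 2 GB for the Web Client service would be something like the following. The property names shown are the standard Java Service Wrapper ones and can differ between vSphere versions, so check your own wrapper.conf before editing:

```ini
# Sketch for ...\vSphereWebClient\server\bin\service\conf\wrapper.conf
# (assumes the standard Java Service Wrapper property names)

# Initial Java Heap Size (in MB)
wrapper.java.initmemory=2048

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=2048
```

Remember to restart the corresponding Windows service after changing the values, as they are only read at startup.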

  4. Run the vCenter service VM and the database server VM on the same host, and the other 3 main VMs on a different host. The database VM and the vCenter service VM have a lot to talk about, so to speak, so placing them on the same host helps with both network traffic and latency.
  5. Publish the Web Client service to your admins via a Citrix NetScaler appliance. You get a lot of benefits from this. Just remember to publish both port 9443 for the web service, and port 7331 (or 7343 if you have vSphere 5.5 Update 2) for console access.
    To name a few of the benefits of using a NetScaler in front of the Web Client service VM:
  • SSL offloading from the web service VM. Even if you use the default SSL port 9443 to the Web Client service, you terminate and multiplex the TCP sessions on the NetScaler, so only a few sessions reach the web service. The SSL load on the web service VM is moved to the SSL chips on the NetScaler, so you get less CPU load on the VM.
  • HTTP-to-HTTPS redirection. You can tell your sys-admins to browse directly to the DNS name you set, and they don't have to worry about remembering port 9443 or typing https, as you set up the redirection and port translation on the NetScaler virtual server and service.
  • Use the in-memory cache on the NetScaler to offload the web service VM.
  • Use ACLs to control which subnets or IPs can access the Web Client service.
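As a rough sketch, publishing the Web Client behind a NetScaler could look something like this in the NetScaler CLI. All names and IP addresses here are made-up examples, and the exact syntax can vary between firmware versions:

```
# Back-end: the Web Client service VM on its default SSL port
add server srv-webclient 10.0.0.10
add service svc-webclient srv-webclient SSL 9443

# Front-end SSL vserver on port 443 (port translation 443 -> 9443)
add lb vserver vs-webclient-ssl SSL 10.0.0.100 443
bind lb vserver vs-webclient-ssl svc-webclient
bind ssl vserver vs-webclient-ssl -certkeyName webclient-cert

# HTTP vserver with no service bound: it stays DOWN, so the
# redirect URL kicks in and sends browsers to https automatically
add lb vserver vs-webclient-http HTTP 10.0.0.100 80
set lb vserver vs-webclient-http -redirectURL https://vsphere.example.local/
```

The console proxy port (7331, or 7343 on vSphere 5.5 Update 2) needs to be published the same way, with a second service and virtual server following the same pattern.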

I hope this guide helps your vSphere Web Client run faster. Bear in mind that the Java memory settings might not be the best options for your setup, but this is what I have done to tune my installation to run more smoothly.


I use Veeam for backups of my environment, and it generates quite an amount of load on the vCenter Server VM, so give it plenty of resources.