How does VMware vSphere 5.5 compare to the competition?

Yesterday VMware released vSphere 5.5, which includes many new features and enhancements, again raising the bar for the competition.

But how does VMware vSphere 5.5 compare to Microsoft Windows Server 2012 Hyper-V, Citrix XenServer 6.2 or Red Hat RHEV 3.2? Check out our new Enterprise Hypervisor comparison, in which I added the new vSphere 5.5 features and enhancements.

vSphere 5 memory management explained (part 2)

As I said earlier this week, VMware memory management is still a topic which a lot of VMware administrators don’t understand.

Tuesday I discussed the virtual machine memory allocation graphs. Today we will deal with when VMware vSphere uses transparent page sharing (TPS), memory compression, host swapping and ballooning.

VMware ESXi, a crucial component of VMware vSphere 5.0, is a hypervisor designed to efficiently manage hardware resources, including CPU, memory, storage and network, among multiple concurrent virtual machines. In this article I will describe the basic memory management concepts in VMware ESXi and the performance impact of these techniques.

ESXi uses several innovative techniques to reclaim virtual machine memory:

  • Transparent page sharing (TPS)—reclaims memory by removing redundant pages with identical content;
  • Ballooning—reclaims memory by artificially increasing the memory pressure inside the guest;
  • Hypervisor swapping—reclaims memory by having ESXi directly swap out the virtual machine’s memory;
  • Memory compression—reclaims memory by compressing the pages that need to be swapped out.

So how does it work?
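To make the first technique more concrete, here is a toy Python sketch of the idea behind transparent page sharing. This is my own illustration, not VMware's implementation: ESXi hashes candidate pages and verifies matches bit by bit before sharing, while this sketch only models the bookkeeping of backing identical guest pages with a single copy.

```python
import hashlib

def share_pages(pages):
    """Toy model of transparent page sharing: identical guest pages
    (e.g. 4 KB zero pages) are collapsed to a single backing copy.
    Not VMware's algorithm -- just an illustration of the idea."""
    store = {}    # content hash -> the single shared copy
    mapping = []  # per guest page: which backing copy it points to
    for page in pages:
        key = hashlib.sha256(page).hexdigest()
        store.setdefault(key, page)
        mapping.append(key)
    return store, mapping

# Three guest pages, two of which are identical zero pages
pages = [b"\x00" * 4096, b"\x00" * 4096, b"\x41" * 4096]
store, mapping = share_pages(pages)
print(len(pages), "guest pages backed by", len(store), "machine pages")
```

The memory saved grows with the number of identical pages across all virtual machines on the host, which is why TPS pays off most with many similar guests.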


vSphere 5 memory management explained (part 1)

VMware memory management is still a topic which a lot of VMware administrators don’t understand. I often come across people who have no idea when VMware vSphere uses transparent page sharing (TPS), memory compression, host swapping or ballooning. They even mention disabling or removing the ballooning driver without knowing why.  I also meet a lot of VMware administrators having trouble explaining the virtual machine memory allocation graphs.

Let’s start with the last one.

We all know the nice graphs with all the different colors: nine memory classifications plus reservations and limits.

This screen shows the following values:

Host memory

  • Consumed memory;
  • Overhead consumption;

Guest memory

  • Private memory;
  • Shared memory;
  • Swapped memory;
  • Compressed memory;
  • Ballooned memory;
  • Unaccessed memory;
  • Active memory.
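As a rough illustration of how these classifications can relate to the host memory values, here is a toy Python model of "consumed" memory. This is my own simplification, not VMware's exact accounting: private pages are charged in full, a page shared by n virtual machines is charged at 1/n to each of them, and the virtualization overhead is added on top.

```python
def consumed_host_memory(private_mb, shared_mb, sharing_ratio, overhead_mb):
    """Toy model of per-VM 'consumed' host memory (a simplification,
    not VMware's exact accounting): private memory is charged in full,
    shared memory is charged proportionally (1/n when n VMs share it),
    and overhead consumption is added."""
    return private_mb + shared_mb / sharing_ratio + overhead_mb

# Hypothetical VM: 1024 MB private, 512 MB shared across 4 VMs,
# 90 MB overhead -> 1024 + 128 + 90 = 1242 MB consumed
print(consumed_host_memory(1024, 512, 4, 90))
```

This also shows why "consumed" can be far below the configured memory size: sharing, ballooning, swapping and compression all reduce what the host actually has to back with machine memory.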


Enterprise Hypervisor feature comparison (RHEV added)

Back by popular demand, the Enterprise Hypervisor feature comparison.

After the release of our latest comparison I received a lot of requests to include Red Hat's RHEV in the comparison. Although I've never encountered it in enterprise environments, I decided to add it as a service to our readers.

I based the Red Hat features on version 3.1, which is in beta right now. I did this because I have limited knowledge of the product and one of our readers sent me an updated comparison based on this version.

I hope you find the new Enterprise Hypervisor comparison useful, and feel free to contact us if you have feedback to improve the list.

Demo of the new VMware vSphere Multi-Hypervisor Management

VMworld 2012 San Francisco / Barcelona



As we heard in the keynote this morning you can now manage multiple hypervisors with VMware vCenter.
Eric Sloof visited the VMware booth and got a demo.

NEW Enterprise Hypervisor comparison

Two weeks ago VMware released the new version of their vSphere hypervisor, so it's time to update our Enterprise Hypervisor comparison. It's very impressive to see how quickly VMware has reacted to the Hyper-V 3 announcements and taken most of the wind out of Microsoft's sails.

I hope you find the new Enterprise Hypervisor comparison useful, and feel free to contact us if you have feedback to improve the list.
The information on Microsoft Windows Server 2012 Hyper-V features is very inconsistent; there are many different values out there.

In this version I added 10 new criteria. Many of these criteria should, in my opinion, be available in hypervisors suitable for enterprise environments.

You can find the new and improved Enterprise Hypervisor comparison here.

Last update: August 27th, 2013

VMware vSphere 5.1 available NOW

At VMworld 2012 VMware announced the newest version of vSphere, version 5.1.
Today this new version is finally available! So, in DOLBY DIGITAL available NOW ;-)

You can download VMware vSphere 5.1 from the VMware download site.

If you’re wondering what’s new in vSphere 5.1, check out Alex’s article from August 27th.

One of the major changes is the disappearance of the vRAM limit for VMware vSphere 5.x.

Updated Enterprise hypervisor comparison

During the last few years we published several Enterprise Hypervisor comparisons and we got very positive comments and feedback on it. With the release of vSphere 5, XenServer 6 and a service pack for Hyper-V it was time for an update.

It's very interesting to see how some of the products have improved over the years and how the three major manufacturers watch each other and copy features. But you can't judge all manufacturers by a simple green checkbox. Some claimed features need third-party add-ons, aren't suitable for production workloads or are only supported on a limited set of operating systems. You have to investigate further, and I hope I've done most of that work for you with this new enterprise hypervisor comparison.


New vSphere client for iPad released



Today at VMworld Europe 2011 Partner day, VMware released a new version of their vSphere client for the iPad. Just in time for the real start of VMworld Europe 2011 in Copenhagen.

New in version 1.2 of the vSphere client for the iPad is:

  • vMotion. The feature is available via Host & VM action menus. Virtual machines can also be two-finger flicked/dragged from the Host detail view to enter vMotion mode;
  • Ability to email vMotion validation error details to others;
  • View task progress reporting on VM cards;
  • Ability to refresh vCenter host list;
  • Support for ESX 3.5;
  • Support for VMware vSphere 5.0.

Of course the vSphere client for iPad requires iOS 4.0 and vCMA, also version 1.2 in this case.

VMware vSphere 4.1 and View 4.5 launch news

Last Thursday we attended the VMware vSphere 4.1, vCenter 4.1 and View 4.5 launch event at Amerongen (NL).

We already brought you all the news regarding vSphere and vCenter 4.1 and View 4.5, but we heard some interesting things we would like to share with you.

VMware View 4.5

With VMware View 4.5, VMware has changed the names of some product-related features. Let us welcome Local Mode, persistent disks, and dedicated and floating pools.

View Client with Offline Desktop -> View Client with Local Mode
User Data Disk -> Persistent Disk
Persistent Desktop pool -> Dedicated Desktop pool
Non-persistent Desktop pool -> Floating Desktop pool


New Enterprise Hypervisor comparison


Last year we published an Enterprise Hypervisor comparison and we got very positive comments and feedback on it.

During the last few weeks I received many update requests so I decided to update the old hypervisor comparison but this time I changed the setup a bit.


  • No beta or pre-release versions are used. In the last document we also compared the Hyper-V R2 beta, which wasn't officially released.
    This time all software is generally available and no features are subject to change due to beta testing, etc.;
  • The versions used are the platinum/ultimate/fully-featured versions of the hypervisors. Product features can be limited by lower license versions;
  • No free versions have been used in this comparison.


VMware ALERT: VMware View Composer 2.0.x is not supported in a vSphere vCenter Server 4.1

There was an issue discovered earlier today that prevents View Composer from working with vSphere 4.1.

Because of that, VMware View Composer 2.0.x is not supported in a vSphere vCenter Server 4.1 managed environment: vCenter Server 4.1 requires a 64-bit operating system, and VMware View Composer does not support 64-bit operating systems.

VMware View 4.0.x customers who use View Composer should not upgrade to vSphere vCenter Server 4.1 at this time. The upcoming VMware View 4.5 will be supported on VMware vSphere 4.1.

Check out this VMware KB article for more information.

VMware apologizes for any inconvenience this may have caused you. If you know how to spread the word to your friends and colleagues, please do so.

How to: Upgrade to vSphere 4.1

With yesterday's release of vSphere 4.1 comes the challenge of upgrading your existing installation to this new version. Because I have been testing the beta for a while now, I couldn't wait to try it in our new testing environment.


However, there are a few caveats:

  • VMware released a KB article with the supported upgrade methods for ESX(i) 3.0.x, 3.5 and 4 full, embedded or installable;
  • Do NOT upgrade vCenter server to version 4.1 if you are using VMware View Composer 2.0.x. Check out this VMware KB article for more information.

Before you start the upgrade process, back up the vCenter and Update Manager databases.

After downloading the needed ISOs, I started with the upgrade of the vCenter server.

But first of all, I had to uninstall all incompatible vCenter components, in this case Guided Consolidation 4.0.

When this is done, it's time to update the vCenter server.



VMware vSphere 4.1 released

A few minutes ago VMware released the new version of VMware vSphere, version 4.1.

This new vSphere version contains 150 new features and has improved scalability, memory management, DRS, etc.

Besides all the new features, the biggest news is that vSphere 4.1 is the last version that will be available as ESX (with service console). From the next version onward there will only be two editions: ESXi embedded and ESXi installable.

Below you will find a detailed list of features that are included with the vSphere 4.1 release:

  • Scalable vMotion;
  • Wide VM NUMA;
  • Storage I/O can be shaped by I/O shares and limits through the new Storage I/O Control quality of service (QoS) feature;
  • Network I/O can be partitioned through a new QoS engine that distinguishes between virtual machine, vMotion, Fault Tolerance (FT) and IP storage traffic;
  • Memory compression allows RAM pages to be compressed instead of swapped to disk, improving virtual machine performance;
  • Distributed Resource Scheduling (DRS) can now follow affinity rules defining a subset of hosts where a virtual machine can be placed;
  • Virtual sockets can now have multiple virtual CPUs. Each virtual CPU will appear as a single core in the guest operating system;
  • vCenter is supported on 64-bit operating systems only;

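The Storage I/O Control and Network I/O QoS features in the list above are, at heart, proportional-share schedulers. As a toy illustration of that idea (my own sketch, not VMware's algorithm), here is a Python allocator that divides bandwidth according to shares while honoring per-VM limits:

```python
def allocate_bandwidth(total, vms):
    """Toy proportional-share allocator with limits, illustrating the
    idea behind shares/limits QoS (not VMware's implementation).
    vms: dict of name -> (shares, limit-or-None).
    Returns dict of name -> allocated bandwidth."""
    alloc = {name: 0.0 for name in vms}
    active = dict(vms)          # VMs not yet capped by their limit
    remaining = float(total)
    while active and remaining > 1e-9:
        total_shares = sum(shares for shares, _ in active.values())
        spent, capped = 0.0, []
        for name, (shares, limit) in active.items():
            want = remaining * shares / total_shares
            if limit is not None and alloc[name] + want >= limit:
                spent += limit - alloc[name]   # cap at the limit...
                alloc[name] = limit
                capped.append(name)
            else:
                alloc[name] += want
                spent += want
        for name in capped:
            del active[name]    # ...and redistribute the surplus
        remaining -= spent
        if not capped:
            break               # nobody hit a limit; all distributed
    return alloc

# 1000 units shared 2:1:1; VM "c" is limited to 100, so its surplus
# is redistributed to "a" and "b" in proportion to their shares
print(allocate_bandwidth(1000, {"a": (2000, None),
                                "b": (1000, None),
                                "c": (1000, 100)}))
```

The key property is work conservation: whatever a limited VM cannot use flows back to the unconstrained VMs, still divided by their shares.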

vSphere network troubleshooting

During the last month I have been very busy building a new infrastructure at a client site. I'm responsible for the overall technical solution and its basis, a VMware vSphere infrastructure built on five Dell PowerEdge R805s, Dell EqualLogic PS5000 and PS6000 storage, and Cisco switches for LAN, DMZ and IP storage networking.

Just before the customer initiated their functional test period, we discovered that the overall Windows network performance was slow. We ran several tests, like copying an 8 GB file from local VMDK to local VMDK and from VM to VM, and found that storage performance was not the issue, but network performance was very slow.

In the last few years that I have been working with virtualization I have always been a fan of a static network configuration. Meaning, when I configure ESX networking I like my network interfaces and physical switch ports to be configured at 1000 Mbit full duplex if the switch/network interface combination allows it. The idea is that if you purchase gigabit network interfaces and switches, you know the maximum speeds. So you configure everything to run at its maximum capacity, eliminating overhead and using as much bandwidth as possible purely for data transfer.

So when we experienced slow network performance I had a colleague check the Cisco LAN switches for errors, drops, packet loss or any other flaw which might indicate a speed or duplex mismatch. None were found, so I assumed that the network configuration was not the issue. But as we know by now, ‘Assumption is the mother of all fuck-ups!’


vSphere 4: 9 months later

On May 21st VMware released their new flagship product, VMware vSphere 4, which should bring us tons of new features and performance improvements.

But how is the vSphere experience almost 9 months later?

Starting with the installation and setup experience, my personal experience with vSphere is very good. During the installation and setup of VMware ESX or ESXi 3.x I experienced a lot of issues, like BIOS settings causing HA issues, HA issues when changing ESX IP addresses, problems with VMware Update Manager and faulty HP USB sticks. We even created an HA checklist for you to easily address HA issues.

Once up and running, ESX(i) 3.x ran fine with the occasional HA error, which 99% of the time could be fixed by reconfiguring HA from Virtual Center.

Now with vSphere the installation and setup is simple, error-free and straightforward. Set up HA in the cluster properties, wait for all progress indicators to reach 100% and you're done.


Best practices XenApp on vSphere

Based on real-life results from virtualizing XenApp, I thought it was about time to summarize some of the best practices for virtualizing XenApp servers.

Why DO we want to virtualize XenApp?

  1. For server consolidation: vSphere enables scaling up XenApp deployments;
  2. For mixing server editions: 32-bit and 64-bit XenApp VMs can coexist;
  3. For management: better management through flexibility and isolation; think of change management and VMware DRS;
  4. For high availability and disaster recovery: VMware HA and vCenter Site Recovery Manager;
  5. For lower costs for server hardware, maintenance contracts, power, cooling, floor space and rack space.

Virtualizing XenApp servers is very complex. There are a lot more layers involved, like the type of hardware, the capabilities of the processor, the performance of the shared storage, the hypervisor used, the specific settings per hypervisor, operating system settings in a virtual environment, XenApp settings in a virtual environment, workspace management settings in a virtual environment, etc.

In the following sections I have tried to summarize some of the best practices we use in our projects:


Hyper-V R2 vs vSphere: A feature comparison

At the end of May this year we wrote an article comparing hypervisors, and we got a lot of positive feedback on it. The downside is that people want an update as soon as one of the companies launches a new version of its product, and who can blame them? The issue, however, is that this takes a lot of research and, because of that, a lot of time. And because two of us are ill and in bed wearing a sombrero ;-) and the other two are extremely busy, we simply don't have that time right now.

However, Scott Lowe has written an excellent article on the feature comparison between VMware vSphere 4 and Microsoft's Hyper-V R2, which is a must-read for everybody who's advising customers on hypervisors.

It's not as extensive as the Enterprise Hypervisor comparison we did earlier, but it gives you a good picture of how the two products relate to each other. To complete the picture, I added a list of supported operating systems.