How does VMware vSphere 5.5 compare to the competition?
Yesterday VMware released vSphere 5.5, which includes many new features and enhancements, again raising the bar for the competition.
But how does VMware vSphere 5.5 compare to Microsoft Windows Server 2012 Hyper-V, Citrix XenServer 6.2 or RedHat RHEV 3.2? Check out our new Enterprise Hypervisor comparison, in which I added the new vSphere 5.5 features and enhancements.
As I said earlier this week, VMware memory management is still a topic that many VMware administrators don't understand.
On Tuesday I discussed the virtual machine memory allocation graphs. Today we will look at when VMware vSphere uses transparent page sharing (TPS), memory compression, host swapping and ballooning.
VMware ESXi, a crucial component of VMware vSphere 5.0, is a hypervisor designed to efficiently manage hardware resources, including CPU, memory, storage and network, among multiple concurrent virtual machines. In this article I will describe the basic memory management concepts in VMware ESXi and the performance impact of these techniques.
ESXi uses several innovative techniques to reclaim virtual machine memory, which are:
Transparent page sharing (TPS)—reclaims memory by removing redundant pages with identical content;
Ballooning—reclaims memory by artificially increasing the memory pressure inside the guest;
Hypervisor swapping—reclaims memory by having ESXi directly swap out the virtual machine’s memory;
Memory compression—reclaims memory by compressing the pages that need to be swapped out.
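As a rough mental model, these techniques form an escalation ladder tied to the host's free-memory state. The Python sketch below is my own simplification, not VMware's implementation; the percentage thresholds approximate the documented free-memory states of ESXi 4.1/5.0 (roughly 6%, 4% and 2% of host memory for the soft, hard and low states), and the real values depend on the version and host size.

```python
def reclamation_actions(free_pct):
    """Which reclamation techniques the host engages at a given
    percentage of free machine memory (approximate thresholds)."""
    actions = ["transparent page sharing"]  # TPS runs opportunistically at all times
    if free_pct < 6:   # 'soft' state: ask the guest balloon driver for memory
        actions.append("ballooning")
    if free_pct < 4:   # 'hard' state: compress pages, swap what won't compress
        actions += ["memory compression", "hypervisor swapping"]
    if free_pct < 2:   # 'low' state: additionally block new allocations
        actions.append("block allocations")
    return actions

print(reclamation_actions(5))  # under light pressure: TPS plus ballooning
```

This is why removing the balloon driver is such a bad idea: without it, the host skips straight from the cheap techniques to the expensive ones (compression and swapping).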
VMware memory management is still a topic that many VMware administrators don't understand. I often come across people who have no idea when VMware vSphere uses transparent page sharing (TPS), memory compression, host swapping or ballooning. They even mention disabling or removing the ballooning driver without knowing why. I also meet a lot of VMware administrators who have trouble explaining the virtual machine memory allocation graphs.
Let’s start with the last one.
We all know the nice graphs with all the different colors, nine different memory classifications, reservations and limits.
Back by popular demand, the Enterprise Hypervisor feature comparison.
After the release of our latest comparison I received a lot of requests to add RedHat's RHEV to the comparison. Although I've never encountered it in enterprise environments, I decided to add it as a service to our readers.
I based the RedHat features on version 3.1, which is currently in beta, because I have limited hands-on knowledge of the product and one of our readers sent me an updated comparison based on that version.
Two weeks ago VMware released the new version of their vSphere hypervisor, so it's time to update our Enterprise Hypervisor comparison. It is very impressive to see how quickly VMware has reacted to the Hyper-V 3 announcements and taken most of the wind out of Microsoft's sails.
I hope you find the new Enterprise Hypervisor comparison useful, and feel free to contact us if you have feedback to improve the list.
The information on Microsoft Windows Server 2012 Hyper-V features is very inconsistent; there are many different values out there.
In this version I added 10 new criteria. Many of these cover features that, in my opinion, should be available in any hypervisor suitable for enterprise environments.
During the last few years we published several Enterprise Hypervisor comparisons and we got very positive comments and feedback on it. With the release of vSphere 5, XenServer 6 and a service pack for Hyper-V it was time for an update.
It is very interesting to see how some of the products have improved over the years and how the three major manufacturers watch each other and copy features. But you can't judge a product by a simple green checkbox alone. Some claimed features need third-party add-ons, aren't suitable for production workloads or are only supported on a limited set of operating systems. You have to investigate further, and I hope I've done most of that work for you with this new enterprise hypervisor comparison.
Last year we published an Enterprise Hypervisor comparison and we got very positive comments and feedback on it.
During the last few weeks I received many update requests, so I decided to update the old hypervisor comparison, but this time I changed the setup a bit.
No beta or pre-release versions are used. In the last document we also compared the Hyper-V R2 beta, which hadn't been officially released;
This time all software is generally available and no features are subject to change during beta testing;
The versions used are the platinum/ultimate/fully-featured editions of the hypervisors; product features can be limited in lower-tier licenses;
No free versions have been used in this comparison.
VMware ALERT: VMware View Composer 2.0.x is not supported in a vSphere vCenter Server 4.1
There was an issue discovered earlier today that prevents View Composer from working with vSphere 4.1.
Because of that, VMware View Composer 2.0.x is not supported in a vSphere vCenter Server 4.1 managed environment: vCenter Server 4.1 requires a 64-bit operating system, while VMware View Composer does not support 64-bit operating systems.
VMware View 4.0.x customers who use View Composer should not upgrade to vSphere vCenter Server 4.1 at this time. The upcoming VMware View 4.5 will be supported on VMware vSphere 4.1.
With yesterday's release of vSphere 4.1 comes the challenge of upgrading your existing installation to this new version. Because I have been testing the beta for a while now, I couldn't wait to try it in our new testing environment.
However, there are a few caveats:
VMware released a KB article with the supported upgrade methods for ESX(i) 3.0.x, 3.5 and 4 (full, embedded or installable);
Do NOT upgrade vCenter server to version 4.1 if you are using VMware View Composer 2.0.x. Check out this VMware KB article for more information.
Before you start the upgrade process, back up the vCenter and Update Manager databases.
A few minutes ago VMware released the new version of VMware vSphere, version 4.1.
This new vSphere version contains 150 new features and has improved scalability, memory management, DRS, etc.
Besides all the new features, the biggest news is that vSphere 4.1 is the last release that includes an ESX edition (with service console). From the next release onward there will only be two editions: ESXi embedded and ESXi installable.
Below you will find a detailed list of features that are included with the vSphere 4.1 release:
Wide-VM NUMA, improving scheduling for virtual machines that span multiple NUMA nodes;
Storage I/O can be shaped by I/O shares and limits through the new Storage I/O Control quality of service (QoS) feature;
Network I/O can be partitioned through a new QoS engine that distinguishes between virtual machine, vMotion, Fault Tolerance (FT) and IP storage traffic;
Memory compression allows RAM pages to be compressed instead of swapped to disk, improving virtual machine performance;
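The memory compression trade-off is easy to picture: a page headed for the swap file is compressed first, and is kept in the in-memory compression cache only if it shrinks enough (the documented rule of thumb is roughly half a page or better); otherwise swapping it to disk is the better deal. A minimal Python sketch, using zlib as a stand-in for the hypervisor's compression algorithm:

```python
import zlib

PAGE_SIZE = 4096  # bytes in a guest memory page

def reclaim_page(page: bytes):
    """Compress a page before swapping it out; keep it in the
    compression cache only if it shrinks to half a page or less."""
    assert len(page) == PAGE_SIZE
    compressed = zlib.compress(page)
    if len(compressed) <= PAGE_SIZE // 2:
        return ("compression cache", compressed)
    return ("swap to disk", page)

# A zero-filled page compresses extremely well, so it stays in memory;
# a page of random data would not compress and would be swapped instead.
print(reclaim_page(bytes(PAGE_SIZE))[0])  # → compression cache
```

The win is latency: decompressing a cached page on a future access is orders of magnitude faster than reading it back from a swap file on disk.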
During the last month I have been very busy building a new infrastructure at a client site. I'm responsible for the overall technical solution and its basis, a VMware vSphere infrastructure built on five Dell PowerEdge R805s, Dell EqualLogic PS5000 and PS6000 storage, and Cisco switches for LAN, DMZ and IP storage networking.
Just before the customer started their functional test period we discovered that overall Windows network performance was slow. We ran several tests, like copying an 8 GB file from local VMDK to local VMDK and from VM to VM, and found that storage performance was not the issue but network performance was very slow.
In the last few years that I have been working with virtualization I have always been a fan of a static network configuration. Meaning: when I configure ESX networking I like my network interfaces and physical switch ports to be set to 1000 Mb/s full duplex if the switch/network interface combination allows it. The idea is that if you purchase gigabit network interfaces and switches, you know the maximum speed. So you configure them to run at maximum capacity, eliminating overhead and using as much bandwidth as possible purely for data transfer.
So when we experienced slow network performance I had a colleague check the Cisco LAN switches for errors, drops, packet loss or any other flaw which might indicate a speed or duplex mismatch. None were found so I assumed that the network configuration was not the issue. But as we know by now, ‘Assumption is the mother of all fuck-ups!‘.
Based on real-life results from virtualizing XenApp, I thought it was about time to summarize some of the best practices for virtualizing XenApp servers.
Why DO we want to virtualize XenApp?
For server consolidation: vSphere enables scaling up XenApp deployments;
For mixing server editions: 32-bit and 64-bit XenApp VMs can coexist;
For management: better management through flexibility and isolation (think of change management and VMware DRS);
For high availability and disaster recovery: VMware HA and vCenter Site Recovery Manager;
For lower costs: server hardware, maintenance contracts, power, cooling, floor and rack space.
Virtualizing XenApp servers is very complex. There are many more layers involved: the type of hardware, the capabilities of the processor, the performance of the shared storage, the hypervisor used, the specific settings per hypervisor, the operating system settings in a virtual environment, the XenApp settings in a virtual environment, the workspace management settings in a virtual environment, and so on.
In the following sections I tried to summarize some of the best practices we use in our projects:
At the end of May of this year we wrote an article concerning hypervisor comparisons and we got a lot of positive feedback on it. The downside is that people want an update as soon as one of the companies launches a new version of its product, and who can blame them. However, the issue is that this takes a lot of research and, because of that, a lot of time. And because two of us are ill and in bed wearing a sombrero and the other two are extremely busy, we simply don't have that time right now.
It's not as extensive as the Enterprise hypervisor comparison we did earlier, but it gives you a good picture of how both products relate to each other. To complete the picture I added a list of supported operating systems.