VMware announces VMware Horizon 6

“VMware to the power of 6.” That’s how, a few minutes ago, VMware announced VMware Horizon 6, the next step in VMware’s VDI solutions. It is an integrated solution that delivers published applications and desktops on a single platform. Horizon 6 is a complete desktop solution with centralized management of any type of enterprise application and desktop, including physical desktops and laptops, virtual desktops and applications, and employee-owned PCs.

With capabilities spanning application delivery, datacenter-to-device management, storage optimization and flexible hybrid delivery, VMware Horizon 6 makes enterprise desktops and applications easier and more cost-effective to deliver, protect and manage. Multiple access points such as laptops, tablets, smartphones and an array of other employee-owned devices are putting pressure on IT departments to deliver a high level of service and access without compromise.

VMware Horizon 6 introduces new capabilities, integrated into a single solution, that empower IT with a streamlined approach for managing Windows applications and desktops. With Horizon 6, enterprise applications and Windows operating systems are centrally managed so updates can be made in an agile and predictable manner. In addition, Horizon 6 enables entire desktops or just applications to be delivered in a flexible manner to end-users.

horizon6a.png

New capabilities in VMware Horizon 6 include:

  • Published applications and virtual desktops delivered through a single platform
    VMware Horizon 6 offers streamlined management, end-user entitlement, and quick delivery of published Windows applications, RDS-based desktops and virtual desktops across devices and locations. The new capabilities are built on a single platform that is an extension of VMware Horizon View.
  • A unified workspace for simplified access
    With VMware Horizon 6, end-users can access all applications and desktops from a single unified workspace. The unified workspace supports the delivery of virtualized applications hosted in the datacenter or locally on the device, web and SaaS applications, RDS-hosted applications, and published applications from third-party platforms, such as Citrix XenApp, with a single sign-on experience.
  • Storage Optimization with VMware Virtual SAN and delivery from the Software-Defined Data Center
    VMware Horizon 6 is optimized for the Software-Defined Data Center. The solution provides integrated management of VMware Virtual SAN, which can significantly reduce the cost of storage for virtual desktops by using local storage. With this innovation, the capital cost of virtual desktops with Horizon 6 can be similar to that of physical desktops.

vdi-costs-horizon-6.png

  • Closed-Loop Management and Automation
    VMware Horizon 6 offers new capabilities for end-to-end visibility and automation from datacenter-to-device. The new VMware vCenter Operations for View provides health and risk monitoring, proactive end-user experience monitoring and deep diagnostics from datacenter-to-device all within a single console. Horizon 6 also supports automation and self-service, allowing IT to provide line-of-business users with the ability to request desktops and applications by using built-in workflows and automated infrastructure provisioning. This closed-loop management and automation is integrated with the vCloud Automation Center management console, making it easier for customers with vCloud Suite to get started with Horizon 6.
  • Central image management of virtual, physical and employee-owned PCs
    VMware offers centralized image management for virtual, physical and employee-owned PCs from a single, integrated solution. Using the updated VMware Mirage, IT administrators can design a single desktop with the required operating system and applications, and deliver it to end-users in a department or entire organization based on end-user needs.
  • Hybrid Cloud Delivery
    VMware Horizon 6 introduces a new client that seamlessly connects to virtual desktops and applications running in an on-premises cloud, a service provider partner cloud, or through VMware vCloud Hybrid Service, with the same high-performance end-user experience. This flexibility gives customers the ability to deploy Horizon 6 via the hybrid cloud, balancing between business-owned and public cloud-based infrastructure to best satisfy their needs.

Three new editions of VMware Horizon will be available to customers:

  • Horizon Standard Edition: Delivers simple, high-performance, VDI-based virtual desktops with a great user experience.
  • Horizon Advanced Edition: Offers the lowest cost solution for published and virtual applications and desktops using optimized storage from VMware Virtual SAN, central image management and a unified workspace for managing and delivering all applications and desktops.
  • Horizon Enterprise Edition: Delivers a cloud-ready solution for desktops and applications with advanced cloud-like automation and management capabilities for hybrid cloud flexibility.

horizon6c.png

VMware Horizon 6 is expected to be available in Q2 2014 and is licensed per named user or per concurrent user with prices starting at $250. For more information, visit the VMware Horizon product page.

VMware vExpert 2014!

VMGuru is very proud to announce that, for the sixth year in a row, we can put the VMware vExpert logo on our site.

Alex, Edwin and I have been awarded the vExpert 2014 award for our contributions to the VMware virtualization community. This is an acknowledgement of our work, and we will continue to share our knowledge and expertise with others.

vExpert2013.png

The vExpert program is a way for VMware to acknowledge and help those who ‘go the extra mile’ and give back to the VMware user community by sharing their expertise and time. vExperts are bloggers, book authors, VMUG leaders, event organizers, speakers, tool builders, forum leaders, and others who share their virtualization expertise.

This year there are 754 vExperts worldwide. I’m proud, humbled and honored to be included and I’m looking forward to another great year!

Congratulations to all fellow vExperts!

Special thanks go out to John Troyer who, despite having to endure our abuse, put a great deal of time into the vExpert program. Thanks John!

VMware Horizon View graphics – vSGA vs vDGA

VDI_Nvidia.png

As VDI solutions become more and more mainstream for standard office environments, a new challenge appears. We can swap the bulky desktop for a small thin client, delivering Windows desktops from the datacenter and enabling a more flexible way of working, with free seating and working from home. But what about those ‘special’ users who always get a ‘special’ workstation with high-end graphics cards and a dual-monitor setup, working with AutoCAD, Bentley MicroStation, etc.? Can we deliver the high-end graphics these applications need to a VDI desktop, extending the VDI benefits to these users as well?

I’ve been researching the possibilities for these users because one of our larger customers would like to do just that. In this post I would like to share some of the results.

Most VDI solutions on the market today, like VMware Horizon View and Citrix XenDesktop, offer advanced 3D capabilities. VMware was the first company to virtualize 3D graphics, with VMware Workstation and Fusion, and with vSphere 5.1 it introduced this 3D technology to vSphere for VDI use cases. Since 2011 VMware has been working closely with Nvidia to deliver high-end virtual workstations with 3D graphics support in VMware Horizon View, using Nvidia’s Quadro graphics adapters.

When VMware released View 5.0, they introduced the SVGA driver and software 3D rendering, which was a huge improvement for VDI graphics and a boost for VDI utilization. With the release of VMware Horizon View 5.2, VMware announced two new graphics features: vSGA and vDGA.

CPU Rendering

VMware introduced software 3D rendering in View 5.0 primarily to enable Windows Aero desktops and applications requiring 3D without a physical GPU. The main advantage of software 3D rendering is that it can run on any server hardware; no special graphics cards are required. Because software 3D rendering is in essence CPU rendering [duh], this graphics mode impacts the VDI density on a server. There are no specific benchmarks on software 3D rendering, but because rendering is done on a (shared) CPU and not on a dedicated GPU, it is not suitable for really high-end graphical applications.

vSGA

vSGA.jpg

Virtual Shared Graphics Acceleration (vSGA) uses the physical GPUs installed locally in each vSphere host to provide hardware-accelerated 3D graphics to virtual desktops. vSGA was introduced in VMware Horizon View 5.2 and offers high-performance graphics with maximum compatibility and portability. With this feature we can now offer VDI to some of those users with a ‘special’ workstation with high-end graphics cards and a dual-monitor setup, working with AutoCAD, Bentley MicroStation, etc.

vSGA lets you provision multiple VDI desktops to one or more GPUs. The graphics card is presented to the VDI virtual machine through the VMware SVGA 3D graphics driver, and the graphics processing is handled by an ESXi driver. The VMware SVGA 3D graphics driver is supported on Windows 7 and 8 virtual desktops for 2D and 3D, is used for both software 3D rendering and vSGA, and provides support for DirectX v9 and OpenGL 2.1 applications. Graphics resources are reserved on a first-come, first-served basis, so sizing and capacity are important to consider. vSGA is a great solution for users that require higher-than-normal graphics capabilities: rendering 1080p video, OpenGL, DirectX, etc.

Because the VMware SVGA 3D driver is used for both software 2D/3D rendering and vSGA, a VDI virtual machine can dynamically switch between software and hardware acceleration without the need to reconfigure the virtual machine. This even allows vMotion while providing hardware-accelerated graphics using vSGA.
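
To give an idea of what this looks like at the virtual machine level, here is a minimal sketch of the 3D-related .vmx entries for such a desktop. The values are illustrative; in practice these settings are managed through the Horizon View pool configuration rather than edited by hand:

# Sketch of 3D-related .vmx settings (illustrative values; normally
# managed through the Horizon View pool configuration):
mks.enable3d = "TRUE"          # enable the VMware SVGA 3D device
svga.vramSize = "134217728"    # 128 MB of video memory for this desktop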

vSGA supported graphics adapters (03-2014):

  • Nvidia GRID K1
  • Nvidia GRID K2
  • Nvidia Quadro 4000
  • Nvidia Quadro 5000
  • Nvidia Quadro 6000
  • Nvidia Tesla M2070Q

(Note the ‘missing K’ on the Quadro adapters: the Nvidia Quadro K4000, K5000 and K6000 are not supported for vSGA.)

vDGA

vDGA.jpg

Virtual Direct Graphics Acceleration (vDGA) was introduced with VMware Horizon View 5.2 as a Tech Preview and is fully supported with VMware Horizon View 5.3. vDGA delivers true high-end, workstation-class 3D graphics for use cases where a dedicated GPU is needed and offers a true graphical workstation replacement for high-performance GPU computing. Assigning a dedicated Nvidia GPU to the VDI virtual machine reserves the entire GPU for that desktop and enables CUDA and OpenCL compute. vDGA-supported graphics adapters are physically installed in the vSphere host and are assigned to virtual machines using VMware DirectPath I/O.

The number of VDI virtual machines per host is limited to the number of Nvidia graphics adapters in a vSphere host.

Because VMware DirectPath I/O is used, vMotion, DRS and HA are not supported with vDGA. Besides that, vDGA uses the Nvidia graphics driver instead of the VMware SVGA 3D driver, so a VDI virtual machine cannot dynamically switch between software and hardware acceleration. And last but not least, because of the nature of the vDGA/DirectPath I/O configuration, it is not a candidate for automated deployment using Horizon View Composer.
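
For reference, here is a sketch of what a vDGA assignment involves at the virtual machine level. This is illustrative only: the GPU itself is assigned as a DirectPath I/O device through the vSphere Client, and the exact entries vary per host and VM:

# Sketch of vDGA-related .vmx entries (illustrative; the GPU is assigned
# via the vSphere Client as a DirectPath I/O passthrough device):
pciPassthru0.present = "TRUE"   # the passed-through Nvidia GPU
sched.mem.min = "4096"          # DirectPath I/O requires a full memory reservation (MB)
pciHole.start = "2048"          # commonly needed for VMs with more than 2 GB of RAM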

vDGA supported graphics adapters (03-2014):

  • Nvidia GRID K1
  • Nvidia GRID K2
  • Nvidia Quadro K2000
  • Nvidia Quadro K4000
  • Nvidia Quadro K5000
  • Nvidia Quadro K6000
  • Nvidia Quadro 1000M
  • Nvidia Quadro 2000
  • Nvidia Quadro 3000M
  • Nvidia Quadro 4000
  • Nvidia Quadro 5000
  • Nvidia Quadro 6000
  • Nvidia Tesla M2070Q

Conclusion

vSGA and vDGA are great new features which bring 3D graphics and video to VMware Horizon View. They further expand the use cases and users that IT can service with VDI virtual desktops. In addition to expanding the target use cases, offering 3D capabilities gives users a more graphically rich experience on a VDI desktop. But there are some caveats to consider.

Using vDGA will give you a true graphical workstation replacement for high-performance GPU computing, but at a cost. Because of the dedicated pinning of VDI virtual machines to a GPU, the desktop density will be low, and vSphere and Horizon View features like HA, DRS, vMotion and linked clones cannot be used. I’ve seen and played with a vDGA setup at VMworld 2013 and I was really impressed: 3D gaming, AutoCAD, no problem!

vSGA will give you a better desktop density and you can still use VMware HA, DRS, vMotion and Horizon View Composer linked clones, but at another cost: ‘limited’ DirectX and OpenGL support and no CUDA support, yet far better graphics performance than software 3D rendering. vSGA is a great trade-off between software 3D rendering and vDGA.

So every solution has its pros and cons, but the real question is: will you notice? I would advise you to do a Proof of Concept (PoC) to find out which solution fits your needs.

For more information on hardware accelerated 3D graphics please refer to the Graphics Acceleration in VMware Horizon View Virtual Desktops white paper.

View Graphics.JPG

                     Software 3D rendering            vSGA                             vDGA
Use case             Task worker                      Knowledge worker / power user    Workstation user
Mode                 Software, shared                 Hardware, shared                 Hardware, dedicated
Dedicated hardware   No                               Yes                              Yes
Desktop density      Very high                        High                             Low
DirectX              No                               Yes (9 only)                     Yes (9, 10, 11)
OpenGL               No                               Yes (2.1 only)                   Yes (2.1, 3.x, 4.1x)
CUDA                 No                               No                               Yes
Video encode         No                               No                               Yes
Driver               VMware SVGA 3D graphics driver   VMware SVGA 3D graphics driver   Specific Nvidia client driver
vMotion              Yes                              Yes                              No
HA                   Yes                              Yes                              No
DRS                  Yes                              Yes                              No
Linked clones        Yes                              Yes                              No

VMware Virtual SAN (vSAN) is available now!

Today VMware announced the general availability of VMware Virtual SAN, a new and radically simple storage solution optimized for virtual environments. This was done during a VMware Virtual SAN online event, of which you can view the replay here. It includes a demonstration of the product, experiences of beta customers, and highlights of performance and scalability details.

For those of you who don’t know Virtual SAN: Virtual SAN is an object-based storage system and a platform for VM Storage Policies that aims to simplify virtual machine storage placement decisions for vSphere administrators. It leverages the local storage of a number of ESXi hosts which are part of a cluster and creates a distributed vsanDatastore. Virtual SAN is fully integrated with vSphere, so it can be used for VM placement, and of course supports all the core vSphere technologies like vMotion, DRS and vSphere HA.

vSAN scale.png

Scalability

VMware Virtual SAN scales up to 32 nodes in a cluster allowing for linear scalability of performance to 2 million IOPS on read-only workloads and 640,000 IOPS on mixed workloads. 

You will need at least three ESXi hosts to deploy Virtual SAN, plus at least one hard disk and at least one SSD per host. There are a couple of best practices I found online:

  1. VMware recommends at least a 1:10 ratio of SSD to HDD capacity.
    For example, with 1TB of HDD capacity per host, provision at least a 100GB SSD. When your performance demands increase, you may need to raise this ratio to 2:10 or 3:10.
  2. VMware recommends as a best practice that all hosts in the VSAN cluster be configured similarly if not identically from a storage and compute perspective.

The choice of SSD is essential to Virtual SAN performance. VMware provides an HCL which grades SSDs on performance.

Because you can vary the SSD-to-HDD ratio, you can simply scale a vSphere cluster with Virtual SAN for capacity or for performance.
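
Once a cluster is up, you can verify the Virtual SAN configuration from the ESXi shell. A quick sketch, assuming ESXi 5.5 Update 1, where the esxcli vsan namespace is available:

# Check the Virtual SAN cluster membership of this host
esxcli vsan cluster get

# List the local SSDs and HDDs claimed by Virtual SAN
esxcli vsan storage list

# Show which VMkernel interfaces carry Virtual SAN traffic
esxcli vsan network list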

Flexibly-Configure-for-Performance-and-Capacity.png

Versions & licensing

Staying true to the value proposition of simplicity, VMware uses a per-socket pricing model with no limits on scalability, performance or capacity, which makes forecasting and budgeting significantly easier without impacting hardware component selection and node configurations.

VMware Virtual SAN is available in three editions/bundles.

Virtual-SAN-5.5-Pricing-Packaging.png

All editions feature the complete set of Virtual SAN capabilities – data persistency, read/write caching, storage policy based management, etc. – and include the vSphere Distributed Switch. This means that customers can take advantage of simplified network management of vSphere Distributed Switch for their Virtual SAN storage regardless of the underlying vSphere edition they use. Data services such as snapshots, clones, linked-clones and replication are available directly through vSphere, and are already available with every vSphere edition (Essentials Plus and above).

For customers seeking to complete their storage solution with backup and recovery capabilities, VMware is offering Virtual SAN with Data Protection. A promotional bundle available for a limited time, it brings together Virtual SAN with vSphere Data Protection Advanced, VMware’s simple, efficient, and robust backup product for vSphere environments.

Virtual-SAN-Launch-Promotions.png

The VMware Virtual SAN Design and Sizing Guide can be downloaded here.

If you want to test-drive VMware Virtual SAN, you can visit the free Hands-on Lab (HOL), which lets you play and explore all you want.

vSphere 5.5 Update 1 which includes Virtual SAN can be downloaded here.

Please vote VMGuru.nl #1 vBlog 2014!

Vote.png

Last year VMGuru.nl finished #31 overall and #4 Independent blog in the top VMware/virtualization blogs of 2013. Now it is time again to vote for the top vBlogs of 2014.

Please help us to increase our score and vote VMGuru.nl

(Overall & Independent)

As in last year’s poll, you can vote in special categories to help distinguish certain types of blogs. The categories are independent of the general voting, so first pick and rank your top 10 overall favorite blogs and then choose your favorite blog in each category (Scripting, Storage, Podcast, Independent).

VMGuru.nl participates in both the Overall and Independent categories for 2014!

How to delete an orphaned desktop pool

please_recycle_by_fast_eddie.jpg

Time for a new problem in the VMware Horizon View series. After running into problems which forced me to ‘Manually delete protected Horizon View replicas‘ and ‘Link a VMware View desktop to its replica‘, I now encountered an orphaned desktop pool which could not be deleted.

First, what got me into this mess. As I told you last week, I was testing a Nvidia Quadro K5000 graphics card when my ESXi whitebox died on me. This also corrupted the one hard drive which contained all my Horizon View desktops. Fortunately the golden images resided on my NFS storage, so no harm done: just delete the pools, recreate them and we’re up and running again. Wrong! Because the VDI virtual machines were no longer present, I ended up with an orphaned desktop pool, similar to what you would get when deleting View virtual machines directly from the vCenter client.

When I tried to delete the desktop pools in the Horizon View Administrator I got an error stating internal problems with the Composer server or service.

Composer error.JPG

It’s not much to go on, but I checked the View Composer service, the Composer logs and the Windows domain membership, and I even reconfigured Composer in the Horizon View Administrator server settings. No success. Then I remembered manually deleting the protected Horizon View replicas, and I searched for orphaned desktop pools instead.

I found this VMware KB article: Manually deleting linked clones or stale virtual desktop entries from VMware View Manager and Horizon View (1008658)

This confirmed my suspicion that this had nothing to do with the Composer service but that it was caused by the disappearance of the View virtual machines due to the hard disk corruption. Much like you would get when deleting View virtual machines directly from the vCenter client instead of the proper way, in the Horizon View Administrator console.

To solve this problem and remove the bad entries to be able to delete the desktop pool I had to do the following:

  1. Open up vSphere and connect to vCenter.
  2. Open up the console for the Horizon View Connection Server.
  3. Connect to the Horizon View ADAM database:
    1. Click [Start > Administrative Tools > ADSI Edit].
    2. In the console window, right-click ADSI Edit and click [Connect to].
    3. In the Name field type: [View ADAM Database].
    4. Select [Select or type a Distinguished Name or Naming Context].
    5. In the field below, type [dc=vdi,dc=vmware,dc=int].
      (do not try to be smart and change these to match your own AD domain like I did. This is the distinguished name of the Horizon View ADAM database)
    6. Select [Select or type a domain or server].
    7. In the field below, type [localhost].
    8. Click [OK].
    9. Click [View ADAM Database] to expand.
    10. Click [DC=vdi,dc=vmware,dc=int] to expand.
  4. Locate the GUID of the virtual machine. To locate the GUID of the virtual machine:
    1. Right-click the Connection [View ADAM Database], and click [New > Query].
    2. Under Root of Search, click [Browse] and select the [Servers] organizational unit.
    3. Click [OK].
    4. In the Query String, paste this search string: (&(objectClass=pae-VM)(pae-displayname=VirtualMachineName))
      Where VirtualMachineName is the name of the virtual machine for which you are trying to locate the GUID. You may use * or ? as wildcards to match multiple desktops. (A command-line alternative to this query is sketched after these steps.)
    5. Click [OK] to create the query.
    6. Click the query in the left pane. The virtual machines that match the search are displayed in the right pane.
    7. Record the [GUID] in cn=<GUID>.
  5. Delete the [pae-VM object] from the ADAM database:
    1. Locate the [OU=SERVERS] container.
    2. Locate the corresponding virtual machine’s GUID (recorded above) in the list, which can be sorted in ascending or descending order, choose [Properties] and check the pae-DisplayName attribute to verify that this is the correct linked-clone virtual machine object.
    3. Delete the pae-VM object.
  6. Check if there are entries under OU=Desktops and OU=Applications in the ADAM database.
  7. Check for entries in both the [OU=Server Groups] and [OU=Applications] and remove both. Removing one entry and not the other from the ADAM database results in the java.lang.nullpointerexception error when attempting to view the pools or desktops inventory in View Manager.
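
If you prefer to search the ADAM database from the command line instead of ADSI Edit, here is a hedged sketch using ldifde, assuming the AD LDS administration tools are present on the Connection Server. It uses the same pae-VM filter as step 4; VirtualMachineName is a placeholder:

rem Sketch: export matching pae-VM objects from the View ADAM database
rem (VirtualMachineName is a placeholder; adjust before running)
ldifde -f out.ldf -s localhost:389 -d "OU=Servers,DC=vdi,DC=vmware,DC=int" ^
  -r "(&(objectClass=pae-VM)(pae-displayname=VirtualMachineName))" -l "pae-DisplayName"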

ViewADAM.png

This did the trick. After deleting all references to the old VDI virtual machines and desktop pools, I had a fresh and clean Horizon View Connection Server again.

Building a new ESXi whitebox

Whitebox1.jpeg

Unfortunately the whitebox ESXi server I built in June 2011 died on me while testing a Nvidia Quadro K5000 graphics card. So I needed a new ESXi server for my home lab.

I looked at some HP and Dell mini servers, but I decided to build a new VMware ESXi whitebox. The power supply, hard disks and SSD were still fine, so I only needed a new motherboard, processor and memory.

In the past I’ve used websites like ‘Ultimate VMware ESX Whitebox‘ and ‘VM-help.com‘ to find compatible parts, but because one no longer exists and the other is pretty outdated, I picked the components myself.

Because the Intel i5 processor does not support hyper-threading and comes with less cache, I chose the four-core, eight-thread, 3.4GHz Intel i7-4770 processor with an LGA1150 socket. It’s not the cheapest processor, but this one was available right away; the other Intel i7 processors were out of stock, with lead times of up to two weeks.

As the basis I needed an LGA1150 motherboard, and my selection criteria were very simple: 32GB of memory, onboard video and as many expansion slots as possible, with a mix of PCI and PCIe (x16, x4, x1). As an ASUS fan I chose the ASUS H87-PLUS. It has four DDR3 DIMM slots supporting up to 32GB of memory, onboard video via VGA or HDMI, one PCIe 3.0 x16 slot, one PCIe 2.0 x16 slot (x4 mode), two PCIe 2.0 x1 slots and three PCI slots.

I topped it off with 32GB of DDR3 1600MHz Corsair Vengeance LP memory in four 8GB modules (CML32GX3M4A1600C10).

whitebox2.jpeg

The total kit list is as follows:

  • Intel i7-4770 processor (8 x 3.4GHz with HT);
  • ASUS H87-PLUS motherboard;
  • Corsair 32GB DDR3-1600 memory (4 x 8GB);
  • Western Digital Caviar Black WD1002FAEX 1TB, SATA-600 hard disk;
  • 256 GB SanDisk Ultra Plus SSD;
  • Intel 82572EI Gigabit Ethernet adapter;
  • Broadcom NetXtreme BCM5705 Gigabit Ethernet adapter;
  • Nvidia Quadro K5000 graphics card;
  • HP midi tower with 750W power supply.

After bolting, screwing and plugging everything together, it was time to install ESXi 5.5. This finished with no issues, so within an hour my VMware ESXi whitebox was up and running and I could import my existing lab infrastructure.

But most important of all: is it any good? It’s great to build an ESXi whitebox, but when the performance of all those ‘desktop components’ sucks, it may be better to spend a bit more $$. In short, it’s great; performance is comparable to that of enterprise servers, with the exception of disk-related tasks. The disk performance is good, but it’s not great. You just can’t expect enterprise-grade disk I/O from a simple desktop disk, despite the fact that it’s a fast, 6Gbps SATA disk.

At the moment I’m running VMware ESXi 5.5 with:

  • vCenter Server Appliance 5.5;
  • vCenter Update Manager 5.5;
  • vCenter Mobile Access appliance;
  • VMware vCenter Operations Manager 5.7;
  • Horizon View 5.3 Connection Server;
  • Horizon View 5.3 Composer;
  • Windows Server 2012 R2 Domain Controller;
  • SQL Server;
  • Veeam Backup & Replication Server 7;
  • Windows 7 desktop.

CPU load is, as expected, very low: 4968MHz on average. The total memory load when running all those virtual machines is 23.8GB.

All things considered I’m very pleased with my ESXi whitebox: performance is good, 32GB of memory gives me enough room to deploy lab VMs, and the money I spent on it is well within my budget (€650,-).

Hints and tips for those of you who want to build their own ESXi whitebox:

  • Research, research, research.
    I still hear of people buying incompatible hardware despite the available online resources. Check whether your desired configuration has already been built. If not, Google is your friend;
  • Do not skimp on your hard disk.
    If you skimp on your hard disk you will be sorry very soon, so find a fast disk or even add an SSD if your budget allows it.
    If your budget is a problem, save on the processor instead. As you can see, the load on my processor is very low. Buy a cheaper processor and spend the difference on a good hard disk;
  • Go for a motherboard which can hold 32GB of memory or more.
    Even if you do not need 32GB right now, a shortage of memory is probably the first bottleneck you will encounter.

BLAST Windows Apps to your Chromebook

In September 2011 VMware gave us a sneak peek at Project AppBlast, and with VMware Horizon View we can use AppBlast technology to access desktops using an HTML5-compatible browser. But as of today we can experience the true power of AppBlast.

Today VMware and Google announced a new service to deliver Windows applications to Google Chromebooks.

Google and VMware today announced that they are working together to make it easier for enterprise Chromebook users to access Windows applications and desktops on their Google Chromebooks by using VMware’s Horizon desktop-as-a-service (DaaS). Because the service uses VMware’s HTML5 Blast protocol, it will now be easier for Chromebook users to connect to a traditional Windows experience.

It is possible to remotely access a Windows machine on ChromeOS by using Google’s own Remote Desktop application or other third-party applications, but they do not offer the kind of security features that enterprises look for. Another important shortcoming of Chromebooks preventing business use is the inability to run Windows or Windows-based apps. Microsoft Office is still, by far, the leader in office productivity applications, and of course there are many critical business applications that will only run on Windows systems. So, for Chromebooks to have any hope of becoming a true business device, they must somehow support running these applications that businesses need. Chromebooks were intended to work with web-enabled applications, and the day everything a business needs is web-enabled may make Chromebook-type devices more viable, but that day is still far away.

Users will be able to use the new service to access their Windows applications, data and desktops from a web-based application catalog on their Chromebooks. Soon, Chromebook users will also be able to install the service from the Chrome Web Store.

Manually deleting protected Horizon View replicas

computer-trash.jpg

Two weeks ago Sander wrote an article on ‘How to link a VMware View desktop to its replica‘.

Unfortunately, in my case my server died and I had to reinstall my Horizon View environment. Because the View desktops were provisioned on another server and on shared storage, the replicas became orphaned.

During normal operation the View Connection Server creates, manages and deletes linked clones during View Composer operations. If the Connection Server functions are interrupted, orphaned folders, protected folders and virtual machine objects from the linked clones can remain in the vCenter Server.

The problem is that the replicas cannot simply be removed, because they are protected.

To resolve this issue, run the UnprotectEntity operation to remove the protection from linked-clone objects. Run these commands from a command prompt on the vCenter Server, from the View Composer directory:
  • 32-bit servers: C:\Program Files\VMware\VMware View Composer
  • 64-bit servers: C:\Program Files (x86)\VMware\VMware View Composer

For View Composer 2.7 and earlier (View 5 and earlier), run the command:

sviconfig -operation=UnprotectEntity -VcUrl=https://<VirtualCenter address>/sdk -Username=<VirtualCenter account name> -Password=<VirtualCenter account password> -InventoryPath=/<Datacenter name>/vm/VMwareViewComposerReplicaFolder/<Replica Name> -Recursive=true

For View Composer 3.0 (View 5.1), run the command:

sviconfig -operation=UnprotectEntity -DsnName=<name of the DSN> -DbUsername=<Composer DSN User Name> -DbPassword=<Composer DSN Password> -VcUrl=https://<vCenter Server address>/sdk -VcUsername=<Domain\User of vCenter Server account name> -VcPassword=<vCenter Server account password> -InventoryPath=/<Datacenter name>/vm/VMwareViewComposerReplicaFolder/<Replica Name> -Recursive=true

Note: the sviconfig command parameters are case-sensitive.

Caution: In View Composer 2.0, if a replica folder is unprotected, it cannot be protected again. Use the UnprotectEntity command as a last-resort troubleshooting procedure and exercise caution when running this command.

Running this second command on my vSphere 5.5/Horizon View 5.2 environment successfully unprotected the four replicas. Next, I could delete the replicas from disk in vCenter.
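
For illustration, here is what that command looks like with the placeholders filled in. All names, credentials and the replica folder path below are hypothetical examples; substitute your own values:

sviconfig -operation=UnprotectEntity -DsnName=ComposerDSN ^
  -DbUsername=composer_user -DbPassword=DbSecret1 ^
  -VcUrl=https://vcenter.lab.local/sdk ^
  -VcUsername=LAB\administrator -VcPassword=VcSecret2 ^
  -InventoryPath=/Datacenter/vm/VMwareViewComposerReplicaFolder/replica-1a2b3c ^
  -Recursive=true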

Unprotect Replica.jpg
For more information, see the VMware KB article on manually deleting replica virtual machines.

VMware acquires AirWatch

Airwatch.jpg

VMware and AirWatch just announced that they have signed a definitive agreement under which VMware will acquire AirWatch. AirWatch is a leading provider of enterprise mobile management and security solutions.

VMware will acquire AirWatch for approximately $1.175B in cash and approximately $365M of installment payments and assumed unvested equity.

AirWatch is a leading provider of enterprise solutions for Mobile Device Management, Mobile Application Management and Mobile Content Management, with 1,600 employees and currently more than 10,000 customers globally. AirWatch products offer enterprises a platform to securely manage a rapidly growing set of mobile devices and an increasingly mobile workforce. The vision of AirWatch is to provide a secure virtual workspace that allows end-users to work at the speed of life.

This acquisition will expand VMware’s End-User Computing group, in which AirWatch’s offerings will form an expanded portfolio of mobile solutions that are complementary to VMware’s portfolio. VMware will probably integrate the AirWatch portfolio into its End User Computing (EUC) platform, VMware Horizon Suite, to further enable mobile users without compromising security.

Check out AirWatch and their wide range of solutions here.

Top 10 articles of 2013

Top10.jpg

For VMGuru, 2013 was a great year in which we wrote 104 blog posts and introduced our new, more responsive and bandwidth-friendly website layout.

Thanks to this, we served 2.2M pages to 426,338 visitors, using 346.5GB of bandwidth last year.

But which are the most popular blog posts from 2013? We created a 2013 Top 10!

  • No 1. – How to improve VMware View video performance.
    In 2013 we wrote two articles on how to improve video performance with VMware Horizon View. Probably due to the increased use of VDI and VMware Horizon View, this is the best-read article series of 2013. Part 1 explains how to influence video performance by manipulating VMware View Group Policy Objects (GPOs). In part 2, Edwin added a section on improving Internet Explorer performance in VMware Horizon View environments.

    How to improve VMware View video performance.
    How to improve VMware View video performance – Part 2

  • No 2. – Bye bye Citrix XenServer.
    The second-best blog post is one on Citrix XenServer. In October 2013, when I was updating our Enterprise Hypervisor Comparison, I noticed that Citrix had removed a ton of features in the new Citrix XenServer 6.2. This looked like the end of Citrix XenServer; of course, judging by the comments, Citrix enthusiasts don’t agree, but check out the list of withdrawn features and do your own math.

    Bye bye Citrix XenServer
  • No 3. – How does VMware vSphere 5.5 compare to the competition?
    With the release of VMware vSphere 5.5 I updated our ever-popular Enterprise Hypervisor Comparison to include the new features and enhancements. VMware again raised the bar on enterprise-grade hypervisors, but how does it compare to Microsoft Windows Server 2012 Hyper-V, Citrix XenServer 6.2 or Red Hat RHEV 3.2? Check out the blog post from August 2013.

    How does VMware vSphere 5.5 compare to the competition?

  • No 4. – vSphere 5 memory management explained.
    During my everyday work I was amazed that VMware memory management is still a topic a lot of VMware administrators don’t understand: administrators of big VMware environments who don’t have a clue what Transparent Page Sharing (TPS), memory compression, host swapping or ballooning is, what it does or when it is used. A lot of VMware administrators also have trouble explaining the virtual machine memory allocation graphs. So I wrote a blog post in which I explain the different memory management techniques in VMware vSphere 5, which ended up number 4 on the 2013 top 10 list.

    vSphere 5 memory management explained (part 1)
    vSphere 5 memory management explained (part 2)
  • No 5. – How to: Shutdown ESXi host in case of a power failure.
    This number 5 blog post was also inspired by my everyday work as a Solution Architect. I got this question from a colleague: “A customer of ours is running a virtual infrastructure based on VMware vSphere, using multiple techniques to create a highly available environment: clustering, VMware HA and FT. But when the power fails, all this doesn’t help. The customer has a UPS in place, just enough to start a standby generator or wait until the power returns. But what if this takes too long? Can we automatically power down the entire virtual infrastructure?” I knew it could be done, but it annoyed me that I couldn’t give him a standard working solution, so in the evening I started Googling and testing in my lab and wrote this blog post.

    How to: Shutdown ESXi host in case of a power failure.

  • No 6. – Look at the Horizon! VMware’s Horizon Suite is finally here.
    This is a blog post by Alex, written at the exact moment VMware released their Horizon Suite. It explains what the Horizon Suite consists of, which versions there are, how it all works together and which improvements have been added.

    Look at the Horizon! VMware’s Horizon Suite is finally here.

  • No 7. – HP finally killed the EVA.
    This has proven to be a controversial article, as people totally missed the point I was trying to make, so I expected this to be much higher on the 2013 top 10 list. The point I was trying to make was that everyone suspected HP would replace the EVA with 3PAR, but nobody within HP ever wanted to confirm that. I was disappointed that HP did not inform its partners and was not open about the EVA/3PAR replacement. So I celebrated the moment when HP was finally clear on the EOL of the EVA. Besides that, I wanted to point out that HP was once a leader in the storage market, but due to their drifting storage portfolio, as with the EVA/3PAR, many customers left. They could have profited from their huge install base, but they dropped the ball. HP is not the storage solution provider it once was, and now they’re also taking a beating in the server market, mainly from Cisco UCS. They still hold a huge market share in the server market, but I’m wondering how HP will do business in five years’ time.
    Check out the blog post and comments below.

    HP finally killed the EVA.

  • No 9. – How to license Windows 8 in a VMware Horizon View deployment.
    Licensing has always been one of Edwin’s specialties. He already wrote several blog posts on licensing Oracle, Microsoft SQL Server 2012 and Windows 7, and with the support for Windows 8 in VMware Horizon View he added a blog post explaining the do’s and don’ts of licensing Windows 8 as a VDI operating system in VMware Horizon View.

    Wondering which version of Windows 8 to use? Get VDA through SA or VDA subscription? How about roaming use rights? Windows 8 downgrade rights? Check out Edwin’s blog post on how to license Windows 8 in a VMware Horizon View environment.

    How to license Windows 8 in a VMware Horizon View deployment.

  • No 10. – What’s new in VMware Horizon View 5.3.
    With the release of VMware Horizon View 5.3 at VMworld Europe 2013 in Barcelona, we were one of the first to present a detailed list of new features and improvements. As VDI was very popular in 2013, this number 10 on the list does not surprise me. Check out the impressive list of new features and enhancements below.

    What’s new in VMware Horizon View 5.3.

Happy new year!

We finished 2013 on a high with our new website layout and an increasing number of visitors.

In 2013 we served 2.2M pages to 426,338 visitors, using 346.5GB of bandwidth.

In comparison:
In 2012 we served 1.4M pages to 371,669 visitors, using 632.4GB of bandwidth.
In 2011 we served 1.3M pages to 391,567 visitors, using 556.3GB of bandwidth.

This huge increase confirms to us that we’re on the right path. Thank you very much for your support in 2013!

So from all of us at VMGuru,

We wish you a very happy and healthy new year, and we hope 2014 brings you all the good things you hope for, virtual or non-virtual!

Cisco released UCS Manager 2.2

Cisco UCS.jpeg

Last week Cisco released an early Christmas present: Cisco UCS Manager 2.2 (code name: El Capitan), which includes a ton of new features.

For those of you who don’t know Cisco UCS Manager (UCSM), it provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System (UCS) across multiple chassis, rack servers, and thousands of virtual machines. Cisco UCS Manager manages Cisco UCS as a single entity through an intuitive GUI, a command-line interface (CLI), or an XML API for comprehensive access to all Cisco UCS Manager functions.
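
As an aside, that XML API is plain HTTP under the hood. Here is a minimal sketch of a login request, with the hostname and credentials as placeholders:

# Minimal sketch of a UCS Manager XML API login request
# (hostname and credentials are placeholders)
curl -k -X POST -d '<aaaLogin inName="admin" inPassword="secret"/>' \
  https://ucsm.lab.local/nuova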

This new release includes a ton of new features but the ones I really like are:

  • Direct Connect C-Series to FI without FEX
    • Support direct connections of C-Series rack servers to the Fabric Interconnect without having to invest in a 2232PP FEX
    • Supported for the following rack servers connected with Single Wire Management and Cisco VIC 1225 adapter: C260 M2, C460 M2, C22 M3, C24 M3, C220 M3, C240 M3, C420 M3;
  • Direct KVM Access
    • Direct KVM access launches the KVM via a URL: http://<IP_address of CIMC> or https://<IP_address of CIMC>;
    • System admins allow server admins to access the KVM console without requiring the UCSM IP address;
    • The CIMC IP URLs are hosted on the Fabric Interconnect;
    • Supported over out-of-band only;
  • Enhanced Local Storage Monitoring
    • Enhance monitoring capabilities for local storage, providing more granular status of RAID controllers and physical/logical drive configurations and settings
    • New Out-of-Band communication channel developed between CIMC and the RAID Controller allows for near real-time monitoring of local storage without the need for host-based utilities or additional server reboot/re-acknowledgement
    • Support monitoring the progress and state of long-running operations (e.g. RAID Rebuild, Consistency Check)
  • FlexFlash (Local SD card) Support
    • UCSM provides inventory and monitoring of the FlexFlash controller and SD cards
    • Local Disk Policy contains settings to enable ‘FlexFlash RAID Reporting’
    • Number of FlexFlash SD cards is added as a qualifier for server pools
  • Flash Adapters & HDD Firmware Management
    • UCSM Firmware bundles now contain Flash Adapter firmware and Local Disks firmware.
    • UCSM Host Firmware Policies can now designate desired firmware versions for Flash Adapters and Local Disks

These features really help in minimizing VDI solution stacks, because you no longer need a separate FEX, like the Nexus 2232, to connect rack servers to the Fabric Interconnects, and you can manage all UCS servers, rack or blade, with one management platform. Besides that, you can now monitor the local storage which you regularly need with high-end VDI solutions. Direct KVM access is ideal for shared compute environments, in which you can now offer customers direct KVM access without giving them direct access to your entire management network.

Besides this, Cisco UCS Manager 2.2(1) includes the following enhancements:

Fabric Enhancements:

  • Fabric scaling
    • El Capitan supports new underlying NxOS switch code, which enables UCS to increase the scale numbers on the 6200 Fabric Interconnects, supporting up to 2000 VLANs, 2750 VIFs, 4000 IGMP Groups, 240 vHBAs, and 240 Network Adapter Endpoints.
  • IPv6 Management Support
    • Allow management of UCS Manager and UCS servers using IPv6 addresses
    • Allow access to external services (e.g. NTP, DNS) over IPv6
    • External facing client applications (e.g. scp, ftp, tftp) and external facing services (e.g. sshd, httpd, snmpd) are now accessible over IPv6 addresses
  • Uni-Directional Link Detection (UDLD) Support
    • Uni-Directional Link Detection (UDLD) is Cisco’s data link layer protocol that detects and optionally disables broken bidirectional links
    • Supported in FI End-Host and Switching mode
    • A global policy and per-port policy are added to configure UDLD parameters including: mode, msg interval, admin state, recovery action
  • User Space NIC (usNIC) for Low Latency
    • UCS will support High Performance Computing (HPC) applications through a common low-latency technology based on the usNIC capability of the Cisco VICs
    • usNIC allows latency sensitive MPI applications running on bare-metal host OSes to bypass the kernel
    • Supported for Sereno-based adapters only (VIC 1240, VIC 1280, VIC 1225)
  • Support for Virtual Machine Queue (VMQ)
    • Enables support for MS Windows VMQs on the Cisco VIC adapter
    • Allows a network adapter to dedicate a transmit and receive queue pair to a Hyper-V VM NIC
    • Improves network throughput by distributing processing of network traffic for multiple VMs among multiple CPUs
    • Reduces CPU utilization by offloading receive packet filtering to the network adapter

 

Operational Enhancements:

  • Two-factor Authentication for UCS Manager Logins
    • Support for strengthened UCSM authentication, requiring a generated token along with username/password to authenticate UCSM or KVM logins
    • UCSM uses a single authentication request, which combines the token and password in the password field of the authentication request
  • VM-FEX for Hyper-V Management with Microsoft SCVMM
    • UCSM will support full integration with SCVMM for VM-FEX configuration
    • A Cisco provider plugin is installed in SCVMM, fetches all network definitions from UCSM and periodically polls for configuration updates
    • Supported for SCVMM 2012 SP1, Windows Hyper-V 2012 SP1 & Windows Server 2012
  • CIMC In-band Management
    • CIMC management traffic takes the same path as data traffic via the FI uplink ports
    • Separate CIMC management traffic from UCSM management traffic increases bandwidth for FI management port
    • Support In-band CIMC access over IPv4/IPv6 (IPv6 access not supported Out-of-band due to NAT limitations)
  • Server Firmware Auto Sync
    • Server Firmware gets automatically synchronized and updated to version configured in ‘Default Host Firmware Package’
    • Global policy allows user to configure options:
      • Auto Acknowledge (default)
      • User Acknowledge
      • No Action (feature turned off)
    • Guarantee server firmware consistency and compatibility when adding a new or RMA’ed server to a UCS domain

 

Compute Enhancements:

  • Secure Boot
    • Establish a chain of trust on the secure boot enabled platform to protect it from executing unauthorized BIOS images
    • Secure Boot utilizes the UEFI BIOS to authenticate UEFI images before executing them
    • Standard implementation based on the Trusted Computing Group (TCG) UEFI 2.3.1 specification
  • Precision Boot Order Control
    • Support creating UCSM Boot Policies with multiple instances of Boot Devices (FlexFlash, Local LUN, USB, Local/Remote vMedia, LAN, SAN, and iSCSI)
    • Provides precision and full control over the actual boot order for all devices in the system:
      • Multiple Local Boot Devices (RAID LUN/SD Card/Internal USB/External USB) and SAN
      • Local & Remote vMedia devices
      • PXE/SAN boot in multipath environments
  • Trusted Platform Module (TPM) Inventory
    • Allow access to the inventory and state of the TPM module from UCSM (without having to access the BIOS via KVM)
  • DIMM Blacklisting and Correctable Error Reporting
    • Improved accuracy at identifying “Degraded” DIMMs
    • DIMM Blacklisting will forcefully map-out a DIMM that hits an uncorrectable error during host CPU execution
    • Opt-in feature enabled through an optional Global Policy (Disabled by default)

 

The El Capitan features enable several UCS Solutions including:

  • VM-FEX with SCVMM for MS Private Cloud
  • Direct Connect C-Series for Smaller Big Data Clusters
  • Direct Connect C-Series for Smaller VDI Deployments
  • Direct Connect C-Series for FlexPod Reference Architecture with ESX 5.5
  • Enhanced Local Storage Monitoring for Improved System Management Integration and SMB VDI Solutions
  • PCIe Flash Cards Support for Non-Persistent VDI
  • usNIC-based HPC Solutions on Cisco UCS B-Series
  • Ubuntu Support for OpenStack

 

Links to download this release are as follows:

  • Infrastructure software bundle: Click here to download
  • B-series and C-series software bundles for this release are available at the above link, under “Related Software”.
  • UCS Platform Emulator 2.2(1b):  Click here to download
    • NOTE: From UCS PE 2.2(1bPE1) onwards, UCS PE supports uploading the B-Series and C-Series server firmware bundles. Because of the large file sizes of the firmware bundles, UCS PE only supports uploading stripped-down versions (attached to the original Cisco document), which include only the firmware metadata and not the actual firmware binaries. The stripped-down versions of the B-Series and C-Series server firmware bundles, containing metadata only, are reduced to approximately 50 kB in size.

VMware Fling – Real-time audio/video test

VMware Labs has released a great new fling, an application with which you can verify and test the real-time audio/video performance. The application includes a player that displays the ‘virtual webcam’ feed, and also loops back the audio if required.

This allows for testing without a third-party app (which often requires user accounts, such as Skype, WebEx, etc.). The application can also perform load testing by forcing the video and audio streams to run continuously, without a third-party app dropping the call after a period of time.

Features:

  • Displays webcam images at 1:1 resolution
  • Automatically starts streaming images when launched (and audio will be looped back if selected)
  • Ability to loop the audio-in back to audio-out
  • No need to create user accounts to see RTAV
  • Supports the VMware Virtual Webcam and Physical Webcams

Here you can download the real-time audio/video test application.

VMware Horizon View 5.3 is available

At VMworld 2013 in Barcelona, VMware announced the new version of their EUC product: Horizon View 5.3.

Now it is finally available for download!

VMware Horizon View 5.3 includes a significant number of new or improved features.

  • Direct Pass-through Graphics
    Virtual Dedicated Graphics Acceleration (vDGA) is a graphics acceleration capability offered by VMware with Nvidia GPUs, and it is now fully supported by Horizon View 5.3. This enables customers to deliver high-end, workstation-grade 3D graphics for use cases where a discrete GPU is needed. vDGA graphics adapters are installed in the underlying vSphere host and are then assigned to virtual desktops. Assigning a discrete Nvidia GPU to the virtual machine dedicates the entire GPU to that desktop and includes support for CUDA and OpenGL.
  • Windows 8.1 Support
    My experience with Windows 8.1 is not that positive, but VMware has already included full support in Horizon View 5.3. This aligns with the Windows 8.1 support in vSphere 5.5. Important: the Local Mode and View Persona Management features are not supported with Windows 8.1 desktops yet.
  • Multi Media Redirection (MMR) for H264 encoded media files to Windows 7 clients
    VMware added support for multimedia redirection of H.264-encoded Windows Media files to Windows 7 client endpoints. H.264/MPEG-4 is currently one of the most commonly used formats for the recording, compression and distribution of high-definition video. With this feature, Windows 7 endpoints receive the original compressed multimedia stream from the server and decode it locally for display. This can decrease bandwidth usage, since the data over the wire is compressed video instead of uncompressed screen information, and it also decreases server resource usage, because the server no longer spends CPU resources decoding the video content.
  • HTML5 access improvements
    With Horizon View 5.2 it was possible to use a VDI desktop without installing client software, delivered through an HTML5-capable web browser. With Horizon View 5.3 VMware has further improved this feature, so users can now enjoy sound, clipboard access and improved graphics performance.
  • Real-time audio-video (webcam/audio redirection) for Linux clients
    With Horizon View 5.3 VMware introduces real-time audio and video support for Linux clients (support for Windows clients was already present in 5.2). Real-time audio and video does not forward audio and webcam devices over USB. Instead, the devices are controlled by the local client, and the audio and video streams are captured from the local devices, encoded, delivered to the guest virtual machine, and decoded there.
    Audio delivery is performed through the standard View agent audio-out functionality, which provides better audio quality than USB redirection.
  • iOS 7 look & feel for iPhone/iPad client
    The iOS client now matches the look and feel of iOS 7, released at the beginning of October.
  • USB 3.0 port support
    Horizon View 5.3 offers USB port redirection support for USB 3.0 client ports.
  • Support for Windows Server 2008 VM based desktops
    Strange but true: Windows Server 2008 R2 is now supported as a desktop operating system. Why? Microsoft does not offer SPLA licensing for Windows desktop operating systems, which service providers would need to create Desktop-as-a-Service (DaaS) offerings using VMware Horizon View.
    Microsoft does offer SPLA licensing for Windows Server 2008, so this allows service providers to be fully compliant with the Microsoft licensing terms.
    Important to know: some features are currently not supported with Windows Server 2008 R2; check the release notes.
  • Support for VMware Horizon Mirage
    This is the first step in creating a single desktop image delivery system. Administrators can now utilize VMware Horizon Mirage 4.3 to manage Horizon View virtual desktops. Mirage keeps a centralized and de-duplicated copy of virtual desktops, including user’s applications and data, and is able to re-instantiate them should you have a host or site failure. Mirage can also distribute individual and departmental application layers. With Horizon Mirage IT is effectively able to eliminate the need for complex namespace or application virtualization solutions.
  • VCAI production ready
    View Composer Array Integration (VCAI) is now a fully supported feature. VCAI allows administrators to take advantage of native storage snapshot features: it integrates with NAS storage partners’ native cloning capabilities using the vSphere vStorage APIs for Array Integration (VAAI). VCAI speeds up the provisioning of virtual desktops while offloading CPU consumption and network bandwidth.
  • Linked-Clone Desktop Pool Storage Overcommit enhancements
    The linked-clone desktop pool storage overcommit feature includes a new storage overcommit level called Unbounded. When selected, View Manager does not limit the number of linked-clone desktops that it creates based on the physical capacity of the datastore.
    Important: the Unbounded policy should only be selected if you are certain that the datastore in use has enough storage capacity to accommodate future growth.
  • Supportability improvements for View Persona Management
    With Horizon View 5.3, the View Persona Management feature includes several supportability improvements, including additional log messages, profile size and file/folder count tracking, and a new group policy setting called ‘Add the Administrators group to redirected folders’. View Manager uses the file and folder counts to suggest folders for folder redirection.
  • Oracle 11.2.0.3 database support
    In addition to the supported databases listed in the installation documentation, VMware Horizon View 5.3 supports Oracle 11.2.0.3 databases.
  • vSAN for VMware Horizon View
    As of version 5.3, VMware includes vSAN for Horizon View desktops in the Horizon Suite. vSAN reduces storage cost for VDI deployments by using inexpensive server disks for shared storage. It can also improve performance, because vSAN uses SSD caching for reads and writes and provides intelligent data placement within a vSphere cluster. vSAN is a scale-out converged platform and a hybrid storage solution combining SSDs and traditional disks. Because it is fully integrated with the vSphere kernel, it has very low latency.
    Because VSAN is in beta release, this feature is being released as a Tech Preview, which means that it is available for you to try, but it is not recommended for production use and no technical support is provided.

 

You can download VMware Horizon View 5.3 here!

How to: Install VMware NSX

Hany Michael from Hypervizor.com has made a series of videos showing the installation ease of VMware NSX. Unfortunately NSX is not GA yet, but in the videos you can see how the installation goes. Check these out:

Deploying the NSX vAppliance

Deploying the NSX Controllers

Preparing ESXi hosts

Configuring a Logical vSwitch

VMware NSX Distributed Services

This article is number two of a series about the upcoming network virtualization spree, specifically the one coming from VMware. Check out the first article in this series, ‘Introduction to VMware NSX’.

Traditional network services have evolved over the last years. Introducing more advanced firewalling, loadbalancing and remote access services. Typically, datacenter networks architecture these days look somewhat look this:

VMware-Traditional-Services-300x300.png

The routers can be virtualized inside a physical box, using either VRFs or vendor-proprietary virtual routers, such as Cisco VDCs. However, the external and internal firewalls are usually separate monolithic hardware firewalls, which puts a large dent in the network budget.

As we move to a virtual-everything world, more and more desktops and applications are hosted inside the datacenter. The east-west data traffic inside the datacenter keeps growing and is causing scalability issues on the central network services devices. Firewalls and load balancers need (in-place) upgrades to keep up and are bleeding the network budget.

With VMware NSX, the physical load balancers and internal firewalls will turn virtual. This will increase the scalability of your internal services enormously; every VM will have its own firewall instance (embedded in the ESXi kernel) and you’ll have a load balancer service per application. Here’s what the next step in virtualization will look like:

VMware-Distributed-Services.png
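
To get a feel for what a per-VM firewall instance means, here is a purely conceptual sketch: rules are evaluated at each VM’s virtual NIC instead of at a central choke point, so east-west traffic never has to hairpin through a monolithic appliance. The rule format and names below are invented for illustration; this is not the NSX API.

    # Conceptual sketch of a distributed firewall: every VM's vNIC evaluates
    # its own rule set in the hypervisor kernel, so east-west traffic never
    # hairpins through a central appliance. The rule format is invented for
    # illustration -- it is not the NSX API.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        src: str       # source, e.g. a tier name or "any" (string match here)
        dst_port: int  # destination TCP port
        action: str    # "allow" or "deny"

    # Each VM carries its own rules, enforced at its vNIC.
    vm_rules = {
        "web-01": [Rule("any", 443, "allow"), Rule("any", 22, "deny")],
        "db-01":  [Rule("web-tier", 3306, "allow")],
    }

    def filter_packet(vm, src, dst_port):
        """Return the action of the first matching rule; default deny."""
        for rule in vm_rules.get(vm, []):
            if rule.src in ("any", src) and rule.dst_port == dst_port:
                return rule.action
        return "deny"

    print(filter_packet("web-01", "internet", 443))  # allow
    print(filter_packet("db-01", "web-tier", 3306))  # allow
    print(filter_packet("db-01", "internet", 3306))  # deny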

The possibilities are limitless. There will be a world where you can build a datacenter network with a single pair of proper core switches and standard access switches, and the rest will be purely x86 servers. Here’s how I think the datacenter network will look in a few years, when virtualization has really kicked in:

VMware-Virtual-Networking-Endstage.png

Check out these great vendors making some awesome announcements about NSX integration:


paloalto-150x105.png juniper.gif f5-logo.png
catbird_logo.png Fortinet_Logo_PMS485-300x34.png logo-mcafee.png


There’s still a lot of ground to cover on NSX, and you will find a lot of information here, as I love this technology and the possibilities it offers when designing datacenter architectures.

One thing that has put me off a little is the fact that VMware is keeping NSX close to its chest. Evaluations are currently not on the table, integration partners are excluded from implementation tracks, and there is no way to get hold of NSX except through VMware’s Professional Services. Maybe it’s the difficulty of implementing NSX, maybe it’s VMware not being ready with NSX but feeling compelled to put it out at an early stage; who knows. All I know is that it’s very disappointing for those of us who want to turn NSX inside and out.

They say partners will start getting in the loop around Q3 2014, but I wish they’d move that timetable up a few quarters.



This article was written by Martijn Smit, datacenter engineer at Imtech ICT, and was republished from his blog with his permission.

Also check out Martijn’s website Lostdomain.org.


Introduction to VMware NSX

This article is number one of a series about the upcoming network virtualization spree, specifically the one coming from VMware.

I spent 14 to 17 October at VMworld 2013 in Barcelona, basically getting my mind blown by the futuristic possibilities of network flexibility. Things are changing for the network: the entire stack is flattening, network services are being distributed throughout the virtual network (instead of sitting in monolithic central hardware), network costs are dropping, and the network is becoming more flexible and simpler to manage.

In this post, I will go over the basics of the components that form the VMware NSX virtual network:

  • NSX Manager (management-plane);
  • NSX Controller (control-plane);
  • NSX Hypervisor Switches (data-plane);
  • NSX Gateways;
  • Distributed Network Services.

VMware NSX.PNG

NSX Manager

Configuring the NSX virtual network mostly goes through APIs. The idea is that cloud automation platforms (e.g. vCloud Automation Center) or self-developed platforms will leverage NSX to automate the deployment of virtual networks.

The NSX Manager provides a web-based GUI for user-friendly management of the NSX virtual network. This GUI can be used alongside your cloud automation platform for manual configuration and troubleshooting. You can view the status of the entire virtual network and take snapshots of it for backup, restore and archival purposes.

Everything the NSX Manager does to manage the virtual network goes through API calls towards the NSX Controllers.
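
To make that API-driven model concrete, here is a minimal sketch of querying the NSX Manager’s REST API from Python with the requests library. The hostname, credentials and resource path are placeholders of my own; check the NSX API guide for your version for the real endpoints.

    # Minimal sketch of talking to the NSX Manager REST API. The hostname,
    # credentials and endpoint path are placeholders -- consult the API guide
    # for your NSX version for the real resource paths.

    import requests

    NSX_MANAGER = "https://nsx-manager.example.com"

    session = requests.Session()
    session.auth = ("admin", "changeme")  # basic auth against the manager
    session.verify = False                # lab with a self-signed cert only

    # Ask the manager for its logical switches (illustrative path).
    resp = session.get(f"{NSX_MANAGER}/api/logical-switches")
    resp.raise_for_status()
    print(resp.text)  # XML or JSON, depending on version and headers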

NSX Controller

The NSX Controller is a highly scalable control layer that takes on the functionality of the network control-plane. It is responsible for programming the Hypervisor vSwitches and Gateways with their configurations and real-time forwarding state. Whenever there’s a change in the virtual network (a VM boots, a portgroup changes), the controllers program the virtual network to reflect these changes.

The NSX Controller cluster typically consists of three NSX Controllers, but when those three are not enough (and can’t keep up with the workload), scaling up is as easy as deploying a new NSX Controller virtual appliance and adding it to the NSX Cluster.

The Hypervisor vSwitches are divided between the NSX Controllers. Responsibility for a vSwitch is assigned through an election process, in which one NSX Controller wins the master role and another wins the slave role. The master can call upon the other NSX Controllers in the cluster for assistance with the workload, while the slave monitors the master and takes over if the master fails.
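
As a back-of-the-envelope illustration of that division of work (this shows the idea, not NSX’s actual election protocol), here is a sketch that spreads vSwitches over a controller cluster and gives each one a master and a slave:

    # Conceptual sketch of dividing vSwitches over an NSX Controller cluster:
    # each vSwitch gets one master controller and one slave that takes over
    # on failure. An illustration of the idea, not NSX's real election code.

    from itertools import cycle

    controllers = ["ctrl-01", "ctrl-02", "ctrl-03"]
    vswitches = [f"host-{n:02d}-vswitch" for n in range(1, 7)]

    assignments = {}
    rotation = cycle(controllers)
    for vswitch in vswitches:
        master = next(rotation)
        # Any other controller can act as the standby (slave) for this vSwitch.
        slave = next(c for c in controllers if c != master)
        assignments[vswitch] = {"master": master, "slave": slave}

    for vswitch, roles in assignments.items():
        print(f"{vswitch}: master={roles['master']}, slave={roles['slave']}")

    # Scaling out is simply adding a controller appliance to the list and
    # letting the vSwitches be re-divided:
    controllers.append("ctrl-04")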

Hypervisor vSwitches

Virtualization has had vSwitches from the beginning. How else would virtual machines connect to the network (in a scalable fashion) to provide services?

Each hypervisor has a built-in, high-performance, programmable virtual switch. In the NSX virtual network, the NSX Controllers program these vSwitches with the current state of the network (configuration and forwarding state). If an NSX network is distributed (VMs in the same network spread over different hosts), the controllers program the vSwitches to set up IP encapsulation tunnels (STT or VXLAN) between these hosts to extend the virtual network.
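
To show what that encapsulation adds on the wire, here is a small sketch that builds the standard 8-byte VXLAN header; in a real deployment the vSwitch prepends this header to the original Ethernet frame and wraps the result in an outer UDP/IP packet between the hosts’ tunnel endpoints.

    # Sketch of the 8-byte VXLAN header that a vSwitch prepends to the
    # original Ethernet frame before wrapping everything in an outer UDP/IP
    # packet towards the destination host's tunnel endpoint (VTEP).

    import struct

    def vxlan_header(vni: int) -> bytes:
        """Build a VXLAN header for a 24-bit VXLAN Network Identifier."""
        assert 0 <= vni < 2**24, "the VNI is a 24-bit field"
        flags_reserved = 0x08 << 24  # I-flag set: a valid VNI is present
        vni_reserved = vni << 8      # the VNI sits in the upper 24 bits
        return struct.pack("!II", flags_reserved, vni_reserved)

    hdr = vxlan_header(5001)
    print(hdr.hex())  # '0800000000138900'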

NSX Gateways / Edge devices

An NSX Gateway is basically the border, or edge, of the virtual network: it is where the virtual network communicates with the physical network we see today. An NSX Gateway can be a virtual appliance linking traffic to VLANs, but it can also be a physical device from certain vendors.

Here’s a small list of the top vendors:

  • Arista (7150S);
  • Brocade (VCS Fabric: VDX 6740 and 6740T);
  • Juniper (EX9200 & MX-series);
  • Dell (S6000-series);
  • HP (announced something, no details).

To my disappointment (and that of many others), Cisco is absent from this list. They have a ‘different view’ and are going for their own thing (Cisco ONE), which is discussed here. I hope they come to their senses and allow certain types of their network switches to be part of an NSX network. (Perhaps the Nexus 5Ks!?)

Distributed Network Services

The best part about the distributed network services functionality is the services registry, which makes plugins possible. So far, I’ve heard great stories from Palo Alto and TrendMicro. Those of you not familiar with these products (Palo Alto, for one, mostly makes insanely great physical firewalls) should gather some info. More on distributed network services at a later date!

Introductory video

Check out this awesome introductory video on NSX.


Next article in this series, VMware NSX Distributed Services.



This article was written by Martijn Smit, datacenter engineer at Imtech ICT, and was republished from his blog with his permission.

Also check out Martijn’s website Lostdomain.org.