How to: Install VMware NSX

Hany Michael from Hypervizor.com has made a series of videos showing how straightforward the installation of VMware NSX is. Unfortunately NSX is not GA yet, but in the videos you can see how the installation goes. Check these out:

Deploying the NSX vAppliance

Deploying the NSX Controllers

Preparing ESXi hosts

Configuring a Logical vSwitch

Review: Synology DiskStation DS1513+ with VMware – Part 2

In part 1 we finished the hardware installation of the Synology and the setup of the DSM software. In this part we will hook the Synology up to the VMware vSphere 5.1 lab environment. The lab consists of a laptop with 32GB of RAM and two quad-core Intel E5 CPUs, running VMware Workstation 8, which in turn runs two virtual VMware vSphere 5.1 ESXi servers and one vCenter Server.

Several other supporting VMs are also present, like Windows Server 2012, Windows Server 2008 R2 and some clients running Windows XP, 7 and 8.

The Synology DS1513+ can act as an iSCSI target and/or serve NFS shares; both can be connected and added to your virtual environment. The Synology is connected through two 1Gbps links to an Apple AirPort Extreme with 802.11ac support. We connect the laptop through wireless, where I use a MacBook Air 2013 for support. For the link aggregation test we will use a Cisco WS-3750G switch, because as far as I could discover the AirPort Extreme doesn't support link aggregation. When connecting a Synology to a VMware environment, always use at least 1Gbps network speed. On the Synology DS1513+ you can combine the four LAN ports into a 4x1Gbps channel. Let's set up the Synology for iSCSI now.
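On the switch side, bundling the four Synology ports into one channel is a short piece of configuration. A minimal Cisco IOS sketch, assuming the Synology bond is set to dynamic link aggregation (802.3ad/LACP); the port range and VLAN below are hypothetical:

    ! member ports toward the Synology; channel-group mode active = LACP
    interface range GigabitEthernet1/0/1 - 4
     switchport mode access
     switchport access vlan 10
     channel-group 1 mode active
    !
    interface Port-channel1
     switchport mode access
     switchport access vlan 10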

iSCSI Setup on the Synology DS1513+

With the Synology you have two iSCSI options: block-level or file-level. Block-level operates closest to the RAID level and therefore offers greater performance than file-level iSCSI. A block-level iSCSI target will have a capacity equivalent to the size of the RAID volume.

For greater flexibility use file-level iSCSI, where the RAID volume can be shared between regular file sharing duties and virtual storage space. Since we are going to use the Synology with VMware ESXi, which supports targets greater than 2TB, we will choose block-level iSCSI.

Start the Storage Manager in the DSM web interface on the Synology, choose iSCSI Target and click the Create button.


Insert a name and note the iSCSI Qualified Name (IQN); you will need it later when setting up the ESXi server. Create a new iSCSI LUN and choose a block-level LUN if, like me, you are going to use it for your virtual environment. An IQN is structured like: iqn.yyyy-mm.domain:device.ID
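For example, a Synology target IQN typically starts with the iqn.2000-01.com.synology prefix; the device name and suffix below are made up for illustration:

    iqn.2000-01.com.synology:DS1513.Target-1.fedcba987654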


You will need to create a Disk Group, or select one if you already created it. Select all disks you want in the Disk Group and choose a redundant RAID level, so you have at least one-disk redundancy against disk failure. Allocate volume capacity to the Disk Group; remember you can place multiple LUNs on it, so select all the capacity you have in the Disk Group. A summary will be shown for the new iSCSI target we are creating.


The iSCSI target will be created and will appear with status Ready when correctly configured.


On the iSCSI Target tab, click the Edit button to allow multiple sessions from one or more iSCSI initiators to the Synology. This is safe because VMFS is a cluster-aware file system which handles its own locking.

Through the iSCSI LUN tab we created a 1TB LUN on the Disk Group, so that ESXi can create a VMFS5 datastore on it.
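We will create that datastore from the vSphere Client in part 3, but as a rough PowerCLI sketch it could look like this (the host name and the SYNOLOGY vendor filter are assumptions):

    # Assumes PowerCLI and an open Connect-VIServer session; names are hypothetical
    $vmhost = Get-VMHost -Name esx01.lab.local
    # Find the Synology-backed LUN by its reported vendor string
    $lun = Get-ScsiLun -VmHost $vmhost -LunType disk | Where-Object { $_.Vendor -match 'SYNOLOGY' }
    # Create a VMFS5 datastore on the LUN's canonical (naa.*) name
    New-Datastore -VMHost $vmhost -Name 'SYN-iSCSI-01' -Path $lun.CanonicalName -Vmfs -FileSystemVersion 5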

Now the Synology part is finished; let's continue with connecting the ESXi servers as iSCSI initiators to the iSCSI target we just created on the Synology.

iSCSI Setup on VMware vSphere 5.1


The lab environment consists of two ESXi 5.1 servers and a vCenter Server for management and of course all availability options like HA, DRS, vMotion and FT. To use iSCSI in VMware you will need to create a VMkernel port which handles the iSCSI traffic.

Select the ESXi server you want to configure in vCenter Server, go to the Configuration tab on the ESXi server and select Networking. On the right you will see a button called Add Networking…; press it. A wizard will open and ask whether you want to make a network connection for virtual machines or add a VMkernel port, which handles traffic for e.g. iSCSI.


We will create a new vSphere standard vSwitch and add the physical vmnic1 to it to carry the iSCSI traffic. Give the port group on the vSwitch a network label you will recognize easily. You are all set; click Finish to create the vSwitch with a VMkernel port that handles iSCSI traffic over vmnic1. vSwitch1 is now created.
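The same steps can be scripted with PowerCLI. A minimal sketch, where the host name, port group label and VMkernel IP are assumptions:

    # Assumes PowerCLI and an open Connect-VIServer session
    $vmhost = Get-VMHost -Name esx01.lab.local
    # New standard vSwitch backed by physical uplink vmnic1
    $vsw = New-VirtualSwitch -VMHost $vmhost -Name vSwitch1 -Nic vmnic1
    # VMkernel port (in its own port group) that will carry the iSCSI traffic
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsw -PortGroup 'iSCSI-01' -IP 10.0.1.21 -SubnetMask 255.255.255.0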


We will now activate iSCSI on the ESXi servers and connect them to the Synology DS1513+. Go to the Configuration tab and select Storage Adapters.

Choose the iSCSI Software Adapter vmhba33 and press the Properties button on the right in the middle of the screen. On the Dynamic Discovery tab, press Add and configure the IP address of the Synology, in our case 10.0.1.13, on port 3260.

If dynamic discovery doesn't work you can add a static discovery, but then you will need the IQN you wrote down earlier when setting up the Synology iSCSI target. After you are finished with the settings, do a rescan of the host bus adapter to find the new storage connected to the ESXi servers.
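In PowerCLI the same steps look roughly like this; a sketch that assumes an open Connect-VIServer session and reuses the lab's 10.0.1.13 target address:

    # Assumes PowerCLI and an open Connect-VIServer session
    $vmhost = Get-VMHost -Name esx01.lab.local
    # Enable the software iSCSI initiator on the host
    Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true
    # Point dynamic discovery (send targets) at the Synology
    $hba = Get-VMHostHba -VMHost $vmhost -Type iScsi | Where-Object { $_.Model -match 'Software' }
    New-IScsiHbaTarget -IScsiHba $hba -Address 10.0.1.13 -Port 3260 -Type Send
    # Rescan so the new LUN shows up
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba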


Multipath
If your Synology has two or more network interfaces, it supports multipathing on the iSCSI target, so you can build and deploy fail-over and load-balancing solutions. Especially in combination with VMware vSphere and vCenter Server you can build a redundant and high-performing solution.


When you select the Configuration tab and Storage Adapters, you can right-click the Synology device to open the options menu and select Manage Paths from there. We also changed the settings of vSwitch0 and its VMkernel port to handle iSCSI traffic; together with vSwitch1 we now have multiple paths to the Synology DS1513+. You will see that one NIC is handling I/O and one isn't. Choose Round Robin (VMware) as the path selection policy to make them both active.
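The path selection policy can be set from a script as well. A sketch, assuming the Synology LUNs report SYNOLOGY as their vendor string:

    # Assumes PowerCLI and an open Connect-VIServer session
    $vmhost = Get-VMHost -Name esx01.lab.local
    # Switch every Synology-backed LUN on this host to Round Robin
    Get-ScsiLun -VmHost $vmhost -LunType disk |
        Where-Object { $_.Vendor -match 'SYNOLOGY' } |
        Set-ScsiLun -MultipathPolicy RoundRobin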


All paths are now active and servicing iSCSI traffic in a load-balanced fashion. In part 3 of this review we will create some datastores on iSCSI and connect the ESXi servers to the Synology DS1513+ over NFS. While running some tests in the background we were moving data to the Synology while it was also playing a Full HD 1080p movie to a Samsung TV, all without spiking or dropping packets. The MacBook Air had a throughput of 800Mbps over Wi-Fi to the Synology, while the lab environment was pushing and pulling VMDKs from several datastores. But more about that in the next parts of the review.

Which virtual switch to select in vSphere 5

There are a lot of networking choices to be made in VMware vSphere 5. We've got the good ol' vSwitch, the vSphere Distributed Switch (vDS) and finally the Nexus 1000V from Cisco. But which one is best?

vSwitch (vSphere standard switch)

A vSphere standard switch is a virtual switch that is created on a single host. Port groups defined on this vSwitch are local to the host on which they are created. In other words: if you have multiple hosts, you have to make sure that the port groups are identical across all hosts, especially when you want to use vMotion. For vMotion the port group names on the source and target host have to be the same.
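A quick way to spot mismatches is to list the port groups per host and compare the lines; a PowerCLI sketch, assuming an open Connect-VIServer session:

    # Assumes PowerCLI and an open Connect-VIServer session
    Get-VMHost | ForEach-Object {
        # One line per host with its sorted port group names, for easy comparison
        $pg = ($_ | Get-VirtualPortGroup | Select-Object -ExpandProperty Name | Sort-Object) -join ', '
        '{0}: {1}' -f $_.Name, $pg
    }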


(more…)

Cisco UCS: What’s maximum number of VIFs per blade?

As one of the largest Cisco partners in the Netherlands we do a lot of Cisco UCS implementations, and as the first company in the Netherlands with the Cisco Advanced Data Center Architecture Specialization, we're the place in the Netherlands for Cisco UCS troubleshooting. Last week a colleague was called in to troubleshoot a customer problem.

The customer was unable to create a 14th virtual network interface on their Cisco UCS Virtual Interface Card, and 13 interfaces is far from the maximum of 128 or 256 possible virtual interfaces per Cisco UCS VIC. Fortunately the solution appeared to be simple.

In a Cisco UCS environment all centralized intelligence resides in the Fabric Interconnects. When using Cisco UCS Virtual Interface Cards (VICs) you can create virtual network interfaces which can be presented to individual virtual machines. All of these virtual interfaces show up in the Fabric Interconnects; they are called VIFs (Virtual Interfaces) and use VN-Tags.

The number of VIFs per blade is limited by the most restrictive item in the following list:

  • the network connectivity from chassis I/O Module (IOM) to Fabric Interconnect;
  • the Adapter VN-Tag namespace;
  • the OS/BIOS version.

(more…)

vSphere network troubleshooting

During the last month I have been very busy building a new infrastructure at a client site. I'm responsible for the overall technical solution and its basis, a VMware vSphere infrastructure built on five Dell PowerEdge R805s, Dell EqualLogic PS5000 and 6000 storage, and Cisco switches for LAN, DMZ and IP storage networking.

Just before the customer initiated their functional test period we discovered that the overall Windows network performance was slow. We ran several tests, like copying an 8 GB file from local VMDK to local VMDK and from VM to VM, and found that storage performance was no issue but network performance was very slow.

In the last few years that I have been working with virtualization I have always been a fan of a static network configuration. Meaning, when I configure ESX networking I like my network interfaces and physical switch ports to be configured at 1000Mbps full duplex if the switch/network interface combination allows it. The idea is that if you purchase gigabit network interfaces and switches, you know the maximum speeds. So you configure it to run at its maximum capacity, eliminating negotiation overhead and using as much bandwidth as possible purely for data transfer.
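Auditing this across hosts is easy from PowerCLI; a sketch, assuming an open Connect-VIServer session:

    # Assumes PowerCLI and an open Connect-VIServer session
    # Report the configured speed and duplex of every physical NIC per host
    Get-VMHost | ForEach-Object {
        $h = $_
        Get-VMHostNetworkAdapter -VMHost $h -Physical |
            Select-Object @{N='Host';E={$h.Name}}, Name, BitRatePerSec, FullDuplex
    }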

So when we experienced slow network performance I had a colleague check the Cisco LAN switches for errors, drops, packet loss or any other flaw which might indicate a speed or duplex mismatch. None were found, so I assumed that the network configuration was not the issue. But as we know by now, 'Assumption is the mother of all fuck-ups!'

(more…)

Deciphering the Cisco 3750 product code

When designing a virtual infrastructure, an important part of the design is the storage infrastructure, also called the Storage Area Network (SAN). In a SAN based on iSCSI we often use Cisco 3750 switches, but when you have to select the right Cisco 3750 for the job the fun starts. You will be dazzled by the number of different product numbers and will be busy deciphering the product code.

The product code for a Cisco 3750 switch is built up like this (a small decoding sketch follows the breakdown):

WS-C3750a-xxbc-dee

WS stands for Switch
C stands for Catalyst series
3750 stands for the 3750 product line

a >> blank, G, E
blank = classic 3750 switch, 6.5 or 13.1 mpps forwarding rate
G = all ports are gigabit, 35 or 38 mpps forwarding rate
E = enterprise line, 65.5 or 101.2 mpps forwarding rate

xx >> 12, 16, 24, 48
12 = 12 Ethernet ports
16 = 16 Ethernet ports
24 = 24 Ethernet ports
48 = 48 Ethernet ports

b >> T, P, F, D, W
T = Ethernet ports
P = Power over Ethernet
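Based on just the fields decoded so far (the remaining b values and the dee suffix follow after the break), a small PowerShell sketch of the decoding could look like this; the function and the sample code are purely illustrative:

    # Illustrative decoder for the fields covered so far; not an official Cisco tool
    function Decode-Cisco3750 {
        param([string]$Code)   # e.g. 'WS-C3750G-24T-S' (example code)
        if ($Code -notmatch '^WS-C3750(G|E)?-(\d+)(T|P)') { return 'Unknown or undocumented format' }
        $line = switch ($Matches[1]) {
            'G'     { 'all ports gigabit, 35 or 38 mpps' }
            'E'     { 'enterprise line, 65.5 or 101.2 mpps' }
            default { 'classic 3750, 6.5 or 13.1 mpps' }
        }
        $media = if ($Matches[3] -eq 'P') { 'Power over Ethernet' } else { 'Ethernet' }
        '{0}, {1} ports, {2}' -f $line, $Matches[2], $media
    }
    Decode-Cisco3750 'WS-C3750G-24T-S'   # all ports gigabit, 35 or 38 mpps, 24 ports, Ethernet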

(more…)

Emulex releases 10Gb/s Ethernet and iSCSI adapter

Today Emulex announced the general availability of their new OneConnect Universal Converged Network Adapter (UCNA) product family; specifically the OCe10102-I 10Gb/s Ethernet iSCSI Adapter and the OCe10102-N 10Gb/s Ethernet Adapter are very interesting.

The Emulex OneConnect UCNA is a single-chip, high-performance 10Gb/s Ethernet adapter that provides connectivity for network and storage traffic over one multi-function server adapter. Unlike first-generation CNAs that only provide FCoE convergence, the Emulex OneConnect UCNA technology provides optimized performance for TCP/IP, FCoE and iSCSI protocols.

The OneConnect UCNA comes in three different versions (N, I and F):

  • OCe10102-N: 10Gb Ethernet Adapter;
  • OCe10102-I: 10Gb iSCSI Adapter;
  • OCe10102-F: 10Gb FCoE Adapter.

(more…)

Troubleshooting VM network connections

When I am troubleshooting I like to have a list of items I can check, either in my head or on paper. Among VMware's knowledge base articles, I found one about troubleshooting VMs that have network connection issues.

The article provides items you can check when a VM is having connection issues, and with each item a link to other helpful articles.

(more…)

VMware View advanced networking

The last few months I have been busy designing, building and testing a new VMware View solution for our own Support Center. In this Support Center we do support and system administration for some of our biggest clients. One of the challenges is the use of desktop hardware and the limited space of a call agent's or administrator's desk. Many of my colleagues support multiple client sites and need different PCs for each client. So in 2008 one of my respected colleagues thought of a great solution and advised implementing a VMware VDI solution.

The idea was to create a pool of virtual desktops for each client site and supply the call agents and system administrators with a standard physical desktop from which they can access one or more virtual desktops and do the standard office work (Word, Outlook, etc.) at the same time, saving the space needed for all those desktops, minimizing heat and power consumption, and improving the working conditions in the process.

We bought four DELL PowerEdge 2950 II's with two quad-core CPUs and 64GB of memory each, and a DELL EqualLogic 5000E iSCSI SAN to build this all-virtual VDI solution.

One of the biggest challenges was to separate all client networks, so we assigned a VLAN to each of them. But this raised a new challenge, as I discovered during the implementation: because every client had their own VLAN and there was no routing between them, how could we connect to the virtual desktops?

(more…)

Getting rid of my frustration ………

Most of the time when I’m frustrated it helps to play some squash, ride my bike or vent my frustration on this blog. As all walls in the area have been smashed to bits due to frustrations in the past and it is too cold to ride my bike, I will try the last one.

Yesterday I attended a seminar from work together with Edwin, Anne Jan and Alex. The subject was virtualization with Citrix and we were invited to keep a good balance between Citrix and VMware but we could not give a presentation due to time constraints. The people who organized this event know us pretty well and know (and share) our love for VMware so we were asked not to mess up the presentation and bash the Citrix guy. And so we did………

(more…)

VMWorld Europe 2009 – Partner day

We just enjoyed the General Session of Partner Day 2009 and the message is: 'the virtualization business is still growing', '2009 is all about client virtualization' and 'get up, register opportunities, close deals, dig deep'.

The keynote was presented by Paul Maritz, CEO of VMware. He stated that we can achieve change by two means, evolution and revolution, and that evolution is best as it tends to last longer. VMware is the platform to achieve this evolution. He revealed a few public secrets: VMware vSphere, vShield Zones and VMware vCenter Server Heartbeat. VMware vSphere is the new name for ESX, but we knew that, and blogged on it, already. VMware vShield Zones is a security initiative based on VMware VMsafe. VMware vCenter Server Heartbeat is VMware's new feature to create a highly available vCenter Server, mostly for implementations where vCenter is still on a physical server.

(more…)

How to fit 100 people in a 9 square foot office?

The solution to that problem isn't hard: just build virtualized workspaces. The last few weeks I have had several dreams about what such a place could look like. Imagine a small one-person office completely surrounded in glass, with lots of plants in it like a greenhouse. Would you feel lonely in such a place? Just project your team members on the glass surrounding you, so you can build your surroundings like you have arranged and rearranged your desk for the last couple of years. Just want video without audio? No problem, mute your colleagues when you have to make a phone call, so you don't bother them and they don't bother you at that moment.

This way you can work with several colleagues, in mixed composition, in a small office, interacting with each other when needed. We don't want our coffee to be virtual, or do we? One machine that can give out any juice you need at that time? Sounds lovely to me.

Most of the techniques needed to build a virtual office are already at hand, so why not combine them?

(more…)

Creating portgroups with PowerShell

I must be a workaholic. I was browsing my laptop for some movie and I came across a folder with all kinds of plugins for VirtualCenter, things like addPortgroup and other stuff.

Although it’s very handy to use those plugins I like to be able to these kind of things from the commandline, so I started some PowerShell script (here is where the workaholic starts) to create portgroups on all your VMware hosts that are known by VirtualCenter.

(more…)

HA problem checklist

The last few weeks I have been very busy solving HA issues at a client site. As you may have read, I solved the problems by swapping out USB sticks and troubleshooting BIOS settings. Now my colleagues have asked me if I could write down all the checks I performed (together with VMware support) to track down these HA issues.

(more…)

VMware employee confirms DPM support in next release

Last night I was reading some virtualization blogs (vblogs ;-) ) and an article on boche.net caught my eye. This article stated that a VMware employee, Richard Garsthagen, revealed that after a very long period of waiting, VMware has decided to support Distributed Power Management (DPM) in the next release.

[Crowd is going wild]

It’s not clear what this means exactly. The next release? Is this Update 4 or VI4?

(more…)

Cisco and VMware networking

Looking for information about iSCSI, Cisco and VMware, I stumbled onto a document which I want to share with all of you. Want to know about vSwitches, port groups, security, EtherChannels, scalability, performance, VLAN tagging, N-Port ID Virtualization and iSCSI implementations?

In 'VMware Infrastructure 3 in a Cisco Network Environment' all the ins and outs of VMware networking are described: 90 pages of pure networking wisdom.

Another valuable document I found is the VMware iSCSI Design Considerations and Deployment Guide. When you are designing and/or implementing an iSCSI solution, this is a document you should read.