What server hardware to choose for your virtualization project?
A question you should definitely ask yourself is: are we going to scale up, scale out, or do both? With scaling up you put more CPU power, memory and network bandwidth into each physical server to accommodate more virtual servers on it. With scaling out you simply add another physical server to the datacenter racks, with the same dimensions as the existing ones.
The pros and cons of scaling out versus scaling up.
Pros of scaling out are that you just buy another box and spread the workload across one more host. You can have up to 32 physical servers in a VMware DRS cluster, and the more servers there are in a cluster, the more efficiently DRS will work. The smaller per-host overhead for High Availability (n+1 physical servers) is also a pro for using more servers.
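To make the n+1 argument concrete, here is a minimal sketch of how the HA capacity reserve shrinks as the cluster grows. Only the n+1 policy and the 32-host DRS limit come from the text above; the function name is mine.

```python
# With n+1 admission control, one host's worth of capacity is held in
# reserve for failover, so the reserved fraction of the cluster is 1/N.

def ha_overhead_fraction(num_hosts: int) -> float:
    """Fraction of total cluster capacity reserved for one host failure."""
    if num_hosts < 2:
        raise ValueError("n+1 needs at least 2 hosts")
    return 1.0 / num_hosts

for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} hosts -> {ha_overhead_fraction(n):.1%} reserved for HA")
```

With 2 hosts you sacrifice half your capacity to failover headroom; at the 32-host cluster maximum the reserve is only about 3%, which is exactly why more, smaller hosts make HA cheaper.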
Cons of scaling out are that you will need to buy more licenses, pay for more hardware support, use more rack space in the datacenter, and go through the physical-server ordering workflow, which takes time.
What server types will you use for scaling out? Three of the most used servers are:
Pros of scaling up are that the price per vServer drops, you don't use more datacenter space, hardware support costs won't rise, and as long as you don't add CPU power the license costs will stay the same. You also have less hardware to manage in your datacenter.
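The "price per vServer drops" claim is easy to check with back-of-the-envelope arithmetic. The server prices and consolidation ratios below are made-up assumptions for illustration, not vendor quotes:

```python
# Illustrative cost per virtual server: a bigger box costs more, but
# if it hosts proportionally more VMs, the per-VM price falls.

def price_per_vm(server_cost: float, vms_per_host: int) -> float:
    """Hardware cost divided over the VMs consolidated onto the host."""
    return server_cost / vms_per_host

small = price_per_vm(server_cost=6000.0, vms_per_host=15)  # 32GB box (assumed)
big = price_per_vm(server_cost=8500.0, vms_per_host=30)    # 64GB box (assumed)
print(f"small host: {small:.2f} per VM, bigger host: {big:.2f} per VM")
```

Under these assumed numbers the doubled-memory box costs roughly 40% more but hosts twice the VMs, so the per-VM price drops from 400 to about 283.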
Cons of scaling up are that when you run more and more virtual servers on a physical server and want to perform maintenance, you don't want to sit waiting for maintenance mode until 80 or more virtual servers have been moved off. The same goes for when a physical server dies and High Availability kicks in: you want those virtual servers back at work asap.
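A rough sketch of why dense hosts hurt maintenance windows: evacuation time grows with VM count. The per-migration time and the concurrent-migration limit below are assumptions for illustration; real numbers depend on vMotion network bandwidth and VM memory size.

```python
import math

# Approximate time to migrate all VMs off a host entering maintenance
# mode, assuming a fixed number of concurrent migrations running in
# waves. Both parameters are illustrative assumptions.

def evacuation_minutes(num_vms: int, minutes_per_vm: float = 1.5,
                       concurrent_migrations: int = 4) -> float:
    """Estimated minutes to empty one host of num_vms virtual servers."""
    waves = math.ceil(num_vms / concurrent_migrations)
    return waves * minutes_per_vm

print(f"20 VMs: ~{evacuation_minutes(20):.1f} min")
print(f"80 VMs: ~{evacuation_minutes(80):.1f} min")
```

Under these assumptions a 20-VM host empties in under 10 minutes, while an 80-VM host ties up a maintenance window of half an hour per host.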
What server types will you use for scaling up? Three of the most used servers are:
Bottom line: try to find the sweet spot where costs meet functionality. I often use servers with 2 quad-core CPUs and 32GB memory (slowly moving to 64GB now that memory prices are dropping). Why not use 1U rack servers, you ask? Of course you can, but bear in mind that you have to fit NICs and HBAs in that little box. With the standard setup we often choose, 1U servers just don't have enough space to hold 2 or 3 quad-port NICs and also fit 2 HBAs if you use a Fibre Channel SAN.
You may be wondering where the blade servers are in this comparison; I will blog about the choice between regular rack servers and blade servers in the coming days.
Edit: I have added the AMD server types to the links.