Hi all,
I'm beginning the process of upgrading from vSphere 4.1 Essentials Plus to 5.5, also Essentials Plus.
My question is not directly related to the upgrade itself; I'd rather like to ask for some advice from more experienced users. I'll try to explain my problem.
Our current datacenter is composed of three physical servers, each with (among other hardware):
- 2 physical CPUs
- 32 GB RAM
- 6 × 1 Gbit/s NICs
We also have two switches to provide network redundancy between hosts.
The current network layout, as far as the ESX hosts are concerned, does not use any VLANs. This is the current network approach per host:
- The NICs are teamed in pairs, forming 3 NIC groups
- Each NIC group is connected to both switches (one NIC to each switch) for availability
- There are basically 3 separate physical networks (the sketch below shows how to inspect this layout)
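For reference, once the hosts are on 5.5, the per-host layout can be checked from the ESXi shell. A minimal sketch (the output and names will of course vary per host):

  # list the physical NICs and their link speeds
  esxcli network nic list
  # list the standard vSwitches and their uplinks
  esxcli network vswitch standard list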
The problem with the current layout is that almost all communication is limited to 1 Gbit/s.
Another problem is the lack of scalability that comes from not having VLANs.
I'm using this upgrade window not only to move vSphere to version 5.5, but also to try to improve network performance and scalability.
One of the main objectives is to get more than 1 Gbit/s of throughput. Here is my thinking on this:
- I will aggregate the NICs into one group of 4 NICs and another of 2 NICs. This would give me a total of 2 Gbit/s of aggregate bandwidth on the Management Network and 4 Gbit/s on the remaining networks (see the sketch right below). QoS could also be applied, giving different layouts, but I'm trying to keep it simple for you (the reader).
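Roughly, in 5.5 esxcli terms, what I have in mind per host would look like this (the vSwitch and vmnic names are just placeholders, not our real ones):

  # 2-NIC team for the Management Network (assuming vSwitch0 with vmnic0/vmnic1)
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
  # 4-NIC team for everything else (assuming a new vSwitch1 with vmnic2..vmnic5)
  esxcli network vswitch standard add --vswitch-name=vSwitch1
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic5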
As far as I know, there are several possible approaches:
1) Use the Cisco Nexus 1000V: AFAIK this is not an option, since it requires Enterprise Plus licensing, which is out of reach for us ;(
2) Create a non-LACP (static) link aggregation between the multiple NICs and the two switches: our switches are HP 2510s, which do not support inter-switch trunking, so this cannot be done.
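(For completeness: if such a trunk could be formed, e.g. with all four uplinks on a single switch configured as a static trunk, the vSwitch side would need its teaming policy set to IP hash, along these lines, with the vSwitch name again a placeholder:

  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash

IP hash teaming requires a matching static EtherChannel/trunk on the physical switch side, which is exactly what the 2510s can't span across two chassis.)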
So I'm left with what are, in my view, two bad approaches:
3) Create two standard vSwitches, splitting the group of four NICs into two pairs, and connect each vSwitch to a different physical switch. Availability would then have to be handled inside each virtual machine, which would need two vNICs, one connected to each vSwitch. This would get me a 2 Gbit/s rate (a sketch follows below).
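A minimal sketch of that option (again, all names are placeholders):

  # one vSwitch per physical switch, two uplinks each
  esxcli network vswitch standard add --vswitch-name=vSwitch-A
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch-A --uplink-name=vmnic2
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch-A --uplink-name=vmnic3
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-A --portgroup-name=VM-Network-A
  # ... and the same again for a vSwitch-B (vmnic4/vmnic5, VM-Network-B) cabled to the second switch

Each VM would then get one vNIC on VM-Network-A and one on VM-Network-B.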
4) Use a single vSwitch for all 4 NICs, but connect it to only one physical switch. I would then configure the second switch as a standby with the same configuration as the active one. This would require manual intervention if the active switch failed, which would be very unpleasant.
This is where I need your help. Is there anything I'm missing? Is there anything else I can do to take better advantage of our current infrastructure?
If you've made it this far, thank you for your patience.
Thanks to all,
Rui Santos