I am wondering if anyone has any experience/advice on the best way to accommodate the HP Virtual Connect (VC) modules into our Cisco DC environment.
We have a new HP chassis about to go in with two VC modules. It is my understanding that these modules, although physical, behave much like the soft switches in the VMware environment. In other words, they are not traditional L2 networking devices: they don't pass BPDUs, they appear to the upstream switch as a host, and so on. They can't do cross-stack LACP, and they load-balance across VC modules using a NIC-teaming-like feature that can take up to five seconds to fail over.
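Given that host-like behavior, I'm assuming the upstream Cisco ports would be configured as edge ports rather than as switch-to-switch links. A rough sketch of what I have in mind for one uplink (interface and VLAN numbers are just placeholders, not our actual design):

```
interface GigabitEthernet1/0/1
 description Uplink to HP VC module 1
 ! VC presents itself as a host, so treat the trunk as an edge port
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 spanning-tree portfast trunk
 ! VC shouldn't send BPDUs; err-disable the port if one ever shows up
 spanning-tree bpduguard enable
```

Happy to be corrected if portfast trunk plus BPDU guard isn't the right treatment here.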
I have two specific questions. First, does it make sense to connect these to the access layer rather than the distribution layer (which was my original intent)? I'm not sure I feel comfortable connecting a non-networking device directly to the distribution layer. What are the pros/cons of going to the distribution layer with the VC modules?
Second, if going to the access layer, would a 3750 stack be sufficient for redundancy? The idea would be to split the uplinks across the stack members, as opposed to going into separate physical Cat6ks.
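Since the VC modules can't do cross-stack LACP, my assumption is that each VC module would channel to ports on a single stack member, so losing one member takes down one module's uplinks and VC's own failover handles the rest. A sketch of what that might look like on the 3750 stack (interface and channel numbers hypothetical):

```
! Uplinks from VC module 1 land on stack member 1
interface range GigabitEthernet1/0/1 - 2
 description Po10 to HP VC module 1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 10 mode active
!
! Uplinks from VC module 2 land on stack member 2
interface range GigabitEthernet2/0/1 - 2
 description Po20 to HP VC module 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 20 mode active
!
interface Port-channel10
 spanning-tree portfast trunk
interface Port-channel20
 spanning-tree portfast trunk
```

Interested in whether anyone has run this layout and how the ~5-second VC failover behaved when a stack member was lost.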
Long term, we plan to migrate to the Nexus platform: N7K distribution, 5K end-of-row, 2K ToR. I want to make sure we have a consistent server access architecture, so that whatever we implement now will migrate as easily as possible to the future DC architecture, even if it means having the server team order traditional Layer 2 switch modules for the HP chassis.
Any feedback on what others have done and/or advice is appreciated.