Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to learn about the design, installation, and deployment of blade servers and associated network components with Cisco expert Ken Coley. Ken is a technical marketing engineer for the Cisco Data Center Business Unit and is responsible for Cisco's Ethernet blade switches. He works closely with Cisco's blade partners such as Dell, HP, and IBM. During his nine-year career at Cisco, Ken has provided technical marketing support for a variety of products, including wireless wide-area access and industrial Ethernet applications. He has a wealth of experience ranging from hands-on network operations to supporting deployment of multisite topologies.
Remember to use the rating system to let Ken know if you have received an adequate response.
Ken might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through March 7, 2008. Visit this forum often to view responses to your questions and the questions of other community members.
When deploying a blade server with multiple Cisco switches internally (the 3020 for HP in my case) what is the best practice for connecting these switches together? Is there even a need for this?
Generally (90% or greater), customers do not use the internal cross-connects on the HP enclosure; the IBM enclosures don't even have this feature. Most customers choose the Cross or U design for connecting the internal switches to the upstream aggregation layer. The Cross design sends half of the uplinks to one upstream switch and the other half to the other upstream switch. The U design runs all uplinks from one internal switch to one upstream switch, and all the uplinks from the other internal switch to the other upstream switch. In this design the switches are independent, and the servers see two different switches. Servers should be configured for NIC teaming. Generally, I split the backplane in half: half of the servers send data to the left switch and half to the right switch, with the other switch acting as standby. This lets you use both switches at the same time. Also, we have a feature called Layer 2 Trunk Failover (Link State Tracking) that monitors the uplinks and drops the downlinks to the servers if a switch loses its uplink connections.
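As a rough sketch, Link State Tracking on a Catalyst blade switch such as the 3020 ties server-facing downlinks to the uplink ports; the interface ranges below are assumptions for illustration, so substitute your own uplink and downlink ports:

```
! Define the tracking group (assumed group number 1).
link state track 1
!
! Assumed uplink ports to the aggregation layer.
interface range GigabitEthernet0/17 - 20
 link state group 1 upstream
!
! Assumed server-facing downlinks. If all upstream ports in
! group 1 go down, these ports are err-disabled so the server
! NIC team fails over to the other internal switch.
interface range GigabitEthernet0/1 - 8
 link state group 1 downstream
```

With this in place, a total uplink failure on one internal switch is visible to the servers immediately, instead of the servers continuing to send traffic into a switch with no path upstream.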
I am looking at the Cisco Fibre Channel Switch for HP, and I have a limited number of Domain IDs, how do I deploy it?
Bill, you can place the internal FC blade switches into NPV mode. In this mode, the switch will look like an HBA to the upstream MDS, which allows you to save domain IDs. Also, if you use VSANs, you can break up your data center into clusters and re-use the IDs. That's another option for you.
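A minimal sketch of enabling NPV, assuming SAN-OS-style syntax on the embedded FC switch and an MDS core; check your release notes, since enabling NPV is disruptive:

```
! On the upstream MDS core: enable NPIV so a single F port
! can accept multiple fabric logins from the NPV edge switch.
conf t
 npiv enable

! On the embedded FC blade switch: enable NPV mode.
! Caution: this typically erases the configuration and
! reboots the switch, so plan a maintenance window.
conf t
 npv enable
```

After the reboot, the blade switch no longer consumes a domain ID; its server logins are proxied through the core switch's domain.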
All blade switches have an internal connection to the chassis manager. For IBM, the chassis manager is called the Advanced Management Module (AMM); for HP, it's the Onboard Administrator (OA); and for Dell, it's the Chassis Management Controller (CMC). Cisco Ethernet switches have internal 100 Mbps connections to these controllers. For the IGESM, this was implemented as a switchport (interfaces Gig0/15 and Gig0/16), and the VLAN 1 interface is configured to be managed by the AMM; the keyword "management" identifies this. For all other blade switches, the management interface is driven off the switch CPU and is called interface Fa0. If you want to have your switches managed internally, use these interfaces and make sure your "ip default-gateway x.x.x.x" points to the router attached to the chassis management network. If you want in-band management, you can create another management VLAN (SVI) and assign an IP address to it. You will also need to change your "ip default-gateway" command to reflect the router attached to the network the switch uplinks are on. You cannot have both at the same time, so you have to pick. Personally, I like sending my management traffic on the out-of-band management network. It is more secure, doesn't interfere with user traffic, and is available in most cases even if the switch fabric is down.
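As a sketch of the in-band option described above, the VLAN number and IP addresses here are assumptions; use your own management VLAN and subnet:

```
conf t
 ! Assumed management VLAN 100 carried on the switch uplinks.
 interface Vlan100
  ip address 10.1.100.5 255.255.255.0
  no shutdown
 exit
 ! Point the default gateway at the router on the management
 ! VLAN (replaces any gateway aimed at the chassis-manager
 ! network, since only one default gateway is active).
 ip default-gateway 10.1.100.1
```

For the out-of-band option, the same ip default-gateway command would instead point at the router attached to the chassis management network, and no SVI beyond the management interface is needed.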
Do you have any experience of HP's "Virtual Connect" technology and what the pros/cons are as opposed to a Cisco blade switch?
This is a good topic that would require a lot of explanation. Basically, the trade-off is simplicity versus flexibility. VC gives the server admin an easy tool for managing HP's embedded interconnects. However, it only works with HP hardware, and troubleshooting can be a problem because VC provides very few tools for diagnosing problems.
I am running VMWare, and they tell me I need 6 or 8 NICs on my servers. How do I do this?
In the blade server area, adding NICs is not as simple as in rack-mount servers. Generally, blade servers are limited to two or three daughter-card slots. If you are using Fibre Channel for storage, that will use up one of those slots, which leaves you with one, maybe two, additional slots. Regardless of the number of NICs you physically have on the server, you can create trunk ports from the physical server to the physical switch. VMware implements a virtual switch within the hypervisor. You can create virtual NICs per virtual server and map them to the uplink (physical server NIC) trunks. Each trunk can carry more than one VLAN and therefore more than one virtual NIC.
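On the switch side, the server-facing port is simply configured as an 802.1Q trunk carrying the VLANs the virtual switch needs. A sketch, where the port number and VLAN list are assumptions:

```
conf t
 ! Assumed server-facing port connected to an ESX host NIC.
 interface GigabitEthernet0/3
  description ESX host vmnic0
  ! Some blade switches only support dot1q, in which case the
  ! encapsulation command may be unnecessary or unavailable.
  switchport trunk encapsulation dot1q
  switchport mode trunk
  ! Assumed VLANs for the VM port groups on the vSwitch.
  switchport trunk allowed vlan 10,20,30
  spanning-tree portfast trunk
```

On the VMware side, each port group on the virtual switch is tagged with one of the allowed VLAN IDs, so several virtual NICs can share one physical NIC.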
I was planning on ordering 3130X switches and I was wondering whether VACLs (VLAN access maps) are a supported feature. Thanks in advance.
Starting with the 12.2(44)SE release, all CBS30x0 products include IP Base features, including VACLs. This feature is NOT supported on the IGESM or CGESM.
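For reference, a VACL is built from an access list, a VLAN access map, and a filter applied to a VLAN list. A sketch using standard IOS syntax (the names, VLAN number, and matched traffic are assumptions for illustration):

```
conf t
 ! Classify the traffic to drop (here, Telnet).
 ip access-list extended BLOCK-TELNET
  permit tcp any any eq 23
 exit
 ! Sequence 10: drop anything matching the ACL.
 vlan access-map NO-TELNET 10
  match ip address BLOCK-TELNET
  action drop
 exit
 ! Sequence 20: forward all other traffic (VACLs end with an
 ! implicit deny, so an explicit forward entry is required).
 vlan access-map NO-TELNET 20
  action forward
 exit
 ! Apply the map to the assumed VLAN.
 vlan filter NO-TELNET vlan-list 50
```

Unlike a port ACL, the VACL filters traffic bridged within the VLAN as well as traffic routed into or out of it.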