Cisco Support Community

New Member

Blade switch power and cooling

Considering that we are facing power and cooling issues with blade servers, does it make sense to use integrated blade switches, which will increase the power draw of the blade chassis, or to use high-density Top of Rack switches and connect them with pass-through modules?


New Member

Re: Blade switch power and cooling

Hi Bill,

An interesting and important discussion.

Since most blade switches draw relatively little power (45W for the IBM Cisco blade switch and 60W for the current HP Cisco blade switch), there would be minimal if any savings in going with pass-through over internal switching, and the comparison may even favor internal switching.

As an example, a rough power calc of 3 IBM BladeCenters in a rack would be something like this:



Internal Blade switches to 6500 dist:


3 IBM BladeCenters in a rack, each with 2 IGESMs = 6 IGESMs x 45W each = 270W in the rack.

4 uplink ports per blade switch times 6 switches = 24 ports required in the upstream 6500. A single 6748 line card is rated around 300W (approx), so if you divide the power usage in half (since only 24 of the 48 ports are in use), that gives 150W in the 6500 needed to connect to the blade switches:

150W (6748) + 270W (blade switches) = 420W

(Note that I'm not claiming a 6748 draws only half its rated power when half its ports are in use; this pro-rating is just a convention for comparing pass-through and internal switching.)
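That pro-rating convention (scaling a line card's rated power by the fraction of its ports in use, purely for comparison) can be sketched as a small helper; the function name is illustrative, and the wattages are the approximate figures used above:

```python
def prorated_watts(rated_w, ports_used, ports_total):
    """Scale a line card's rated power by the fraction of ports in use.

    This is only a comparison convention, not a claim about actual draw.
    """
    return rated_w * ports_used / ports_total

# 6748: ~300W rated, 24 of 48 ports used for blade-switch uplinks
print(prorated_watts(300, 24, 48))  # 150.0
```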



Pass thru to ToR 4948 to 6500 dist:


3 IBM BladeCenters in a rack (14 servers in each, 2 NICs per server = 84 server ports), with pass-through to two 4948-10G ToR switches (300W each) = 300W x 2 = 600W in the rack. (The pass-through modules do draw some power, since they contain components, but it's small, so we'll call it zero watts.)

4 x 10G uplinks (2 out of each 4948) going to a single 6704 line card (around 300W)

300W (6704) + 600W (4948's) = 900W



Pass thru to 6500 dist:


3 IBM BladeCenters in a rack (14 servers in each, 2 NICs per server = 84 server ports), with pass-thru, again, we'll say zero watts for PT modules.

Two 6748 line cards in the 6500, using 42 ports on each card (assuming around 300 watts per card, using only 42 of the 48 ports pro-rates to around 260W per 6748 for the pass-through connections):

260W * 2 (6748's) = 520W


So in this case, internal blade switching is the lowest of the three options.
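The three rough totals above can be reproduced in a few lines; all wattages are the approximate figures from the scenarios, using the same half/pro-rated line-card convention:

```python
# Scenario 1: internal blade switches (IGESMs) uplinked to a 6500
igesm_w = 6 * 45                  # 6 IGESMs at ~45W each = 270W
dist_6748_half = 300 * 24 / 48    # half of a ~300W 6748 for 24 uplink ports = 150W
internal = igesm_w + dist_6748_half        # 420W

# Scenario 2: pass-through to two ToR 4948-10Gs, 10G uplinks to a 6704
tor_4948 = 2 * 300                # two 4948-10G at ~300W each = 600W
dist_6704 = 300                   # one ~300W 6704 line card
pass_thru_tor = tor_4948 + dist_6704       # 900W

# Scenario 3: pass-through straight into two 6748s in the 6500
per_6748 = 260                    # ~300W x 42/48 ports, rounded down as in the post
pass_thru_direct = 2 * per_6748            # 520W

print(internal, pass_thru_tor, pass_thru_direct)
```

Under these assumptions, internal switching comes out lowest, followed by pass-through direct to the 6500, with pass-through to ToR 4948s highest.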

Keep in mind these are really rough numbers, and they could be recalculated based on uplink/bandwidth requirements out of the rack, different modules/switches, etc., but the end result is that internal switching does not necessarily equate to increased power requirements for a deployment.

In my own experience, the decision to go pass-through or internal switching usually comes down to other factors, such as a desire to reduce rack cabling, server bandwidth requirements, or network teams struggling with the increased number of switches to integrate and manage. So there are definitely many reasons to consider one over the other, but green has not been a big factor I've actively encountered with blade switching to date.

Thanks, Matt