According to Cisco's data center design documentation, the oversubscription ratio at the server access layer should be as close to 1:1 as possible -- perhaps 2:1 or 3:1.
Now, to achieve that kind of low oversubscription ratio on, say, a 6509 hosting 288 servers (six 48-port blades), one would need 80 Gbps of uplink bandwidth to the core (eight separate 10-Gbps L3 uplinks from a routed access layer, for example), and even that only gets the ratio down to 3.6:1.
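To sanity-check the math, here's a quick back-of-the-envelope calculation in Python (a rough sketch that assumes every server connects at 1 Gbps; adjust port_speed_gbps for anything else):

def oversubscription_ratio(servers, port_speed_gbps, uplink_gbps):
    # Ratio of total server-facing bandwidth to uplink bandwidth.
    return (servers * port_speed_gbps) / uplink_gbps

# 288 servers on six 48-port blades, eight 10-Gbps uplinks:
print(oversubscription_ratio(288, 1, 80))   # 3.6
# The client's current setup: ~300 servers behind 2 Gbps of uplink:
print(oversubscription_ratio(300, 1, 2))    # 150.0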
That seems outlandish on its face. The client I am working with has about 300 servers behind just 2 Gbps of uplink (roughly 150:1 oversubscription, assuming 1-Gbps server ports) and does not experience any noticeable application latency. How and why would I recommend that he upgrade to 80 Gbps?
I do understand that this oversubscription methodology is not an exact science; it depends on the types of servers and the volume of application traffic they generate. Still, how do I present the client with a model that justifies over $44,000 in added cost just to implement the uplinks?
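For what it's worth, one framing I've considered is cost per server rather than a lump sum (this assumes the full $44,000 covers all eight uplinks for the 288-server chassis):

uplink_cost = 44_000          # quoted cost for the eight uplinks
servers = 288                 # servers the 6509 would host
print(round(uplink_cost / servers, 2))   # ~152.78 dollars per server

Even spread that way, it's a hard sell when the current design shows no symptoms.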
Does anyone here have real-world experience in designing a data center from scratch? What considerations were made and what conclusions were drawn regarding oversubscription?
Thank you all ahead of time for your input.