
forwarding latency for various switches

STUART KENDRICK
Level 1

i want to quantify the latency in my network, as part of an effort to determine whether or not we can deploy a new application

so i've grabbed my Finisar THG box (a hardware packet sniffer with an internal clock accurate to 20ns), a stack of various switches, and some cables. i plug the two ports of the THG box into a switch, send 1,000 pings at a specified interval from one THG NIC through the switch to the other THG NIC, subtract the packet insertion time, average the resulting pile of numbers, and come up with a figure for the forwarding latency (aka decision time) of the device. see my results below
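for anyone who wants to replicate the arithmetic: packet insertion time is just frame bits divided by line rate, and forwarding latency is the average of the measured deltas with that constant removed. a minimal sketch of the math i'm doing (the names are mine, not Finisar's):

def insertion_time_ns(frame_bytes, link_bps):
    # time the frame spends going onto the wire: bits / line rate
    return frame_bytes * 8 / link_bps * 1e9

def forwarding_latency_ns(measured_deltas_ns, frame_bytes, link_bps):
    # average one-way delay with the serialization constant removed
    t_insert = insertion_time_ns(frame_bytes, link_bps)
    corrected = [d - t_insert for d in measured_deltas_ns]
    return sum(corrected) / len(corrected)

# e.g. a 64-byte frame on a gigabit port spends 512 ns on the wire
print(insertion_time_ns(64, 1_000_000_000))  # 512.0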

now i want a sanity check. Cisco must perform this same test (possibly with fancier hardware, like SmartBits boxes) routinely on their gear ... where do they post these results? i've been poking around www.cisco.com without success

for interest, here are my numbers (the insertion-time arithmetic i subtracted follows the list):

Catalyst 4003 100BaseT ports: 3170ns

Catalyst 4003 1000BaseSX ports: 705ns

[same forwarding latency for 64 byte and for 1518 byte packets]

Cat 4503 1000BaseSX 64 byte: 3300ns

Cat 4503 1000BaseSX 1518 byte: 7120ns

[why does the forwarding latency change with packet size? remember, i've already subtracted packet insertion time]

Cat 6506 1000BaseSX 64 byte: 5000ns

Cat 6506 1000BaseSX 1518 byte: 7120ns

Datacomm Aggregation tap 100Mb: 320ns

In-Line Finisar 100Mb tap: 0ns

NetGear 100Mb hub: 330ns
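for reference, the insertion times i subtracted (frame bits / line rate):

64 bytes at 100 Mb/s: 64 * 8 / 100e6 = 5,120 ns
1518 bytes at 100 Mb/s: 1518 * 8 / 100e6 = 121,440 ns
64 bytes at 1 Gb/s: 64 * 8 / 1e9 = 512 ns
1518 bytes at 1 Gb/s: 1518 * 8 / 1e9 = 12,144 ns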

and finally, we ran a test across our production network (which translates into two access-layer Cat 4506s, two distribution layer Cat 6506s, one core layer Cat 6506, plus ~500m of cabling) ... and came out with ~20us of latency, exclusive of packet insertion time. good stuff
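as a back-of-envelope plausibility check (my assumptions, not measurements):

5 switch hops * ~3,000-7,000 ns each = 15,000-35,000 ns of decision time
500 m of fiber * ~5 ns/m (light in glass) = ~2,500 ns of propagation

so ~20,000 ns for the full path lands comfortably inside the expected band.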

-where does Cisco post the numbers they have recorded?

-with whom could i have a conversation about what drives forwarding latency in different Catalyst models? why, for instance, do different packet sizes change decision time in some models but not in others?

i'm wanting both a sanity check and a deeper understanding

--sk

stuart kendrick

fhcrc

7 Replies

jackyoung
Level 6

Normally a third party will carry out those tests per box, e.g. the Tolly Group. I believe Cisco runs such tests internally before they release a product. However, those results may not be open to the public, so we have to wait for a test report from a third party or arrange the testing via your service provider.

I just learned from another NetPro that there is a Safe Harbor service from Cisco to simulate a complex environment and test it in a lab. Please check below:

http://www.cisco.com/application/pdf/en/us/guest/netsol/ns504/c643/cdccont_0900aecd802c08fc.pdf

got it, i can see how 'Safe Harbor' would be an option for me, assuming i could bear the cost. thanx for the pointer.

--sk

scottmac
Level 10

I don't believe you'll ever see those numbers published from a vendor (Cisco or any other).

Think marketing: with hard numbers being public information, the marketing folks have a lot less "wiggle room" when producing their materials, and it hands competitors a fixed target for both their engineering and their marketing.

Think back to ~1990 when Kalpana introduced Ethernet switching. Their product (a cut-through switch) produced latencies in the 20s while everyone else's store & forward switches were in the 90s.

What happened? The marketing folks from the other companies (including Cisco) re-defined the term "latency" in their marketing materials such that it favored their product (or at least made it look like a less-significant difference). It was wild ... some wanted it to mean first byte in / first byte out, others wanted first byte in / last byte out ... and everything in-between ... whatever worked for their product.
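To see how much room the definitions left: the gap between first-byte-in/first-byte-out and first-byte-in/last-byte-out is the frame's own serialization time. Rough numbers (mine, for illustration) on the 10 Mb/s Ethernet of that era:

64-byte frame, full serialization: 64 * 8 / 10e6 = 51.2 us
bytes a cut-through switch waits for (the destination address): 6 * 8 / 10e6 = 4.8 us

So the same box could honestly be quoted anywhere from ~5 us to ~50+ us depending on which bytes you timestamp.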

Cisco even came up with "Fragment-Free switching" to capture the market generated by all the other vendors' FUD campaigns ("the fast switching of cut-through {sorta}, with the assurance of runt/fragment-free forwarding offered by S&F").

Even numbers from the third-party groups can be suspect. Our lab (I worked at a different place then) had much of the same equipment used by the third-party groups ... and we came up (in some cases) with similar numbers, and sometimes with different ones ... without the spin, the products usually didn't produce the same happy results.

When the manufacturer is paying for the third-party to do the testing, the analysis tends to have a much more positive lean to it.

To finish it all off, in most cases, performance (assuming it's within acceptable limits) is only a small part of the purchasing decision. In many/most cases, pre/post sale support, stability and longevity of the manufacturer, implementation costs, interoperability, post-installation management, and other factors take the forefront of "why do I wanna buy this thing from those guys."

Meaning, would you rather buy the hottest box from a new company that may or may not be around next year, may or may not integrate well with your existing infrastructure, usually doesn't integrate with the existing management platforms, and may or may not have a decent support organization ... OR buy from a company that doesn't have the fastest box, but is reliable, integrates well, has a good support organization, and has proven longevity (it'll be here next year) ... plus the other (mostly positive) attributes?

Sorry for the long post, but it's an issue that's more complex than performance stats, and (to get back to the original point) Marketing likely requires some "flexibility" in their marketing materials and generally doesn't want hard specs beyond the usual packets-per-second, backplane throughput, and MTBF.

FWIW

Scott

hi scott,

i hadn't considered these issues. however, after reading your text, i can see how a vendor would be reluctant to publish such numbers ... (a) because they may look bad, and (b) because the competition may re-use them inaccurately. the behavior of hiding data smells putrid to me, but there we are: vendor marketing departments don't consult my nose when they make such decisions

ok, so i'm not likely to get this information out of equipment manufacturers ... and the 'independent' reports are likely to contain biases as well.

a Usenet poster pointed me toward an EANTC report on the C6K (http://www.eantc.com/fileadmin/eantc/downloads/test_reports/2003-2005/EANTC-Summary-Report-Cisco-GigE-Catalyst6500-Supervisor720.pdf) which describes 12us forwarding latency for 64 byte packets ... same ball park i'm seeing ... that's reassuring to me

-do you know what happened to the Harvard Network Device Test Lab? seemed to me like a reliable source of 'independent' measurements ... but as far as i can tell, it is defunct now

-yes, for me too, forwarding latency wouldn't matter in a purchasing decision

-myself, i'm focussed on predicting application performance on an existing network

thank you for taking the time to post on this issue

--sk

I don't know what happened to the Harvard group; it's been a while since the Lab was my primary assignment.

Paid third-party testing *can* still be useful, especially if they post their numbers ... frequently you can detect "diplomatic" speak in the analysis, which can point to areas of the stats that bear closer examination ... it comes down to "what they say" and "what they don't say" (or how they say it).

An RFC was created for how devices should be tested, i.e., what tests should be performed, what the setup should look like, and how the results should be reported. I don't recall what the RFC number is ... if I can find it, I'll post it.

If you have the budget, and you plan on doing performance prediction / modeling / testing for a while, you may want to look at Chariot. Chariot is software-only and tests the complete network, including client platforms, operating systems, stacks, NICs, and active infrastructure.

Software is loaded on client endpoints. Each client can support a number of "streams," each of which is an emulated flow that can mimic virtually any kind of traffic (SAP, FTP, VoIP, telnet ... it's a very long list).

Streams can be paired between any endpoints ... so you could have one client send four streams to a server (like Web, DB, FTP, RPC) while sending another couple of streams to other endpoints (like IM, SNMP, chat ...).

All of the streams are triggered into operation by the console station to synchronize the timing.

When it's all done, Chariot produces a complete report with all of the parameters necessary to exactly replicate the test (in case someone else wants to independently verify your results).

Common reported parameters are throughput, latency, packet drop, transactions per second, and link efficiency. Again, this is for the entire network, not just the infrastructure ... nice information to have, since it also permits you to tune your service resources and client operating parameters as well as the raw infrastructure.

A "mini" version is available as "QCheck,"for free ... I think IXIA is making it available. Qcheck will give you throughput, latency, and packet drop for one endpoint pair using TCP, UDP, and possibly IPX (I haven't looked at the current version).

As far as the reported numbers go, 64-byte packets are used for raw throughput, since they give the largest possible number and indicate the speed of the switching engine. Large packets are used to test the buffering / queueing efficiency and capacity.
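The arithmetic behind "largest possible number": each Ethernet frame also burns 8 bytes of preamble and a minimum 12-byte inter-frame gap, so line rate in frames per second for 64-byte frames is:

100 Mb/s: 100e6 / ((64 + 8 + 12) * 8) = 148,809 frames/sec
1 Gb/s: 1e9 / ((64 + 8 + 12) * 8) = 1,488,095 frames/sec

Minimum-size frames maximize the frames-per-second the forwarding engine must handle, which is why they're used for the raw-throughput figure.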

Good luck with your testing ...

Scott

I found the RFCs:

RFC 2889 covers benchmarking methodology for LAN switching devices

RFC 2544 covers benchmarking methodology for network interconnect devices
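For what it's worth, the RFC 2544 throughput test boils down to a binary search for the highest offered rate that produces zero frame loss. A minimal sketch of the control loop (send_trial is a placeholder you'd wire to your traffic generator; it returns frames lost):

def rfc2544_throughput(send_trial, line_rate_bps, resolution=0.001):
    # binary-search the highest fraction of line rate with zero loss
    lo, hi = 0.0, 1.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if send_trial(mid * line_rate_bps) == 0:
            lo = mid   # no loss at this rate: push higher
        else:
            hi = mid   # loss: back off
    return lo * line_rate_bps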

Also, is the "Harvard" group you referenced possibly really the testing group at the University of New Hampshire (the UNH InterOperability Lab)?

Check 'em out

Scott

Thanks for the information. IMO, if you are going to continue testing or evaluating equipment performance, it is better to leave it to a third-party tester to prove it.

SK, would you mind sharing the primary reason for running this test? I am afraid the effort to test may be higher than the cost of purchasing the equipment, or a large portion of the budget.