Fragment Free Switch

devang_etcom
Level 7

Hello,

I am not clear about this sentence: "The reason for this is that most errors and all collisions occur during the initial 64 bytes of a packet."

How does a fragment-free switch know from the first 64 bytes whether there is an error or not? Which kind of algorithm is it using? Please give me an answer, or send me a link so I can understand this.

Regards

Devang

1 Reply

scottmac
Level 10

First a little history:

The first switches (made by Kalpana, who was later bought by Cisco) used what was later called "Cut-Through" mode. Once the switch read the destination MAC, if it knew which port the destination address was attached to, it would immediately create a connection between the ports and pass the traffic.

If it didn't know where the destination address was attached to, it "flooded" that frame to all ports and waited for the destination to respond.

For every frame that it saw, the source MAC was recorded (which MAC on which port) and updated the forwarding tables.

Broadcasts and multicasts were forwarded to every port.

In other words, the switch performed as a multiport bridge. At the time switches came out, they were frequently used to connect segments of hosts that connected to a hub (so it was a very fast multi-port bridge).
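The learn-forward-flood behavior described above can be sketched in a few lines. This is a minimal illustration of a transparent learning bridge, not Cisco's or Kalpana's actual implementation; the class and method names are mine:

```python
# Sketch of the forwarding logic of a learning bridge: record which port
# each source MAC was seen on, forward known unicasts out that one port,
# and flood unknown unicasts, broadcasts, and multicasts everywhere else.

BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}          # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        """Return the set of ports the frame is sent out of."""
        # Learn: remember which port the source MAC lives on.
        self.mac_table[src_mac] = in_port

        # Broadcast/multicast (low bit of the first octet set) or unknown
        # unicast: flood to every port except the one it arrived on.
        first_octet = int(dst_mac.split(":")[0], 16)
        if first_octet & 1 or dst_mac not in self.mac_table:
            return self.ports - {in_port}

        # Known unicast: forward out the learned port only.
        out_port = self.mac_table[dst_mac]
        return set() if out_port == in_port else {out_port}
```

A cut-through switch runs exactly this decision, just as soon as the destination MAC (the first six bytes) has arrived, rather than after buffering the whole frame.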

Kalpana was the first, and they did it with custom proprietary hardware. No one else had anything close for quite a while.

The other manufacturers came out with switches, but they didn't (initially) have cut-through technology. They claimed (this is marketing at its finest) that the store-and-forward technology they used was superior to cut-through (S&F means the entire frame is received, the addresses are evaluated, then the frame is sent out the proper port) because if the frame was a runt or giant, or had errors, it would be discarded and not take up precious bandwidth (remember, we're talking 10 Mbps at this time).

The problem was that store-and-forward was much higher latency through the switch (Latency was a new term at the time when talking about "hub-like devices" ... hubs have no appreciable latency). There was even a small marketing war about how to define "latency."

Cisco came out with "Fragment Free" switching - meaning that the switch would accept the first 64 bytes, evaluate it, then forward it.

By getting at least 64 bytes, the switch could guarantee that the frame was not a runt (a runt is a frame shorter than 64 bytes, the minimum permissible Ethernet frame size), and could make sure that the addresses were not corrupt.
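That is also the answer to the original question: there is no special error-detecting algorithm, just a length check against the 64-byte minimum. Since collisions must happen within the first 64 bytes (the slot time), any collision fragment ends inside that window. A sketch of the decision, with illustrative names of my own choosing:

```python
# Illustrative sketch of the fragment-free decision: buffer the first
# 64 bytes; anything that ends sooner is a runt/collision fragment and
# is dropped. Not actual switch firmware.

MIN_FRAME = 64  # minimum legal Ethernet frame size in bytes

def fragment_free_forward(frame: bytes) -> bool:
    """Return True if the frame passes the 64-byte check and is forwarded."""
    if len(frame) < MIN_FRAME:
        return False  # runt / collision fragment: drop it
    # The destination MAC, source MAC, and EtherType all sit inside the
    # first 64 bytes, so the switch can already pick the output port and
    # start transmitting while the rest of the frame is still arriving.
    return True
```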

The Cisco marketing folks presented it as "the best of both worlds": the "safety" of store & forward with much less latency ... fairly close to cut-through.

Once Cisco bought Kalpana, cut-through became OK (at least for Cisco) ... and the other vendors (3COM, Synoptics/Bay Networks/Nortel, CableTron and others) eventually came out with cut-through, then it became OK with them too.

Nowadays, the hardware-based store-and-forward is operating at wire speeds with a minimum of latency ... so we again have the "safety" of store & forward, with speeds and latency close enough to cut-through as to call them the same.

The port speeds are much faster, and we are connecting one host per port in most cases ... technology has indeed moved swiftly ahead from 15 years ago.

There really isn't a specific algorithm ... other than 802.1D bridging (keep track of the MACs, move the frame to that port, flood unknowns, send broadcasts and multicasts out all ports) ... all the switching magic is done in hardware now ... but what you still have is a very fast multi-port bridge.

FWIW

Scott
