Just a question that might sound stupid at first:
Roughly what usable data rate could I expect on a 10 Gbit fiber link (e.g. a 3750 uplink to a 65xx via single-mode fiber)?
As far as I understand, the 10-gigabit fiber standards all have in common that they introduce an additional interface (XGMII) between the MAC layer and the PHY, and on the other hand drop CSMA/CD, since collision detection is obviously no longer necessary on a full-duplex point-to-point link. All higher layers are simply passed through. The question is how this affects the percentage of usable data rate compared to standard 1000BASE-T.
That would imply that all losses on the higher layers could be assumed to be about the same percentage as on a standard 100/1000 Mbit connection.
This would mean that, e.g. at the IP layer, I could assume a net data rate of roughly 80% of the 10 Gbit gross line rate?
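For what it's worth, a back-of-envelope way to sanity-check that figure is to count the fixed per-frame overhead of standard Ethernet framing (preamble + SFD, MAC header, FCS, inter-frame gap) against the IP payload it carries. This is only a sketch of the framing math, ignoring any higher-layer (TCP, application) overhead:

```python
# Fixed on-the-wire overhead per Ethernet frame (standard Ethernet II framing):
#   preamble + SFD: 8 bytes, MAC header: 14, FCS: 4, inter-frame gap: 12
OVERHEAD = 8 + 14 + 4 + 12  # 38 bytes per frame

def ip_rate_gbps(ip_packet_size, line_rate_gbps=10.0):
    """Approximate IP-layer throughput for back-to-back frames of one size."""
    return line_rate_gbps * ip_packet_size / (ip_packet_size + OVERHEAD)

# Efficiency depends heavily on packet size:
for size in (64, 512, 1500):
    print(f"{size:5d}-byte IP packets: ~{ip_rate_gbps(size):.2f} Gbit/s")
```

With 1500-byte packets the framing overhead alone only costs about 2.5%, so an overall ~80% figure at the IP layer would mostly come from small-packet mixes and higher-layer overhead, not from the 10GbE framing itself.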
Are these thoughts correct?
I know that this kind of calculation is a rough guess at best, and that there are so many factors to consider that I can't even begin to list them here, but I just hope to get a feeling for the whole thing.
Any ideas about this?