Serialization delay/link speed?

jsailers
Level 1

I've been reading documentation on Fragmentation and Interleaving from the Cisco site, and have a question about serialization delay. I understand the concept and the formula to derive the serialization delay from the MTU size and link speed. However, my question revolves around what is considered "link speed" in terms of serialization delay. On a frame-relay circuit, I might have an access-circuit speed of 1536K, but the CIR of the PVC is only 768K. So, when I'm deciding whether or not to use LFI, should I use the port speed or the CIR of the PVC? I have the same issue where I have an ATM DS3 but 768K VCs going to my remote sites.
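
To put numbers on the question (a quick back-of-the-envelope sketch; the 1500-byte MTU is just my assumed frame size):

# serialization delay = frame size (bits) / link rate (bps)
MTU_BYTES = 1500
PORT_SPEED = 1_536_000   # access-circuit rate, bps
CIR = 768_000            # PVC committed rate, bps

frame_bits = MTU_BYTES * 8
for label, rate in (("port speed", PORT_SPEED), ("CIR", CIR)):
    print(f"{label}: {frame_bits / rate * 1000:.1f} ms")
# port speed: 7.8 ms
# CIR: 15.6 ms

The answer comes out very different depending on which rate applies, which is why I'm asking.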

13 Replies

cchsu
Level 1

You should always use the CIR.

I began thinking that after I posed the question, but then I started wondering whether the serialization delay is introduced in the PVC queue or in the port queue, and I didn't even know that the PVC had its own queue. Is that the case? Am I understanding serialization delay correctly? Is it the amount of time it takes a frame to leave an interface, or the time it takes to leave a queue? The interface itself is 1536K, but the PVC queue is only 768K, as I mentioned before. Thanks for your help.

The serialization delay is introduced not in the queue, but on the actual interface, which is why I said in my post above that you need to shape to the smallest (slowest) PVC.

Queues are in RAM, which moves very quickly.

Thanks for your comment, saridder. So let me make sure I understand correctly. The serialization delay is at the interface, right? Is it the physical interface, or the sub-interface that my PVC is on? If it's the physical interface, why would I need to look at the CIR? If it's the logical sub-interface, what you are saying makes more sense, but then the idea of serialization delay doesn't make much sense to me. It was my understanding that serialization delay is a one-time event on the output of the physical interface. I'm not talking about traffic shaping, which I realize should take into consideration the slowest CIR in the path. Can you help sort out my confusion? Thanks.

It's on the physical interface, not the sub-interface. The clock speed of the physical interface is what dictates how fast the packet can get "sucked" off of it and onto the cloud.

But when you apply traffic shaping to the various sub-interfaces, you are effectively sending traffic out at the speed you shaped it to, so it looks like the sub-interface speed.

When traffic shaping is turned on, the packets still leave at the port speed, but in quite short bursts. You still have periods of time where the router is not sending anything. So overall, the speed is shaped/averaged out to the CIR, even though traffic still leaves at the port speed.
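
To picture the averaging, here's a rough model (illustrative only; real Frame Relay traffic shaping works with Bc/Be token buckets, and the 125 ms interval is just an assumed Tc):

PORT_SPEED = 1_544_000   # bps, T1 line rate
CIR = 768_000            # bps, shaped rate
TC = 0.125               # s, shaping interval (assumed)

bc_bits = CIR * TC                  # bits allowed per interval
send_time = bc_bits / PORT_SPEED    # burst duration at line rate
idle_time = TC - send_time          # silence for the rest of the interval
print(f"send {bc_bits:.0f} bits in {send_time * 1000:.1f} ms, "
      f"then idle {idle_time * 1000:.1f} ms")
print(f"average rate = {bc_bits / TC:.0f} bps")   # works out to the CIR

Each bit still leaves at 1544000 bps; the idle gaps are what pull the average down to 768000 bps.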

You say you do not want to include traffic shaping, but you can’t fragment without traffic shaping turned on, so you need to keep it in mind when trying to picture it. I hope I am explaining it clearly, as I'm not good at it sometimes.

I think I understand now. What you're saying makes more sense to me. The way I'm picturing the process is this: packets leave the physical serial interface first, and then they have to go through the PVC traffic-shaping process. After the traffic-shaping process is completed, the overall throughput on that PVC will only be up to the maximum configured bandwidth (set through the traffic-shaping commands). So, since the traffic has been throttled down to the CIR, the packet then has to be serialized, but only at that speed. Is this correct? That actually makes sense to me.

Pretty much, but the order is mixed up. The packets leave the queue first, via the traffic-shaping parameters, in bursts, then hit the interface and are sent (at the interface clock speed, which is 1544000 bps). But remember, it is only a burst for a fraction of a second, so overall the traffic is leaving at the traffic-shaped speed if you look at the whole picture.

Picture the physical interface as a diving board. You can control how fast and how many people can come up to the diving board, but once someone jumps off, you have no control.

Thanks Steve. That helps tremendously. Great illustration, too. But now my thought is: since the last place the packet hits is the physical interface, which is 1544K, wouldn't the serialization delay still use the actual clock rate of the physical interface? I mean, if serialization is the process of placing frames onto a Layer 1 medium (the T1 line), then regardless of how throttled down the bandwidth is, the actual process of placing the bits on the wire should still happen at the port speed of 1544K. Does what I'm saying make sense? I'm concerned about the amount of time in ms it takes for a frame to leave the physical interface and arrive at the next interface. Even though the bandwidth has already been throttled down to the CIR, it shouldn't matter from the perspective of the physical interface, because it still has the capability of serializing frames at 1544000 bps.

Yes, the individual frames on a T1 leave at 1544000 bps, but if you traffic shape, you send traffic in bursts and leave gaps of empty time in between (no data being sent), so over a period of one second the 1544000 bps becomes a lower effective bandwidth. Clock rate, which is speed measured in fractions of a second, doesn't have to equal bandwidth, which is volume measured over seconds.

So over one second, you are sending traffic at about what you shaped it for. That's why even though you can send at 1544000 bps, you aren't really doing so if you traffic shape. You have to look at the big picture in terms of traffic shaping in order for it to make sense.
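
As a quick sanity check on that big-picture view (my own arithmetic, using the 768K CIR from the original post):

# Duty-cycle view: what fraction of each second is the T1 actually sending?
PORT_SPEED = 1_544_000   # bps
CIR = 768_000            # bps, shaped rate

busy_fraction = CIR / PORT_SPEED
print(f"line busy about {busy_fraction:.0%} of each second, "
      f"so the effective rate is {busy_fraction * PORT_SPEED:.0f} bps")

Roughly half of every second is silence, which is exactly what drags the effective bandwidth down to the CIR.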

I checked one of my "bibles", Integrating Voice and Data Networks by Scott Keagy (Cisco Press), and I will quote from the book, which can maybe explain it better than I can.

"In the absence of traffic shaping, the effective clock rate is simply the clocking rate of the physical interface, which is equal to the Frame Relay port speed. An interface with traffic shaping rapidly alternates between periods of transmission and pausing, which lowers the effective clock rate to the CIR configured in the traffic shaping process…”

So, the total *effective* rate is what you shape it to be, regardless of the clock rate. I think "effective" is the key word here.

I do have to correct my earlier statements (everything I just said I heard from Wendell Odom himself, in person, a month or so ago). When I said that you need to shape to the lowest CIR, I was incorrect. The speed at which a packet goes out is the access rate, in this case the T1 speed. Therefore you don't need to worry about other, faster PVCs with their large fragmented packets slowing down a slower PVC, because the packets get carried out at the access-port speed. So you were correct!

You still fragment for the CIR. But why? Presumably because of the egress port of the frame switch and the speed at the destination, which (presumably) will not be the same as the hub router's (why would you order a PVC that can burst if you have voice on it? You wouldn't).

Thanks for the great info. I'm going to have to purchase that book you referenced. So, what you're saying is that the serialization delay should be calculated based on the access-port speed. So if all the documentation I read says that there is no need to fragment packets if the port speed is 1544K or higher, I would not need to, since I have a T1 interface. Is that a correct statement?

Serialization delay (ms) = frame size (bits) / link bandwidth (bps)

(1500 bytes * 8 bits/byte) / 1544000 bps ≈ 7.8 ms

1500 bytes being the MTU of the serial interface.

Is my math correct? And does that mean I don't need to use fragmentation, since the MTU will give me only a 7.8 ms serialization delay?
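
Here's the same arithmetic at a few other rates, to check myself (a quick sketch; the rule of thumb I keep seeing is to hold serialization delay near 10-15 ms for voice):

# Serialization delay of a 1500-byte frame at common link rates
FRAME_BITS = 1500 * 8
for rate in (64_000, 128_000, 256_000, 512_000, 768_000, 1_544_000):
    print(f"{rate // 1000:>5} kbps: {FRAME_BITS / rate * 1000:6.1f} ms")
# 64 kbps: 187.5 ms ... 768 kbps: 15.6 ms ... 1544 kbps: 7.8 ms

That seems to match the documentation saying fragmentation stops being necessary once the link is fast enough.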

The reason all this is so important to me is that I have a scenario with a Frame-to-ATM interworked WAN, where I have an ATM DS3 at my central site and frame T1s at my remote sites. Because my ATM interface can't do FRF.12, I can't use that fragmentation feature. I could use MLP (Multilink PPP) LFI, but the configuration is much more complex, and I'm trying to stay away from it if I can.

Steve, I just want to thank you again for all your input and patience with me on this matter. I'm trying to quickly integrate a VoIP solution and you have been very helpful.

I believe you still select the packet fragment size based on the CIR, but you don't have to worry about it at the hub site if the access speed is high enough. The other side will still have serialization problems if the packets are too big, because they run at the slower speed.

A 1500-byte packet on a T1 will have a delay of about 7.8 ms, so you are correct.

Just to add to his statement: you should always use the lowest CIR. If you have a T1 with a 256k, a 512k, and a 768k PVC, you have to fragment all PVCs to the smallest CIR, which in this case is 256k.
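
To put numbers on that (my own sketch; the 10 ms target is the commonly cited goal, not a hard rule):

# Fragment size needed to hold serialization delay to ~10 ms per PVC
TARGET_DELAY = 0.010   # seconds

for cir in (256_000, 512_000, 768_000):
    frag_bytes = cir * TARGET_DELAY / 8
    print(f"{cir // 1000} kbps CIR -> fragment at about {frag_bytes:.0f} bytes")
# 256 kbps -> 320 bytes, 512 kbps -> 640 bytes, 768 kbps -> 960 bytes

Following the advice above, you'd set all three PVCs to the 320-byte fragment size.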
