We have an application of size 500k and a circuit speed of 256K. How can I calculate how much of the app/data can be sent in one hour from the remote site to HQ?
It really depends on a lot of factors, including but not limited to protocol, packet size, delay, and other traffic. I assume by 500k you mean 500 kilobytes, and 256K is obviously 256 kilobits per second. Assuming a perfect world and a clean pipe with no other traffic, you can always do a rough estimate: take 500 KB, or 512,000 bytes, multiply by 8 to get 4,096,000 bits, then divide by 256 kbps, or 256,000 bps, to get 16 seconds.
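If it helps, that back-of-the-envelope math is just a few lines in something like Python (the figures are the same ones used above; the variable names are only illustrative):

```python
# Rough transfer-time estimate, ignoring all protocol overhead.
# Assumes 500 KB = 500 * 1024 bytes and 256 kbps = 256,000 bits/s.
app_size_bytes = 500 * 1024          # 512,000 bytes
link_speed_bps = 256_000             # 256 kilobits per second

app_size_bits = app_size_bytes * 8   # 4,096,000 bits
seconds = app_size_bits / link_speed_bps

print(seconds)  # 16.0
```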
One thing that estimate doesn't account for is overhead from packet headers or any of the other factors listed above. You can get a little more precise by assuming an MTU of 1500 bytes with IP and TCP, which gives you 40 bytes of header overhead per packet. 1500 - 40 = 1460 bytes of payload per packet. Again, assuming we maximize utilization, take your 512,000 bytes / 1460 bytes per packet to get 350.7 packets; round that up to 351. Your layer 2 WAN protocol comes into play here as well. Let's assume it's HDLC with the maximum-size frame format, which adds 8 bytes. Your total frame size is then 1508 bytes, or 12,064 bits. Now take 12,064 bits * 351 frames = 4,234,464 bits, and 4,234,464 bits / 256,000 bits per second = 16.5 seconds. As you can see, the header overhead adds relatively little in your case. However, factors such as TCP windowing, slow start, propagation delay, and circuit utilization from other traffic will have a far greater impact on your transfer.
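The overhead-adjusted version of the same calculation can be sketched like this (same assumptions as above: 1500-byte MTU, 40 bytes of IP+TCP headers, 8 bytes of HDLC framing; names are illustrative):

```python
import math

app_size_bytes = 500 * 1024                # 512,000 bytes
link_speed_bps = 256_000                   # 256 kbps circuit

mtu = 1500
l3_l4_overhead = 40                        # IP (20) + TCP (20) headers
payload_per_packet = mtu - l3_l4_overhead  # 1460 bytes of payload

# Round up: the last packet is sent even if it isn't full.
packets = math.ceil(app_size_bytes / payload_per_packet)  # 351

hdlc_overhead = 8                          # layer 2 framing, worst case
frame_bits = (mtu + hdlc_overhead) * 8     # 12,064 bits per frame

total_bits = frame_bits * packets          # 4,234,464 bits
seconds = total_bits / link_speed_bps

print(round(seconds, 1))  # 16.5
```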
For practical purposes, and for quick calculations when providing information to management and other groups, I do the rough estimate as I did at first and then double it. That generally keeps me well covered, as long as I also explain the caveat that many other factors can affect the throughput of that data and there are no guarantees.
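That rule of thumb condenses to a one-liner; the function name and the factor of 2 are just this answer's convention, not a standard:

```python
def estimate_transfer_seconds(size_bytes: int, link_bps: int,
                              safety_factor: float = 2.0) -> float:
    """Raw serialization time multiplied by a safety factor,
    to loosely cover headers, windowing, delay, and other traffic."""
    return (size_bytes * 8 / link_bps) * safety_factor

# 500 KB over a 256 kbps circuit: 16 s raw, quoted as 32 s.
print(estimate_transfer_seconds(512_000, 256_000))  # 32.0
```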
Tyler West, CCNP