Why do certain multicast applications cause performance issues in a dense mode multicast environment?


Wed, 07/22/2009 - 19:51
Core Issue

Certain multicast applications use progressively increasing Time to Live (TTL) values to discover where clients and servers are located. They choose a TTL value automatically so that multicast traffic from the server can reach the farthest client on the network. When such an application runs over a dense mode network, such as Protocol Independent Multicast - Dense Mode (PIM-DM), it can cause high CPU utilization on some of the routers that forward the multicast traffic. This degrades network performance and also floods multicast packets into networks that do not have any receivers.


These problems are common in dense mode networks because they operate with flood-and-prune behavior. To overcome the problem, perform one of these procedures:

  • Set the TTL value manually on the server, if the application allows it, and configure the routers to forward only packets whose TTL is above a certain threshold.
  • Migrate to a sparse mode environment such as Protocol Independent Multicast - Sparse Mode (PIM-SM).
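As a sketch of these options on a Cisco IOS router, a TTL threshold can be applied on the outbound interface, or the interface can be switched to sparse mode. The interface name, threshold value, and RP address below are examples only:

```
! Option 1: forward only multicast packets whose TTL is greater than 16
! out of this interface (example interface and threshold)
interface GigabitEthernet0/1
 ip multicast ttl-threshold 16

! Option 2: run PIM sparse mode on the interface instead of dense mode,
! with a statically configured Rendezvous Point (example RP address)
ip pim rp-address 10.1.1.1
interface GigabitEthernet0/1
 ip pim sparse-mode
```

With a TTL threshold in place, low-TTL discovery probes from the application are dropped at the boundary instead of being flooded throughout the dense mode domain.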

For more information, refer to the Using Applications with a Low TTL in a Dense-Mode, Multicast Environment application note.


