Multicast Primer pt.2: MROUTES and PIM-DM (xpost /r/ccna)

MROUTES and PIM dense mode

See also: Part 1 - IGMP

In part two of our Multicast series, we’ll talk about how multicast traffic is routed between different network segments. Back in part one, we talked about IGMP, which hosts use to notify the multicast routers on their network segment that they wish to receive multicast traffic. Once a router is notified that a host wishes to receive traffic, the router must be able to route that traffic effectively across the network. Nearly all modern networks that support multicast achieve this with a protocol called PIM (Protocol Independent Multicast). We’ll spend the next few articles talking about the various modes PIM can use and how it works.

PIM is a routing protocol used specifically to establish multicast routes, or mroutes, between hosts to distribute multicast traffic. The “Independent” portion of PIM means that it operates independently of the regular unicast IGP. That means we can run PIM with OSPF, EIGRP, RIP, IS-IS, or even static routes. Like most IGPs, PIM is enabled per interface and establishes neighbor adjacencies between directly connected routers. Also like IGPs, PIM can be enabled in passive mode on interfaces that need to send or receive multicast traffic but have no additional routers attached. We’ll take a quick look at the overall concept of multicast routes, and then at the first PIM mode, “PIM dense mode”.
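
For reference, the per-interface enablement and the passive variant look roughly like this on Cisco IOS. This is only a sketch: the interface names are borrowed from this article's lab, and "ip pim passive" requires a reasonably recent IOS/IOS-XE release.

interface GigabitEthernet1.23
 ip pim sparse-dense-mode   !<--- normal PIM; forms adjacencies with any neighbors on this segment
!
interface GigabitEthernet1.34
 ip pim passive             !<--- multicast forwarding only; no PIM neighbors will form on this interface

Adjacencies can then be verified with "show ip pim neighbor", which lists each PIM neighbor, its uptime, and the elected DR per interface.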

MROUTES

Multicast routes can be displayed with the command “show ip mroute” (or “show ipv6 mroute”). The output begins much like the standard IPv4 routing table output, with a list of flags and other information, but mroutes end up looking quite different from unicast routes. Here are two examples.

(*, 239.0.0.1), 00:07:20/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1.34, Forward/Sparse-Dense, 00:07:20/stopped
    GigabitEthernet1.23, Forward/Sparse-Dense, 00:07:20/stopped
    Loopback0, Forward/Sparse-Dense, 00:07:20/stopped

(10.0.12.1, 239.0.0.1), 00:00:03/00:02:56, flags: LT
  Incoming interface: GigabitEthernet1.23, RPF nbr 10.0.23.2
  Outgoing interface list:
    Loopback0, Forward/Sparse-Dense, 00:00:03/stopped
    GigabitEthernet1.34, Prune/Sparse-Dense, 00:00:02/00:02:57

Multicast routes come in two flavors; one of each is listed above. The first kind is referred to as a (*,G) mroute, which means “any source to a specific group”. The second kind is referred to as an (S,G) mroute, representing “a specific source sending to a specific group”. In our examples here, our (*,G) simply indicates traffic going to the group 239.0.0.1, while our (S,G) shows traffic from the sender 10.0.12.1 to the same group.

In both examples we have a series of status flags, a single incoming interface with reverse path forwarding check information, and zero or more outgoing interfaces. In our case, our (*,G) route has a null inbound interface (we’ll talk about why later) and three outbound interfaces: G1.34, G1.23, and Loopback0. Our (S,G) has an incoming interface of G1.23, expecting to receive traffic from neighbor 10.0.23.2, and two outbound interfaces, Loopback0 and G1.34. You might notice that output interface G1.34 is in the “prune” state, while all the others are in the forwarding state. We’ll address that in a bit as well.

MROUTE loop prevention

Before going any further, we should talk briefly about MROUTE loop prevention and the RPF check. Multicast routing works in conjunction with unicast routing to avoid multicast routing loops. This is done by performing a reverse path forwarding check for all incoming interfaces. Let’s take a look at our (S,G) route as an example.

(10.0.12.1, 239.0.0.1), 00:00:03/00:02:56, flags: LT
  Incoming interface: GigabitEthernet1.23, RPF nbr 10.0.23.2

In this example, we are receiving traffic from 10.0.12.1. If we take a look at our unicast routing table, we’ll see that host 10.0.12.1 is reachable via interface G1.23, neighbor 10.0.23.2.

Routing entry for 10.0.12.0/24
  * 10.0.23.2, from 2.2.2.2, 00:20:08 ago, via GigabitEthernet1.23

By default, our router expects any multicast traffic originating from that source IP address to come from the same neighbor and interface that we would use if we wanted to send unicast traffic BACK to that IP. This is the essence of the RPF check and the basis of loop prevention.

sh ip rpf 10.0.12.1
RPF information for ? (10.0.12.1)
  RPF interface: GigabitEthernet1.23
  RPF neighbor: ? (10.0.23.2)
  RPF route/mask: 10.0.12.0/24
  RPF type: unicast (ospf 1)
  Doing distance-preferred lookups across tables
  RPF topology: ipv4 multicast base, originated from ipv4 unicast base

Our multicast routing subsystem automatically performs an RPF check for all mroutes and lists the applicable data in the mroute. If we were to receive multicast traffic that matched that mroute on an interface other than what was specified in the RPF check, we would drop it as part of loop prevention! Additionally, a split-horizon style rule exists where the incoming interface can never be listed again as an outgoing interface. Once we get a packet in on any given interface, we can’t send that same packet back out the same interface.
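
If you suspect RPF failures are dropping your traffic, one quick place to look on IOS (exact fields vary by release) is the per-group packet counters:

show ip mroute count

The "Other counts" line in that output keeps a running total of packets dropped due to RPF failure, which makes it easy to confirm whether the check is the reason a stream isn't flowing.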

MROUTE most specific matching and forwarding example

In the unicast world, if multiple routes exist for a given packet, we use the one that is most specific. In the multicast world, if a (*,G) mroute AND an (S,G) mroute both exist, we use the more specific (S,G) over the less specific (*,G).
Looking only at our two example mroutes and encountering a packet with source IP 10.0.12.1 and destination 239.0.0.1, we can determine what steps our router will go through to handle the packet. Because a matching (S,G) mroute exists, the (*,G) mroute is ignored. Given that, the router checks that the packet was received on interface G1.23. If the packet came in on any other interface, the packet is dropped and the story ends there. If the packet did come in on G1.23, the router looks at the outgoing interface list (OIL) to determine what to do. In our case, one copy of the packet will be forwarded out the Loopback0 interface. While G1.34 is listed in the OIL, it is set to the prune state instead of the forwarding state, so no data is sent out G1.34. If both interfaces were set to “Forward”, the router would send one copy of the packet to each.
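
If you want to confirm which entry the router would use, you can ask for it directly. Both forms below are standard IOS syntax (group first, then an optional source); the second should display only the (S,G) entry:

show ip mroute 239.0.0.1
show ip mroute 239.0.0.1 10.0.12.1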

Forming mroutes with PIM Dense mode

Now that we have a basic understanding of multicast routes, we have to determine how they’re created. In our scenario, PIM creates these routes in response to activity on the network. PIM has two basic modes: dense mode and sparse mode. Dense mode uses a push-and-prune methodology, while sparse mode uses a pull methodology. We’ll be exploring dense mode for the remainder of this article.

For our example network, we simply have four routers (R1-R4) in a line. R1 will be our traffic source, and Loopback0 on R3 will want to consume traffic. R2 will act as a transit, and R4 doesn’t want any multicast traffic in our example. With dense mode, very little happens in the network until a sender begins transmitting packets. When an IGMP host requests data, the associated router will note this and create a (*,G) route, but no other information is shared with neighboring PIM routers. In our case, we’ve added the line “ip igmp join-group 239.0.0.1” to interface Loopback0 on R3. This results in a basic (*,G) being created on that directly connected router, but no changes to the other routers in the network.
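
For reference, the receiver side of this lab is nothing more than the following on R3's Loopback0 (the PIM mode line is inferred from the "Sparse-Dense" entries in the outputs below):

interface Loopback0
 ip pim sparse-dense-mode
 ip igmp join-group 239.0.0.1   !<--- the router itself answers for this group, simulating an attached host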

R3#sh ip mroute
(*, 239.0.0.1), 00:07:20/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1.34, Forward/Sparse-Dense, 00:07:20/stopped
    GigabitEthernet1.23, Forward/Sparse-Dense, 00:07:20/stopped
    Loopback0, Forward/Sparse-Dense, 00:07:20/stopped

R1,2,4#sh ip mroute
<blank>

We can now begin generating traffic on R1 with a simple ping command.

R1#ping 239.0.0.1 source g1.12 repeat 3
Reply to request 0 from 3.3.3.3, 89 ms
Reply to request 1 from 3.3.3.3, 10 ms
Reply to request 2 from 3.3.3.3, 9 ms

Our ping is successful, and now we have mroute information across the network.

R2
(*, 239.0.0.1), 00:01:10/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1.23, Forward/Sparse-Dense, 00:01:10/stopped
    GigabitEthernet1.12, Forward/Sparse-Dense, 00:01:10/stopped

(10.0.12.1, 239.0.0.1), 00:01:10/00:01:49, flags: T
  Incoming interface: GigabitEthernet1.12, RPF nbr 10.0.12.1
  Outgoing interface list:
    GigabitEthernet1.23, Forward/Sparse-Dense, 00:01:10/stopped

R3
(*, 239.0.0.1), 00:07:20/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1.34, Forward/Sparse-Dense, 00:07:20/stopped
    GigabitEthernet1.23, Forward/Sparse-Dense, 00:07:20/stopped
    Loopback0, Forward/Sparse-Dense, 00:07:20/stopped

(10.0.12.1, 239.0.0.1), 00:00:03/00:02:56, flags: LT
  Incoming interface: GigabitEthernet1.23, RPF nbr 10.0.23.2
  Outgoing interface list:
    Loopback0, Forward/Sparse-Dense, 00:00:03/stopped
    GigabitEthernet1.34, Prune/Sparse-Dense, 00:00:02/00:02:57

R4
(*, 239.0.0.1), 00:01:20/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1.34, Forward/Sparse-Dense, 00:01:20/stopped

(10.0.12.1, 239.0.0.1), 00:01:20/00:01:39, flags: PT
  Incoming interface: GigabitEthernet1.34, RPF nbr 10.0.34.3
  Outgoing interface list: Null

So let’s take a look at what happened.

PIM Dense flooding

PIM dense mode works by simply flooding traffic from the source router to all neighboring PIM routers. When those routers receive that traffic, they too flood it to all of their neighbors, and this continues until we reach the edges of the network. If a router has no additional neighbors and no local hosts that want the traffic, it sends a prune message back to the neighbor that relayed the traffic, asking that neighbor to stop sending.

In our case, R4 has only one neighbor and no multicast receivers. We can see its (S,G) mroute shows an OIL of “Null” and a “P” flag indicating the router wants to prune this specific mroute. R4 sends a prune message via PIM back to R3 saying: “stop sending me traffic for (S,G)”. We see the result of this in R3’s mroute, where G1.34 is listed in the OIL as “Prune” instead of “Forward”. Once this process is complete, R3 no longer sends matching multicast packets to R4. Since R3 has a receiver, it does NOT want to prune this mroute, so no P flag is listed there. We know R2 also doesn’t have any connected hosts that want to receive this traffic, but since it did not receive a prune message from R3, it leaves everything in the forwarding state.
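
If you want to watch this exchange directly, "debug ip pim" on R3 in a lab will log the Prune arriving from R4 and the resulting state change (debug formats vary by release, so none is reproduced here). Remember to turn it off afterwards:

R3#debug ip pim
R3#undebug all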

PIM Dense flooding continued and changes in state

This action is the basis for everything in PIM dense mode. PIM-DM makes exclusive use of source trees and thus forwards based only on (S,G) mroutes. Every time a new sender begins transmitting, packets from that stream are sent across the entire multicast network. Once the routers at the edges of the network receive this data, they can begin the pruning process; when that is complete, a shortest-path tree from the source to all receivers remains. This means that no mroute state exists until after a source begins transmitting, and flooding of traffic throughout the entire network occurs until pruning and convergence complete.

What happens when a sender stops sending, when new receivers join, or when existing receivers leave? By default, a multicast stream is kept alive as long as packets are being sent from the sender. If the sender keeps transmitting, we’ll notice the timers in our “show ip mroute” output are continually refreshed. If the sender stops sending, the mroute will eventually time out across the network and routers will delete the entry. Similarly, the prune messages must be refreshed as well. In older implementations, this meant that approximately every 3 minutes flooding would occur across the entire network again, starting a new pruning cycle. In Cisco’s implementation, if the sender is still active as we approach the 3-minute mark, a header-only packet is forwarded out all pruned interfaces throughout the network, causing routers to retransmit a prune request and renew the prune timers. This results in far less flooded traffic than simply moving the interfaces back into the forwarding state.

What about a change in receivers? In our case, if we removed the static igmp join from R3, that router would no longer have a reason to receive traffic for the 239.0.0.1 group. The next time a multicast packet was received, R3 would generate a prune message towards R2, ending the multicast flow.

R3 leaves the group

R2
(10.0.12.1, 239.0.0.1), 00:00:26/00:02:33, flags: PT
  Incoming interface: GigabitEthernet1.12, RPF nbr 10.0.12.1
  Outgoing interface list:
    GigabitEthernet1.23, Prune/Sparse-Dense, 00:00:09/00:02:50

R3
(10.0.12.1, 239.0.0.1), 00:00:26/00:02:33, flags: PT
  Incoming interface: GigabitEthernet1.23, RPF nbr 10.0.23.2
  Outgoing interface list:
    GigabitEthernet1.34, Prune/Sparse-Dense, 00:00:09/00:02:50

Similarly, a host could join R4 and want to receive traffic for 239.0.0.1. In this case, R4 knows there is an active sender (it still has the mroute listed) and would perform a PIM “graft”. R4 would send a graft message to R3, asking to negate the prior prune request. If R3 had itself pruned toward R2, it would do the same to R2 to fully re-establish the path. Otherwise R3 would simply change the state of interface G1.34 from “Prune” to “Forward”.

R4 joins the group

R3
(10.0.12.1, 239.0.0.1), 00:02:26/00:00:32, flags: T
  Incoming interface: GigabitEthernet1.23, RPF nbr 10.0.23.2
  Outgoing interface list:
    GigabitEthernet1.34, Forward/Sparse-Dense, 00:00:20/stopped

R4
(10.0.12.1, 239.0.0.1), 00:02:12/00:00:47, flags: LT
  Incoming interface: GigabitEthernet1.34, RPF nbr 10.0.34.3
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:06/00:02:53

PIM-DM summary

PIM dense mode is probably one of the easiest multicast setups to implement as it requires very little configuration. With that said, it is arguably one of the most inefficient setups. PIM-DM has no system state until devices are transmitting multicast data. Flooding occurs each time a new source comes online. If 100 different sources exist for the same group, 100 different (S,G) mroutes will need to be maintained, and all 100 will flood traffic at some point in their setup process. Many of these drawbacks are solved in PIM Sparse mode, the subject of our next article, at the cost of additional configuration complexity.

PIM-DM config

The PIM-DM configuration is almost too simple to warrant its own section, but here it is:

R3#sh run int g1.23
interface GigabitEthernet1.23
 ip address 10.0.23.3 255.255.255.0
 ip pim sparse-dense-mode  !<--- that's it right there
 ip ospf 1 area 0
end
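
One caveat the snippet above doesn't show: multicast routing also has to be enabled globally, or the interface command by itself does nothing. The exact form is platform-dependent; the second line below is the variant some IOS-XE platforms (CSR1000v/ASR1000, for example) expect:

ip multicast-routing               !<--- classic IOS
ip multicast-routing distributed   !<--- some IOS-XE platforms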
