Kevin Dorrell, CCIE #20765

20 Apr 2008

NMC 16.14 Multicast

Filed under: IP Multicast — dorreke @ 12:45

Multicast receivers

A couple of weeks ago, I blogged about the behavior of a router when acting as a multicast receiver.  I am thinking of the sort of situation where the router has an ip igmp join-group on the interface facing the multicast source, but does not need to forward multicast anywhere.  The questions are: do you need to enable multicast routing, and do you need to put PIM on the interface?

At the time, I was doing NMC DOiT Lab 14, which was a sparse-mode scenario.  At the time, I came to the conclusion:

If a router is to respond to a multicast ping, and it has ip igmp join-group on the interface facing the source, then ip multicast-routing must be enabled.  PIM is not needed unless the ip igmp join-group is on an interface not facing the source.  There is an alternative to enabling multicast routing: no ip route-cache cef on the interface facing the source.
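In config terms, my conclusion boils down to something like this on the receiving router (the interface name and group here are just examples, not the actual lab config):

```
! Router acting purely as a multicast receiver, sparse-mode lab.
! Multicast routing must be on, even though nothing is forwarded.
ip multicast-routing
!
interface FastEthernet0/0
 ! Interface facing the source: the join alone is enough; no PIM here.
 ip igmp join-group 239.10.10.10
```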

Now lab 16 presents an opportunity for the same sort of experiment in dense-mode.  We have two devices that are acting purely as multicast receivers: FRS and CAT2.  What do we need to configure on those to get a response to the ping from R4 and CAT1?

This time, to get a response out of CAT2, it needs all three parts: multicast routing, pim dense-mode on the interface, and the join-group:

ip multicast-routing
!
interface Vlan10
 ip address 160.60.26.6 255.255.255.0
 ip pim dense-mode
 ip igmp join-group 229.9.9.9

I am puzzled by these differences.  However, it could simply be that an SVI on a Catalyst running 12.1 behaves differently from a FastEthernet on a router running 12.4.

Dense-mode with hub-and-spoke

There is one other thing that worries me about this scenario.  We have a hub-and-spoke topology in dense-mode, with R1 acting as hub, and R2 and R3 acting as spokes.  Normally, this shouts out to me “TUNNEL”.  But in this case, we get away with it because both spokes need the multicast group.  I bet if I removed the join from the R3 loopback, and PIM from the R3-R5 link, then R3 would prune towards R1, and R2 would stop receiving.  In fact, yes, that is exactly what happens:

R4#ping 229.9.9.9
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 229.9.9.9, timeout is 2 seconds:
Reply to request 0 from 160.60.14.1, 481 ms
Reply to request 0 from 160.60.10.7, 489 ms
R4#

A question for the proctor: Do you require this scenario to work if R3 no longer needs the multicast stream?
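If it did have to work, the classic fix would be to give the hub a separate logical interface per spoke, e.g. a GRE tunnel alongside the multipoint, so that a prune from R3 no longer starves R2.  A sketch only, with made-up tunnel addressing:

```
! On R1 (hub): a dedicated tunnel towards R2.  A prune arriving from
! R3 on the multipoint then prunes only the multipoint, not Tunnel12.
interface Tunnel12
 ip address 10.99.12.1 255.255.255.252   ! hypothetical addressing
 ip pim dense-mode
 tunnel source Loopback0
 tunnel destination 10.99.0.2            ! R2 loopback, hypothetical
!
! Mirror config on R2, plus a static mroute if the RPF for the
! source has to point down the tunnel:
! ip mroute <source-prefix> <mask> Tunnel12
```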

 

06 Apr 2008

NMC 14.12 Multicast

Filed under: IP Multicast — dorreke @ 10:29

This posting is not very much about NMC 14.12, but a lot about some experiments that grew from it.

Using a router as a multicast receiver:

The standard way of using a router to simulate a multicast receiver is to use the command ip igmp join-group group-addr.  So, you have a router on a VLAN, and you want it to simulate a multicast receiver, but not to forward multicast packets.  Is it enough to put the ip igmp join-group command on the Ethernet interface?  Well … yes and no.  There is a gotcha that I discovered during NMC 14.12.  Here are my notes:

I had a problem getting a response from R6 (a receiver only) using the configuration in the SHOWiT.  I got it going, but only by adding ip multicast-routing on R6.

I am using Ethereal and SPAN or RSPAN to monitor the protocols.  I start by experimenting with the connection between R3 and CAT2, and seeing how much config I need on R3 for CAT2 to get a response.

  1. Did no ip multicast-routing on R3 but left ip pim sparse-mode in place on R3-F0/0.  I can still see PIM Hellos (30 seconds) and PIM Bootstraps (60 seconds) from R3.  The bootstraps still carry 172.16.101.1 as the bootstrap router.  Also, an IGMP membership report with 0.0.0.0 every 60 seconds.  However, the show ip mroute on R3 is empty.
  2. Removed ip pim sparse-mode from R3-F0/0.  This killed the PIM stone dead, as you might expect.
  3. Added ip igmp join-group 239.10.10.10 on R3-F0/0.  This generates an IGMP Membership report.  BUT, it does not repeat.  Strange!  Removing it produces an “IGMP Leave Group”.
  4. With no ip multicast-routing on R3 and no ip pim sparse-mode on R3-F0/0, but ip igmp join-group 239.10.10.10 on R3-F0/0, I can see the ping 239.10.10.10 from CAT2, but R3 does not respond.  So how is R6 supposed to work in the scenario?

So, now I put back the multicast routing stuff in R3 and concentrate on R6.

  1. Starting with no multicast config on R6, it is obvious I will see no response from it.  I do not see it in the show ip igmp snooping group on CAT1 (if I use it … the AK has static MAC forwarding for the multicast).  If I ping from CAT2, I see no ping on R6.
  2. In the AK, R6 has no ip multicast-routing, has no ip pim sparse-mode on F0/0, and has ip igmp join-group 239.10.10.10 on F0/0.  This does not work for me.  I can see the IGMP Membership report from R6, and I can see CAT1 adding it to the show ip igmp snooping group table.  I can see the ping from CAT2.  But I do not get any response from R6.
  3. If I add ip multicast-routing on R6, then I get a response.  OTOH, there is no need for any PIM on R6-F0/0.

Conclusion: If a router is to respond to a multicast ping, it must have ip igmp join-group on the interface facing the source, AND ip multicast-routing must be enabled.  PIM is not needed unless the ip igmp join-group is on an interface not facing the source.

Follow on:  There is another way to solve this problem, and it is the one in the SHOWiT, which I had overlooked: no ip route-cache cef on R6-F0/0.  The ip multicast-routing is then not necessary.  Why this makes a difference, I have not the faintest idea.  Just quirky behavior … a magic bullet.  They might at least have mentioned it in the AK!
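The SHOWiT alternative, for completeness, looks something like this on R6 (interface name as in the lab):

```
! No ip multicast-routing, no PIM; just the join, plus process
! switching on the interface facing the source.
interface FastEthernet0/0
 no ip route-cache cef
 ip igmp join-group 239.10.10.10
```

My guess is that with CEF on and no multicast routing, packets for the joined group never get punted up to the process level where the ping reply is generated, but I have not proved that.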

The first few multicast pings

I know that in a sparse mode scenario you should discard the results of the first two or three pings.  But I want to be able to explain them.  Here are the results from this lab:

CAT2#
CAT2#! =================> First ping
CAT2#
CAT2#ping 239.10.10.10
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.10.10.10, timeout is 2 seconds:
Reply to request 0 from 172.16.31.3, 4 ms
Reply to request 0 from 172.16.26.6, 228 ms
Reply to request 0 from 172.16.124.2, 220 ms
Reply to request 0 from 172.16.124.4, 212 ms
CAT2#
CAT2#
CAT2#! =================> Second ping
CAT2#
CAT2#ping 239.10.10.10
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.10.10.10, timeout is 2 seconds:
Reply to request 0 from 172.16.31.3, 4 ms
Reply to request 0 from 172.16.26.6, 80 ms
Reply to request 0 from 172.16.124.4, 76 ms
Reply to request 0 from 172.16.26.6, 68 ms
Reply to request 0 from 172.16.124.2, 60 ms
Reply to request 0 from 172.16.124.4, 52 ms
Reply to request 0 from 172.16.124.2, 44 ms
Reply to request 0 from 172.16.13.1, 32 ms
CAT2#
CAT2#
CAT2#! =================> Third and subsequent pings
CAT2#
CAT2#ping 239.10.10.10
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.10.10.10, timeout is 2 seconds:
Reply to request 0 from 172.16.31.3, 4 ms
Reply to request 0 from 172.16.26.6, 64 ms
Reply to request 0 from 172.16.124.2, 56 ms
Reply to request 0 from 172.16.124.4, 48 ms
Reply to request 0 from 172.16.13.1, 32 ms
CAT2#
CAT2#

On the first ping, I get no response from R1, although it is evidently forwarding the ping to R2 and R4.  I wonder why.

On the second ping, I get duplicate responses from R2, R4, and R6.  This is evidently because of the loop.  R6 has two routes to the source, so it believes the ping from both R2 and R4.  It has not sorted out yet which one it prefers.  So it also forwards each ping to the other.

From the third ping onwards, everything is sorted, and you get one response from each router. 

 

10 Mar 2008

11.13 Multicast MVR

Filed under: IP Multicast — dorreke @ 21:56

Spent an hour or so trying to get MVR working.  I did manage to get a response in the end, but the whole feature does not seem to be very reliable.  One minute it works, one minute it doesn’t.  I tried running it over a trunk, but that did not seem to work at all, and just got me a spurious entry in the MAC forwarding table that I cannot shift:

CAT2#show mac-address-table multicast
Vlan    Mac Address       Type       Ports
----    -----------       ----       -----
  10    0100.5e0b.0b0b    USER       Fa0/7
  40    0100.5e0b.0b0b    USER       Fa0/4, Fa0/7

After that, the original ping would not work until I removed the config, shut and no shutted a few ports, and said a few magic words (swore at it).  Then it worked again.  But the spurious entry is still there.

I really do not think this is going to come up in the exam.  If it does, at least I know where to look for it in the doc.

07 Mar 2008

Frame-Relay multipoint or physical interfaces

Filed under: Bridging on routers, EIGRP, Frame Relay, IP Multicast, OSPF, RIP — dorreke @ 11:38

 If you want to skip the blah-blah, you may want to go directly to the table below.  You will then want to read the blah blah to see what it is about. 🙂

 I’m not sure how much sense I made in yesterday’s posting.  The point was that we had a Frame Relay multipoint interface with a bridge-group on it.  Spanning Tree assumes that all devices on a segment can see each other’s broadcasts.  But this is not so on a Frame Relay multipoint or physical interface.

It got me thinking.   Frame Relay multipoint and physical interfaces are great for CCIE scenario writers.  They have so many issues.  That is because they are NBMA.  Many many protocols assume a multi-access segment is a broadcast segment.  NBMA breaks this rule because as often as not there is no visibility from spoke to spoke.  Most of the literature says “Here is a technology X.  This is how you configure it.  BTW, there is a special requirement for NBMA networks, and you have to do this … “.

I want to look at it from a different angle.  I want to make a list of the things that NBMA cannot do, and how you can get round them for each technology.  I want to be able to say “OK, we have a FR multipoint.  They have obviously specified that in order to mess something up.  Which technologies are likely to be impacted?”

So here is the list:

RIP
  Gotcha: RIP will not normally advertise out an interface it received the route on.  Not a problem in Frame Relay, because split-horizon is disabled by default, but a problem in other NBMA technologies such as X.25.
  Solutions: 1. no ip split-horizon on the interface (the default in FR).  2. Static neighbor spoke to spoke (TTL=2).

EIGRP
  Gotcha: EIGRP will not advertise out an interface it received the route on.
  Solutions: 1. no ip split-horizon eigrp nn on the interface.  2. Static neighbor spoke to spoke (TTL=2).

OSPF network
  Gotcha: No LSAs seen spoke-to-spoke.
  Solutions: 1. ip ospf priority 0 on the spokes to force the DR onto the hub.  2. ip ospf network point-to-multipoint throughout.

OSPF NSSA
  Gotcha: If one spoke is an ASBR and talks OSPF to the hub, but another spoke does not talk OSPF yet has a route to the prefix, what happens to the forward address?  Might it be unreachable?
  Solution: Still thinking about this!

bridge-group
  Gotcha: No visibility spoke-to-spoke.
  Solution: There will usually be some alternate route between the STP root and the spoke.  Contrive Spanning Tree to block all but one of the circuits on the multipoint, and therefore use the alternate route.

PIM dense-mode
  Gotcha: If the multicast source is behind the hub, then a prune from one spoke will prune all of them.  If the multicast source is on a spoke, traffic will not be routed back out the same interface to another spoke.
  Solution: Find an alternate route for all but one of the spokes, or even for all of them.  Usually this can be done with a tunnel, or by leveraging some alternate path to the source.  Use a static ip mroute.

PIM sparse-mode
  Gotcha: If the multicast source is on a spoke, traffic will not be routed back out the same interface to another spoke.
  Solution: ip pim nbma-mode, or treat in the same way as dense mode above.

Can anyone suggest anything to add to the list?
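To make the list concrete, the hub-side knobs boil down to roughly this on a multipoint interface (process numbers and modes are placeholders):

```
interface Serial0/0
 encapsulation frame-relay
 ip ospf priority 255           ! make sure the hub wins the DR election
 no ip split-horizon eigrp 100  ! let EIGRP re-advertise spoke routes
 ip pim sparse-mode
 ip pim nbma-mode               ! sparse mode only: track joins per spoke
!
! And on each spoke, for the OSPF case:
! interface Serial0/0
!  ip ospf priority 0           ! spoke must never become DR/BDR
```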

These issues are not limited to Frame Relay multipoint.  I have seen NMC labs where they contrive to turn an Ethernet segment into an NBMA segment, typically by creating unicast peering between the routers on the Ethernet, and therefore not allowing them a full mesh.

28 Feb 2008

NMC Lab 09.18 – Multicast

Filed under: IP Multicast — dorreke @ 09:12

Started to work on the multicast again yesterday evening, but got hopelessly confused.  So much so I decided not to publish the blog.  Looking at yesterday’s posting, that was confused as well.  I shall have to tackle it with a clearer head.  I must do some multicast reading first though.  Wendell Odom’s CCIE Cert Guide, and Jeff Doyle’s book, Vol 2.  I shall add a few notes here as I go:

Multicast on LAN

  • In PIM-DM, the winner of the “assert” contention is responsible for transmitting the stream onto the LAN.  Assert is won by whoever is closest to the source (but is it pre-emptive?), with highest IP as tie-breaker.
  • IGMP v1 uses PIM DR to determine who puts the IGMP query on the LAN.  It is the highest ip pim dr-priority, else the highest PIM IP address on the LAN.
  • IGMP v2 elects its own querier for the LAN: the lowest IP address wins (so, anything but the DR).

PIM-SM RP – Rendez-Vous Point

Everyone should know where the RP is, by one of: a static ip pim rp-address on every router, Auto-RP, or the PIMv2 bootstrap router (BSR).
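In config terms, the usual ways of learning the RP look roughly like this (addresses and scopes are placeholders, not from any particular lab):

```
! 1. Static RP, configured identically on every router:
ip pim rp-address 172.16.120.1
!
! 2. Auto-RP, on the candidate RP and the mapping agent:
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery Loopback0 scope 16
!
! 3. PIMv2 BSR, on the candidates:
ip pim rp-candidate Loopback0
ip pim bsr-candidate Loopback0
```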

PIM-SM Source registration

  • If a router has a source, it sends a PIM register to the RP.  (The PIM register also encapsulates the first packet.  So why do we often not see a response from the RP to that first packet?)
  • If the RP has no listeners for that group, it sends a unicast Register-Stop back to the source.  The source router suppresses register messages for 60 seconds.  After 55 seconds it sends a Null register with no packet encapsulated.

 To investigate:

  1. What happens if you declare dense-mode at one end of a link and sparse-mode at the other?

——————- Unfinished ————————

27 Feb 2008

NMC Lab 09.18 – Multicast

Filed under: IP Multicast — dorreke @ 14:10

I set off with good intentions yesterday evening.  Having got full reachability in 2 hours and 20 minutes, I thought I would just knock off the BGP, the IPv6, and the multicast all in one evening.  It wasn’t to be.  I got stuck on the BGP (which I shall write about some other time) and I got stuck on the multicast (which I shall write about now because it is more interesting), and I didn’t even attempt the IPv6.

If anyone reads this posting, it will probably make much more sense if you have the NMC scenario in front of you.  So go out and buy it!  It is not my intention to substitute or plagiarise the excellent labs that NMC produce, but more to use them as a starting point for my learning experience.

As I could tell from the wording of the requirements, the main problem was how to get FRS (R7) to respond to the multicast ping.  I knew what the problem was: the FRS has an RPF to the source that points to R4 on VLAN 40, but the multicast stream is coming from R3.  That, and the fact that the FRS does not seem to see the RP.  The FRS is logging something about an “invalid join for 229.17.17.17 from R4”.  (Note to self: Examine this message and see where it comes from, and why FRS thinks it is invalid.)

As usual, I almost got the solution, but not quite.  The first thing I did was to put a static mroute in R4 towards the RP (172.16.106.1) via the FR interface.  That was correct!  The RP turned up in the FRS immediately.  But still no multicast stream.

It was here I made my biggest mistake.  As usual, I did not read the scenario carefully enough.  It says “9.18.5. Do not introduce static mroutes on FRS or create any new interfaces on FRS or R4.  Only configure static mroutes on 1 router in this scenario.”  Instead, I read, “Only configure 1 static mroute in this scenario.”  Which is why I spent the rest of the evening learning the hard way.

The various things I tried were:

  1. Putting ip pim nbma-mode on the FR interfaces … on R1 and R4!  So near and yet so far!
  2. Changing the dr-priority of the three routers concerned: R3, R4 and FRS.
  3. ip mroute 172.16.0.0 255.255.0.0 172.16.10.1 on R4!  So near and yet so far!

By now, I have looked in the Answer Key and groaned.

The best way to look at this is as a starting point for a learning exercise.  These are things to be investigated:

  1. Start with initial conditions of PIM configured on all the routers and the RP sending out its stuff.  Then look at that logging message on FRS and work out where it comes from and why.  I suspect I shall be breaking out my laptop with Ethereal.
  2. In this condition, are the mpackets appearing on VLAN 40 at all?  If so, then the FRS is not forwarding them to Lo107 because of the lack of RP.  So, if I joined R7-E0 into the group, would the FRS respond?
  3. Find out about dr-priority, what it means, why it exists, and work out whether it has any effect in this scenario at all.
  4. Configure the solution proposed in the Answer Key and the SHOWiT and examine it carefully.  In particular, look at the show ip mroute in R3 to see the nbma-mode in action.
  5. Now, just to be perverse, see if I can do it without any static mroutes at all.  It should be possible like this:
    1. Go to R4 OSPF and lower the AD of 172.16.106.1 to something under 90.  This of course will mean that R4 will start injecting that prefix into the EIGRP.  But that is OK.  We actually want the FRS to use R4 for this prefix, and R3 will ignore it because it has a better internal.
    2. Now, for the RPF on the source, 172.16.11.0/24.  I reckon that should already be on the FR interface, so no change required there.
    3. But R4 is not receiving any multicasts now.  Why not?  Because R3 and R2 are not forwarding them because of their split-horizon thing.  So we need ip pim nbma-mode.  On R2 or R3?  Probably one or the other, but not both.  What will happen if we do put it on both?  Will R4 get two copies of each mpacket?
    4. Alternatively, we could get R3 to prefer the EIGRP route to 172.16.11.0/24.  In that case, R3 should start receiving the stream from VLAN 40, and forwarding it on the FR.  As long as R4 is expecting it from the FR, that should work.
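On the ip pim nbma-mode idea above: the config itself is a one-liner, and it only makes sense under sparse mode on the multipoint interface (interface name hypothetical):

```
interface Serial0/0.234 multipoint
 ip pim sparse-mode
 ip pim nbma-mode   ! track each spoke's join per DLCI, so traffic
                    ! arriving on this interface can be forwarded back
                    ! out of it towards a different spoke
```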

There are so many possible solutions, how come I didn’t hit on any of them?

I guess I’ll have to record Torchwood and watch it some other day.

17 Feb 2008

NMC Lab 7.17 Multicast

Filed under: IP Multicast — dorreke @ 19:14

So, to the multicast.  Accompanied by Gordon Giltrap’s excellent album “Fear of the Dark”.

Configured it all up according to the requirements, configuring static RP all over just to be quick-and-dirty.  (I shall play with BSR later.)  Pinged 227.7.7.7 from 172.16.31.1 on R1. Nothing.  Enabled debug ip mpacket on R3 and tried again.

Feb 17 16:27:38.106: IP(0): s=172.16.31.1 (FastEthernet0/0.10) d=227.7.7.7 id=2, ttl=254, 
 prot=1, len=114(100), RPF lookup failed for source or RP[OK]

So, an RPF failure.  It is not expecting multicast packets from 172.16.31.1 to arrive on FastEthernet0/0.10.  So where is it expecting them to arrive?

R3#show ip route 172.16.31.1 
Routing entry for 172.16.31.0/24 
  Known via "connected", distance 0, metric 0 (connected, via interface) 
  Advertised by bgp 300 
  Routing Descriptor Blocks: 
  * directly connected, via FastEthernet0/0.10 
      Route metric is 0, traffic share count is 1

 Huh?  Scratches head.  RPF failure when the packet arrives on Fa0/0.10, but the route to the source is via Fa0/0.10.  What is going on?  Must be that strange secondary addressing that is upsetting it.  Tried reversing primary and secondary addresses on R3 – no change.  ???  Aagh, another misconf:

interface FastEthernet0/0.20 
 ip pim sparse-mode

No! VLAN 20 is not on Fa0/0.20, it’s on Fa0/1.  So R3 could not see the RP, even if it was configured statically. But what a cryptic error message!  It was the RP that was failing the RPF check, not the source.  Not only that, but I did it on R2 as well. Perhaps I would do well to keep a note of the interfaces I am using.  Or use the show ip int brief command more often.  Perhaps I should do an alias exec ship show ip interface brief | exclude unassigned on all routers.  That would be useful, and would help avoid such mistakes.
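The alias, for the record:

```
alias exec ship show ip interface brief | exclude unassigned
```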

What it does illustrate is the advantage of distributing RP information dynamically, either with autoRP or with BSR.  You can see whether or not you are talking to the RP.  So, reconfigured everything to use BSR, and watched the RP turning up in each router in turn. Great!  Except R6.  What is the problem there?  Let us take the hint from the mistake we made before: let’s check the RPF back to the RP:

R6#show ip route 172.16.120.1 
Routing entry for 172.16.120.0/24 
  Known via "eigrp 30", distance 170, metric 40960 
  Tag 110, type external 
  Redistributing via eigrp 30 
  Last update from 172.16.46.4 on FastEthernet0/0.40, 01:21:12 ago 
  Routing Descriptor Blocks: 
  * 172.16.46.4, from 172.16.46.4, 01:21:12 ago, via FastEthernet0/0.40 
      Route metric is 40960, traffic share count is 1 
      Total delay is 600 microseconds, minimum bandwidth is 100000 Kbit 
      Reliability 255/255, minimum MTU 1500 bytes 
      Loading 1/255, Hops 1 
      Route tag 110

OK, it is not the way the multicast topology is drawn.  So, let’s try and fix it with a static mroute:

R6#conf t 
R6(config)#ip mroute 172.16.120.0 255.255.255.0 172.16.36.3 
R6(config)#^Z

Have we got the RP now?

R6#show ip pim rp mapping 
PIM Group-to-RP Mappings
Group(s) 227.7.7.7/32 
  RP 172.16.120.1 (?), v2 
    Info source: 172.16.120.1 (?), via bootstrap, priority 0, holdtime 194 
         Uptime: 00:01:13, expires: 00:02:55 
Group(s) 227.8.8.8/32 
  RP 172.16.120.1 (?), v2 
    Info source: 172.16.120.1 (?), via bootstrap, priority 0, holdtime 195 
         Uptime: 00:01:13, expires: 00:03:00

How are we doing at R1?

R1#ping 227.7.7.7 
Type escape sequence to abort. 
Sending 1, 100-byte ICMP Echos to 227.7.7.7, timeout is 2 seconds: 
Reply to request 0 from 172.16.13.3, 5 ms 
Reply to request 0 from 172.16.234.4, 45 ms 
Reply to request 0 from 172.16.25.5, 29 ms 
Reply to request 0 from 172.16.36.6, 13 ms 
Reply to request 0 from 172.16.23.2, 9 ms 
R1#

OK, all but CAT2.  I had added a join-group on CAT2, but I don’t know whether that was required.  One sure way to find out is to look at the SHOWiT.  No, Lo120 doesn’t join the group.  Nevertheless, I wonder why I am not getting a ping off it.  It’s on the OIL for the group on CAT2.  Let’s look at the RPF for the source 172.16.31.1:

CAT2#show ip route 172.16.31.1 
Routing entry for 172.16.31.0/24 
  Known via "ospf 7", distance 110, metric 20 
  Tag 120, type extern 2, forward metric 1 
  Last update from 172.16.23.2 on Vlan20, 01:41:14 ago 
  Routing Descriptor Blocks: 
  * 172.16.23.2, from 172.16.122.1, 01:41:14 ago, via Vlan20 
      Route metric is 20, traffic share count is 1

OK, it is expecting that source to come from 172.16.23.2, i.e. R2, but it is coming from R3.  How it knows it is coming from R3 instead of R2, I’m not really sure.  But let’s put in a static mroute and see if it works:

CAT2#conf t 
Enter configuration commands, one per line.  End with CNTL/Z. 
CAT2(config)#ip mroute 172.16.31.1 255.255.255.255 172.16.23.3 
CAT2(config)#^Z 
CAT2#

Try pinging from R1 again:

R1#ping 
Protocol [ip]: 
Target IP address: 227.7.7.7 
Repeat count [1]: 10 
Datagram size [100]: 
Timeout in seconds [2]: 
Extended commands [n]: y 
Interface [All]: FastEthernet0/0 
Time to live [255]: 
Source address: 
Type of service [0]: 
Set DF bit in IP header? [no]: 
Validate reply data? [no]: 
Data pattern [0xABCD]: 
Loose, Strict, Record, Timestamp, Verbose[none]: 
Sweep range of sizes [n]: 
Type escape sequence to abort. 
Sending 10, 100-byte ICMP Echos to 227.7.7.7, timeout is 2 seconds: 
Reply to request 0 from 172.16.13.3, 8 ms 
Reply to request 0 from 172.16.234.4, 48 ms 
Reply to request 0 from 172.16.25.5, 28 ms 
Reply to request 0 from 172.16.36.6, 12 ms 
Reply to request 0 from 172.16.23.2, 12 ms 
Reply to request 1 from 172.16.13.3, 4 ms 
Reply to request 1 from 172.16.234.4, 64 ms 
Reply to request 1 from 172.16.234.4, 48 ms 
Reply to request 1 from 172.16.25.5, 32 ms 
Reply to request 1 from 172.16.25.5, 24 ms 
Reply to request 1 from 172.16.36.6, 12 ms 
Reply to request 1 from 172.16.23.2, 8 ms 
Reply to request 1 from 172.16.23.20, 8 ms 
Reply to request 1 from 172.16.23.2, 4 ms 
Reply to request 2 from 172.16.13.3, 4 ms 
Reply to request 2 from 172.16.234.4, 40 ms 
Reply to request 2 from 172.16.25.5, 24 ms 
Reply to request 2 from 172.16.36.6, 8 ms 
Reply to request 2 from 172.16.23.2, 8 ms 
Reply to request 2 from 172.16.23.20, 8 ms 
Reply to request 3 from 172.16.13.3, 4 ms 
Reply to request 3 from 172.16.234.4, 45 ms 
Reply to request 3 from 172.16.25.5, 24 ms 
Reply to request 3 from 172.16.36.6, 12 ms 
Reply to request 3 from 172.16.23.2, 8 ms 
Reply to request 3 from 172.16.23.20, 8 ms 
: 
:

Not first time, but OK once the shortest-path tree cuts in.  That was hard work!

Now, I wonder why the SHOWiT has configured bidir on R3 and R4?

At the end of the day, what are the lessons to be learned?

  1. Don’t configure the wrong interface!  Check show ip int brief regularly to make sure you configure the right one.  Do a show run int after each interface is configured.
  2. Check the PIM neighbor relationships one at a time as you build up the tree.
  3. Check the RPF back to the RP from every destination router.  This can be done by pinging the group from the RP interface of the RP router itself.  Any router that does not reply is likely either to have missed the join-group, or to have an RPF issue with the RP.
  4. Check for FR multipoint interfaces.  If it is a sparse-mode scenario, put ip pim nbma-mode on the multipoint interface.  If it is dense-mode, build some tunnels so that there is a different logical interface to each neighbor.
  5. Ping from the source as specified in the scenario.  Several times.  Once the PIM has settled down (after the second ping), check that every router responds, and that no router responds more than once.
