[Day #68 PyATS Series] Check end-to-end multicast routing across domains using pyATS for Cisco [Python for Network Engineer]
Introduction — what you will accomplish and why it matters
Multicast is notoriously brittle in multi-domain environments. When sources and receivers sit in different routing or administrative domains (for example, two independent OSPF domains, or two administrative networks connected via BGP), failures are common: RPF failures, missing RP information, IGMP problems, or simply no traffic arriving at the receiver. This lesson teaches you a deterministic validation workflow that targets the most common failure points and proves end-to-end multicast delivery.
You’ll learn:
- How to design a reproducible pyATS test that verifies multicast control plane (IGMP, PIM) and data plane (actual multicast packet flow).
- How to inject multicast traffic from a sender and validate the multicast routing table (`show ip mroute`) and forwarding interfaces across domain boundaries.
- How to check RP reachability, MSDP/BSR behavior (if spanning multiple PIM domains), and IGMP membership at the access edge.
- How to assert pass/fail programmatically and produce concise test reports.
Assumptions & prerequisites
- Basic knowledge of multicast concepts: IGMP, PIM (sparse mode), RP (Rendezvous Point), (S,G) vs (*,G), RPF.
- A lab with a multicast source (Sender), multicast receiver (Receiver), routers in Domain A and Domain B, and an RP or MSDP linking domains (as necessary).
- pyATS and Genie installed on your controller workstation (Python 3.8+ recommended).
- SSH reachability and credentials for all devices defined in testbed.yml.
- (Optional) A network management GUI like Cisco Prime or DNA Center for GUI validation steps.
Topology Overview
Below is the logical topology we reference throughout the workshop. You can reproduce this in VIRL/CML/physical lab:

Devices and roles
- Sender (Host S) — 10.1.1.2, in Domain A. Generates UDP packets to group `239.10.10.10`.
- Receiver (Host R) — 10.2.1.10, in Domain B. Performs IGMP join for the group.
- R1, R2 — Domain A routers (access and aggregation). R1 is the access (first-hop) router for the Sender.
- Core AB / Border — Routers between domains. Responsible for inter-domain routing; must have PIM adjacency or RP reachability.
- R4, R5 — Domain B routers (core and access).
- RP — Rendezvous Point (either static or dynamic via BSR/AutoRP). In multi-domain deployments you may use MSDP between RPs in each domain.
Key validation goals
- IGMP membership appears on the access router in Domain B.
- PIM neighbors between routers in and across domains are present.
- RP is reachable (routing to RP).
- Multicast routing table contains the expected `(*,G)` and `(S,G)` entries on the correct routers, with the correct outgoing interface list (OIL) forwarding toward the receiver.
- Multicast packets injected by the Sender are forwarded to the Receiver (data-plane verification).
Topology & Communications (IP addressing, protocols, config notes)
Sample IP addressing (for lab/testbed.yml)
- Sender `S`: `10.1.1.2/24` (connected to R1 Gi0/1)
- R1 Gi0/1: `10.1.1.1/24`
- R1 — R2 core link: `10.0.12.0/30`
- Border / Core AB links: `10.0.23.0/30`, `10.0.34.0/30`, etc.
- Receiver `R`: `10.2.1.10/24` (connected to R5 Gi0/1)
- R5 Gi0/1: `10.2.1.1/24`
- RP: `10.0.0.254/32` (a loopback reachable from both domains)
Protocol choices / configuration guidance
- IGMP version: Use IGMPv2 or IGMPv3 depending on source-specific needs. IGMPv2 is adequate for many tests, IGMPv3 for SSM.
- PIM Mode: Use PIM Sparse Mode (PIM-SM) in multi-domain setups. Ensure PIM is enabled on all multicast-facing interfaces.
- RP configuration: Choose one of:
- Static RP (configured on all routers).
- BSR (Bootstrap Router) for dynamic RP distribution.
- Auto-RP where supported.
- Inter-domain RP discovery: If each domain runs its own RP, use MSDP between the RPs to exchange source-active information, or ensure a static RP is reachable from both domains.
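If you terminate MSDP on per-domain RPs, a quick pyATS spot-check of the peering looks like this minimal sketch; the device name `RP_A` is an assumption, so rename it to match your testbed:

```python
# Minimal sketch: verify MSDP peering between per-domain RPs via pyATS.
# The device name "RP_A" is an assumption -- rename to match your testbed.
from pyats.topology import loader

testbed = loader.load("testbed.yml")
rp = testbed.devices["RP_A"]          # assumed name of the Domain A RP
rp.connect(log_stdout=False)

out = rp.execute("show ip msdp summary")
# A healthy peer reports state "Up"; a raw string check keeps this simple.
if "Up" in out:
    print("[OK] MSDP peer appears Up")
else:
    print("[WARN] MSDP peer not Up; inter-domain source discovery may fail")
```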
Lab validation checklist (manual)
- `show ip interface brief` — check interface IPs.
- `show ip route <rp-ip>` — ensure a route to the RP exists.
- `show ip pim neighbor` — PIM adjacencies.
- `show ip igmp groups` — receiver membership on the access router.
- `show ip mroute` — multicast routing table entries.
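If you prefer to run this checklist automatically before the full script, a minimal loop over the testbed routers is enough; this sketch assumes the `testbed.yml` example shown later in this article:

```python
# Minimal sketch: run the manual checklist on every router in the testbed
# and print the raw output for eyeball review. Assumes the testbed.yml
# example later in this article (routers carry type: router).
from pyats.topology import loader

CHECKLIST = [
    "show ip interface brief",
    "show ip route 10.0.0.254",    # RP address used in this lab
    "show ip pim neighbor",
    "show ip igmp groups",
    "show ip mroute",
]

testbed = loader.load("testbed.yml")
for dev in testbed.devices.values():
    if dev.type != "router":
        continue                    # skip the Linux hosts
    dev.connect(log_stdout=False)
    for cmd in CHECKLIST:
        print(f"=== {dev.name}: {cmd} ===")
        print(dev.execute(cmd))
```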
Workflow Script — pyATS automated validation (full script)
Below is a ready-to-run pyATS script you can adapt. It:
- Loads the `testbed.yml`.
- Connects to devices.
- Performs pre-checks (PIM neighbors, route to RP).
- Triggers multicast traffic from the Sender (ping to the multicast group).
- Collects multicast state from relevant routers.
- Asserts expected conditions (IGMP membership, mroute entries, OIL includes the receiver interface).
- Produces a pass/fail summary.

Save as `multicast_validate.py`.
```python
#!/usr/bin/env python3
"""
multicast_validate.py

pyATS script to validate end-to-end multicast routing across domains.

Usage:
    python multicast_validate.py testbed.yml
"""

import sys
import time
import re

from pyats.topology import loader

# === CONFIGURATION ===
TESTBED_FILE = sys.argv[1] if len(sys.argv) > 1 else "testbed.yml"
SENDER = "Sender"                   # device name in testbed.yml for the sender host (or router interface)
RECEIVER_ACCESS_ROUTER = "R5"       # access router next to receiver (where IGMP joins should appear)
RECEIVER_IF = "GigabitEthernet0/1"  # outgoing interface towards receiver on R5
MULTICAST_GROUP = "239.10.10.10"
SENDER_SRC_IP = "10.1.1.2"          # source IP of the multicast sender
RP_IP = "10.0.0.254"
PIM_NEIGHBOR_EXPECTED = ["R2", "R4"]  # example routers where PIM neighbors should be present

# === HELPER FUNCTIONS ===
def connect_all(testbed):
    for dev in testbed.devices.values():
        try:
            print(f"[CONNECT] Connecting to {dev.name}...")
            dev.connect(log_stdout=False)
        except Exception as e:
            print(f"[ERROR] Could not connect to {dev.name}: {e}")

def exec_on(dev, cmd):
    try:
        print(f"[EXEC] {dev.name} -> {cmd}")
        out = dev.execute(cmd)
        return out
    except Exception as e:
        print(f"[ERROR] execution failed on {dev.name} for {cmd}: {e}")
        return ""

# === MAIN FLOW ===
def main():
    print("[INFO] Loading testbed:", TESTBED_FILE)
    testbed = loader.load(TESTBED_FILE)

    # Connect
    connect_all(testbed)

    # Pre-check: route to RP from core routers
    core_devices = ["R2", "R3", "R4"]
    route_ok = True
    for name in core_devices:
        if name not in testbed.devices:
            continue
        dev = testbed.devices[name]
        out = exec_on(dev, f"show ip route {RP_IP}")
        if "via" not in out and RP_IP not in out:
            print(f"[WARN] {name} has no route to RP {RP_IP}")
            route_ok = False
        else:
            print(f"[OK] {name} can reach RP {RP_IP}")

    # Pre-check: PIM neighbors
    pim_ok = True
    for name in PIM_NEIGHBOR_EXPECTED:
        if name not in testbed.devices:
            continue
        dev = testbed.devices[name]
        out = exec_on(dev, "show ip pim neighbor")
        if "Neighbor" not in out and not re.search(r"\d+\.\d+\.\d+\.\d+", out):
            print(f"[WARN] No PIM neighbors on {name}")
            pim_ok = False
        else:
            print(f"[OK] PIM neighbor table for {name} looks populated")

    if not (route_ok and pim_ok):
        print("[ERROR] Pre-checks failed. Aborting further validation.")
        return 1

    # Trigger traffic from SENDER
    if SENDER in testbed.devices:
        dev_s = testbed.devices[SENDER]
        # ping multicast group from source IP (repeat 5 packets)
        cmd_ping = f"ping {MULTICAST_GROUP} source {SENDER_SRC_IP} repeat 5"
        out_ping = exec_on(dev_s, cmd_ping)
        print("[INFO] Sender ping output (first 200 chars):")
        print(out_ping[:200])
    else:
        print(f"[WARN] Sender device {SENDER} not in testbed; cannot inject traffic from device.")

    # Give routers a moment to build multicast state
    time.sleep(5)

    # Collect multicast state from R2, R3, R4, R5
    collection = {}
    for name in ["R2", "R3", "R4", "R5"]:
        if name not in testbed.devices:
            continue
        dev = testbed.devices[name]
        out_mroute = exec_on(dev, f"show ip mroute {MULTICAST_GROUP}")
        out_igmp = exec_on(dev, "show ip igmp groups")
        out_pim = exec_on(dev, "show ip pim neighbor")
        collection[name] = {
            "mroute": out_mroute,
            "igmp": out_igmp,
            "pim": out_pim,
        }

    # Validate expected state: receiver interface in OIL on R5
    success = True
    r5_mroute = collection.get("R5", {}).get("mroute", "")
    if RECEIVER_IF in r5_mroute or "forwarding" in r5_mroute.lower():
        print(f"[PASS] Receiver interface {RECEIVER_IF} is present in R5 multicast OIL.")
    else:
        print(f"[FAIL] Receiver interface {RECEIVER_IF} NOT seen in R5 'show ip mroute' output.")
        success = False

    # Validate (S,G) entry exists on core routers
    for core in ["R2", "R3", "R4"]:
        out = collection.get(core, {}).get("mroute", "")
        if re.search(r"\(" + re.escape(SENDER_SRC_IP) + r",\s*" + re.escape(MULTICAST_GROUP) + r"\)", out):
            print(f"[PASS] (S,G) for {SENDER_SRC_IP},{MULTICAST_GROUP} present on {core}.")
        elif re.search(r"\(\*,\s*" + re.escape(MULTICAST_GROUP) + r"\)", out):
            print(f"[INFO] (*,G) entry present on {core}; (S,G) may be learned later.")
        else:
            print(f"[FAIL] No multicast entries for group {MULTICAST_GROUP} on {core}.")
            success = False

    # Summary
    if success:
        print("[RESULT] MULTICAST VALIDATION: PASS (end-to-end forwarding appears correct)")
        return 0
    else:
        print("[RESULT] MULTICAST VALIDATION: FAIL (investigate outputs captured)")
        # optionally write collection to files for debugging
        return 2

if __name__ == "__main__":
    sys.exit(main())
```
Notes: adapt device names, credentials, and interface names to your `testbed.yml`. The script intentionally keeps parsing minimal; replace the regex checks with Genie parsers (`device.parse("show ip mroute ...")`) for production-grade parsing.
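For reference, a minimal sketch of that Genie-based upgrade, assuming Genie's `show ip mroute` parser schema (verify the dictionary keys against your Genie release):

```python
# Sketch: Genie-parsed alternative to the regex checks. The dictionary keys
# follow Genie's "show ip mroute" schema; verify against your Genie release.
from pyats.topology import loader

testbed = loader.load("testbed.yml")
dev = testbed.devices["R5"]
dev.connect(log_stdout=False)

parsed = dev.parse("show ip mroute 239.10.10.10")
groups = parsed["vrf"]["default"]["address_family"]["ipv4"]["multicast_group"]
entry = groups.get("239.10.10.10", {})
for source, data in entry.get("source_address", {}).items():
    oil = data.get("outgoing_interface_list", {})
    print(f"source={source} OIL={list(oil)}")
    # Structured check replaces the brittle substring match on RECEIVER_IF
    assert "GigabitEthernet0/1" in oil, "receiver interface missing from OIL"
```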
Explanation by Line
I’ll walk through the important sections of `multicast_validate.py` so you understand why each step exists and how to adapt or extend it.
- Header & usage: `TESTBED_FILE = sys.argv[1] if len(sys.argv) > 1 else "testbed.yml"` accepts the testbed file as a CLI parameter, keeping the script flexible.
- Constants: variables like `SENDER`, `RECEIVER_ACCESS_ROUTER`, `MULTICAST_GROUP`, and `RP_IP` are the lab variables. Changing these is how you adapt the check to different groups and labs.
- `connect_all(testbed)`: iterates `testbed.devices` and calls `dev.connect()`. It’s essential to catch exceptions (devices down, wrong credentials). `log_stdout=False` prevents noisy logs; set it to `True` for debugging.
- `exec_on(dev, cmd)`: thin wrapper to standardize command execution and centralize error handling. All device commands go through this function, which makes it easy to add logging, timestamps, or command retries later.
- Pre-checks (route to RP and PIM neighbors):
  - Ensure a route to the RP exists: the multicast control plane depends on the RP being reachable; otherwise the RP may never receive register messages.
  - Check `show ip pim neighbor` on routers that should have PIM adjacencies. If PIM neighbors are missing, multicast routing will fail.
- Trigger multicast traffic: `cmd_ping = f"ping {MULTICAST_GROUP} source {SENDER_SRC_IP} repeat 5"` followed by `out_ping = exec_on(dev_s, cmd_ping)`. The script uses a ping to the multicast group from the sender’s source IP. On Cisco IOS devices, `ping <group> source <src_ip> repeat N` is commonly supported and generates multicast traffic that the routers should forward. If your sender is a Linux host, use `nping` or `iperf -u -c <group> -T <ttl>` (the `-T` flag sets the multicast TTL); a plain-Python sender is sketched just after this list.
- Collect state: the script collects `show ip mroute`, `show ip igmp groups`, and `show ip pim neighbor` from key routers. This gives you both control-plane and data-plane indicators.
- Validations & assertions:
  - Check that the receiver interface appears in the OIL for the group on the access router (R5). If the receiver interface doesn’t appear, the IGMP join or PIM forwarding failed.
  - Check for `(S,G)` entries on core routers: presence of `(S,G)` indicates the source is known and the router has RPF information.
  - If only `(*,G)` is present, initial joins may have created shared-tree entries and `(S,G)` may be learned later (depending on the RP and data flows).
- Reporting: the script prints `[PASS]`/`[FAIL]` style messages. For automation pipelines, return codes are used (0 for success, non-zero for failure) so CI/CD systems or orchestration engines can react to the validation result.
How to harden this script for production
- Use Genie parsing: `dev.parse("show ip mroute")` returns structured output, removing the brittle regex checks.
- Add retries and exponential backoff when waiting for `(*,G)` → `(S,G)` transitions (sketched below).
- Capture outputs to persistent logs or test reports (e.g., pytest/JUnit).
- Add traffic-generator integration (iperf, TRex) for better data-plane validation and packet counts.
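As a sketch of the retry idea, a hypothetical helper like `wait_for_sg()` below polls the mroute table with exponential backoff instead of the script's fixed `time.sleep(5)`:

```python
# Sketch: poll "show ip mroute <group>" with exponential backoff until the
# (S,G) entry appears or we give up. wait_for_sg() is a hypothetical helper.
import re
import time

def wait_for_sg(dev, source, group, attempts=5, base_delay=2):
    """Return True once (source, group) shows up in the mroute table."""
    pattern = re.compile(r"\(" + re.escape(source) + r",\s*" + re.escape(group) + r"\)")
    for attempt in range(attempts):
        out = dev.execute(f"show ip mroute {group}")
        if pattern.search(out):
            return True
        delay = base_delay * (2 ** attempt)    # 2s, 4s, 8s, ...
        print(f"[WAIT] (S,G) not yet on {dev.name}; retrying in {delay}s")
        time.sleep(delay)
    return False
```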
testbed.yml Example
Below is a minimal, realistic `testbed.yml` for the pyATS script above. Fill in actual IPs/credentials.
```yaml
testbed:
  name: multicast_lab
  credentials:
    default:
      username: lab
      password: lab123

devices:
  R1:
    os: ios
    type: router
    connections:
      cli:
        protocol: ssh
        ip: 10.0.12.1
  R2:
    os: ios
    type: router
    connections:
      cli:
        protocol: ssh
        ip: 10.0.12.2
  R3:
    os: ios
    type: router
    connections:
      cli:
        protocol: ssh
        ip: 10.0.23.2
  R4:
    os: ios
    type: router
    connections:
      cli:
        protocol: ssh
        ip: 10.0.34.2
  R5:
    os: ios
    type: router
    connections:
      cli:
        protocol: ssh
        ip: 10.0.34.3
  Sender:
    os: linux
    type: host
    connections:
      cli:
        protocol: ssh
        ip: 10.1.1.2
  Receiver:
    os: linux
    type: host
    connections:
      cli:
        protocol: ssh
        ip: 10.2.1.10
```
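Before running the full validation, a quick sanity check that the file parses and all seven devices are present:

```python
# Quick sanity check that testbed.yml parses and lists the expected devices.
from pyats.topology import loader

tb = loader.load("testbed.yml")
print(sorted(tb.devices))    # expect: R1..R5, Receiver, Sender
```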
Notes
- Use real management IPs reachable from the machine running pyATS.
- If you are using IOS XE, NX-OS, or other platforms, set `os:` accordingly (`iosxe`, `nxos`) and ensure pyATS supports those platforms.
- For Linux hosts you may use `ip` commands (e.g., `ip maddr show`) and `iperf` to produce IGMP joins and traffic; a plain-Python receiver join is sketched below.
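As referenced in the last note, here is a plain-Python sketch for the Receiver host: joining the group makes the kernel send the IGMP membership report R5 should record, and the blocking `recvfrom()` doubles as a data-plane check. Port 5001 matches the arbitrary port in the sender sketch.

```python
# Sketch: join the group from the Receiver host. The kernel sends the IGMP
# membership report; recvfrom() then blocks until a multicast packet arrives.
import socket
import struct

GROUP, PORT = "239.10.10.10", 5001    # port matches the sender sketch

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP wants the group address plus a local interface (any here)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(1024)
print(f"received {data!r} from {addr}")
```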
Post-validation CLI
Below are realistic sample outputs you should expect after a successful test. Use them to compare against your own lab output.
A. `show ip igmp groups` on the access router (R5)

```
R5# show ip igmp groups
IP IGMP Groups
Group Address    Interfaces
239.10.10.10     GigabitEthernet0/1
  Uptime: 00:02:12   Last reporter: 10.2.1.10
  Sources: (none)
  Flags: IGMPv2
```
Interpretation: Receiver 10.2.1.10 has joined 239.10.10.10 and the access interface Gi0/1 is the downstream OIF.
B. `show ip pim neighbor` on a core router (R2)

```
R2# show ip pim neighbor
PIM Neighbor Table
Interface    Address      Uptime    Version  State
Gi0/0        10.0.12.1    00:15:23  v2       connected
Gi0/2        10.0.23.2    00:10:11  v2       connected
```
Interpretation: PIM adjacencies look healthy.
C. `show ip mroute 239.10.10.10` on a core router (R3)

```
R3# show ip mroute 239.10.10.10
IP Multicast Routing Table
Flags: S - Sparse, F - Forwarding, J - Join

(10.1.1.2, 239.10.10.10) Uptime: 00:00:12, Flags: S
  Incoming interface: GigabitEthernet0/0, RPF nbr 10.0.12.1
  Outgoing interface list:
    GigabitEthernet0/1 (forwarding), GigabitEthernet0/2 (forwarding)
  Packet count: 320  Bytes: 20480

(*, 239.10.10.10) Uptime: 00:01:00, Flags: S
  Incoming interface: Null
  Outgoing interface list:
    GigabitEthernet0/2 (forwarding)
```
Interpretation: the `(S,G)` entry is present, showing the RPF neighbor and the list of outgoing interfaces. A nonzero packet count indicates the data plane is flowing.
D. `show ip route 10.1.1.2` (verify reachability to source / RPF)

```
R3# show ip route 10.1.1.2
Routing entry for 10.1.1.0/24
  Known via "ospf 1", distance 110, metric 20, type intra-area
  Last update 00:00:32, from 10.0.12.1
  Routing Descriptor Blocks:
  * 10.0.12.1, via GigabitEthernet0/0
      Route metric is 20, traffic share count 1
```
Interpretation: R3 has a route to the source network via the expected RPF interface.
E. (Optional) `show ip mroute | include Packet count` — packet statistics (if supported)

```
R3# show ip mroute | include Packet count
  Packet count: 320  Bytes: 20480
```
Interpretation: nonzero packet counts on the forwarding routers prove the data plane is passing traffic.
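To assert this programmatically rather than by eye, a sketch like the following samples the counter twice while the sender transmits; it assumes a `Packet count:` line as in sample C, with `R3` as the sampled router:

```python
# Sketch: sample the mroute packet counter twice and assert it grows while
# the sender keeps transmitting. Assumes a "Packet count: N" line (sample C).
import re
import time

from pyats.topology import loader

dev = loader.load("testbed.yml").devices["R3"]
dev.connect(log_stdout=False)

def mroute_packets(group):
    out = dev.execute(f"show ip mroute {group}")
    m = re.search(r"Packet count:\s*(\d+)", out)
    return int(m.group(1)) if m else 0

before = mroute_packets("239.10.10.10")
time.sleep(10)                  # keep the sender transmitting in this window
after = mroute_packets("239.10.10.10")
assert after > before, f"no packet-count growth ({before} -> {after})"
print(f"[PASS] forwarding counter grew: {before} -> {after}")
```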
FAQs
Q1: Why do I see `(*,G)` on some routers but not `(S,G)` right away?
A: When receivers initially join via IGMP, routers may create a `(*,G)` shared-tree entry pointing to the RP. The source-specific `(S,G)` entry is created when the RP learns of source traffic (via register messages) or when the network switches to the shortest-path tree (SPT). Allow a few seconds after traffic injection for `(S,G)` to appear.
Q2: Why is my Receiver not getting traffic even though `show ip mroute` shows forwarding interfaces?
A: Possible causes: ACLs blocking multicast, an IGMP version mismatch, or the receiver's host firewall dropping multicast. Validate host-level multicast membership (`ip maddr` on Linux), and confirm that the access interface is not administratively down. Also check MTU and TTL (multicast pings may fail if the TTL is set too low).
Q3: How do I validate RP reachability?
A: Run `show ip route <rp-ip>` and `ping <rp-ip>` from the core routers; ensure the RP is in the routing table. Also check `show ip pim rp` (or the vendor variant) to see the RP mapping and registrations.
Q4: What if domains use different RPs?
A: Use MSDP between RPs to exchange source active messages, or configure a shared/peered RP reachable from both domains. Another approach is using BSR and having both domains accept the same RP information via routing.
Q5: How do I test the multicast data plane without a sender host?
A: Use `ping <group> source <src-ip>` from a router interface configured with the source IP. Alternatively, use a traffic generator (iperf, nping) on a Linux VM.
Q6: Can pyATS validate packet counts to assert QoS or packet loss?
A: Yes — collect `show ip mroute` outputs, which include packet counts, or integrate packet-capture tools (tcpdump, Wireshark) on the receiver to check received packets. For rigorous measurement, use a traffic generator that reports packet loss.
Q7: What are common failures crossing domains?
A: RPF failures (source not reachable via expected interface), mismatched PIM modes, lack of MSDP or RP peering, routing asymmetry, or route filtering of RP/source networks.
Q8: How to extend this test for IPv6 multicast?
A: Use MLD (Multicast Listener Discovery) and PIM for IPv6. The same principles apply, but the CLI commands differ (`show ipv6 mroute`, `show ipv6 pim neighbor`, etc.). Update the pyATS commands accordingly; a mapping sketch follows.
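A small sketch of those substitutions; wiring a dict like this into the script's `exec_on()` calls covers most of an MLD/PIMv6 variant (command spellings follow Cisco IOS conventions; verify on your platform):

```python
# Sketch: IPv4 -> IPv6 command substitutions for an MLD/PIMv6 variant.
# Command spellings follow Cisco IOS conventions; verify on your platform.
V6_COMMANDS = {
    "show ip mroute": "show ipv6 mroute",
    "show ip pim neighbor": "show ipv6 pim neighbor",
    "show ip igmp groups": "show ipv6 mld groups",   # MLD replaces IGMP
    "show ip route": "show ipv6 route",
}
```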
YouTube Link
Watch the complete "Check end-to-end multicast routing across domains using pyATS for Cisco" [Python for Network Engineer] lab demo and explanation on our channel:
Join our training
If this article helped you see how brittle multicast can be in multi-domain deployments, imagine a 3-month instructor-led program that teaches you Python, Ansible, API automation, and Cisco DevNet-style workflows — everything you need to automate large-scale network testing like the multicast lab above.
Trainer Sagar Dhawan runs a hands-on 3-month instructor-led program where we cover:
- Python for Network Engineer fundamentals (pyATS, Genie, requests)
- Automating device configuration and verification (Ansible, Netmiko, NAPALM)
- Building test harnesses and CI pipelines for network validation
- Lab workshops: multicast, BGP, EVPN, automation of day-2 operations
- Building dashboards and actionable reports from test outputs
This course is designed for network engineers who want to move from manual CLI checks to reproducible, automated validation. The course syllabus, schedule and registration are at:
https://course.networkjourney.com/python-ansible-api-cisco-devnet-for-network-engineers/
Enroll Now & Future‑Proof Your Career
Email: info@networkjourney.com
WhatsApp / Call: +91 97395 21088