[Day #60 PyATS Series] Building Reusable Validation Libraries (Multi-Vendor) Using pyATS for Cisco [Python for Network Engineer]
Table of Contents
Introduction — key points
As a network engineer moving into automation, one of the most valuable skills is creating reusable validation libraries: small, well-tested units (functions/classes) that encapsulate checks like “BGP neighbor up”, “interface up”, “ACL contains permit for X”, or “ospf adjacency count”. Reuse reduces duplication, improves quality, and lets you evolve tests (fix once — benefit everywhere).
Today I’ll show you how to:
- Design a vendor-agnostic library API that your team can import into pyATS jobs.
- Implement concrete validators for Cisco, Arista, Palo Alto and FortiGate using a consistent interface.
- Integrate the library with pyATS `aetest` testcases (so tests produce pyATS reports) and with plain Python CLI runners.
- Unit-test validators using sample CLI outputs (so students can run tests offline).
- Package, version and run validators in CI, and push results to a GUI (Kibana/Flask) for NOC consumption.
Topology Overview
We assume a small multi-vendor lab where the validation library will be exercised:

- Automation host runs pyATS and the reusable library.
- Devices reachable via SSH, with credentials defined in `testbed.yml`.
- GUI optional: the library writes JSON results that a simple Flask app or ELK can consume.
Topology & Communications
What we will validate (examples):
- Interface health: `up/up`, error counters under threshold.
- Routing: OSPF adjacencies full, BGP neighbors up, specific prefixes present.
- ACLs / policies: rule present and ordering correct.
- CPU/memory thresholds.
- Custom vendor checks (e.g., Palo Alto security rule hit counts, FortiGate firewall sessions).
How validators communicate:
- All validators accept parsed data (preferred) or raw CLI text as a fallback.
- For parsed data we use Genie (`device.parse('show ip bgp summary')`) when available. If no Genie parser exists for a command, validators accept raw text and include a small raw parser.
- The library API is intentionally device-agnostic: functions accept `device_name`, `platform`, and `data` (raw or parsed). This lets the same function be called from a pyATS testcase, a CLI runner, or CI.
Why this pattern?
- Keeps check logic decoupled from collection logic (separation of concerns).
- Allows offline unit tests by feeding sample outputs into validators.
- Enables consistent reporting: every check returns a result object `{status: PASS/FAIL, details: ..., evidence: ...}`.
Workflow Script — library layout & runnable examples
Recommended repository layout
```
pyats-validations/
├─ validations/
│  ├─ __init__.py
│  ├─ base.py          # common result object + utilities
│  ├─ interface.py     # interface validators
│  ├─ routing.py       # BGP/OSPF validators
│  ├─ acl.py           # ACL & policy validators
│  ├─ vendor/
│  │  ├─ cisco.py
│  │  ├─ arista.py
│  │  ├─ paloalto.py
│  │  └─ fortigate.py
├─ tests/
│  ├─ samples/
│  │  ├─ ios_show_ip_int_brief.txt
│  │  ├─ ios_bgp_summary.txt
│  │  └─ ...
│  ├─ test_interface.py
│  └─ test_routing.py
├─ run_validation.py   # CLI runner (collect + validate)
├─ pyats_job.py        # aetest job using the library
├─ testbed.yml
├─ setup.py / pyproject.toml
└─ README.md
```
Core result object (`validations/base.py`)

```python
# validations/base.py
from dataclasses import dataclass, asdict, field
from typing import Any, Dict, Optional
import time
import json

@dataclass
class CheckResult:
    name: str
    status: str  # "PASS" or "FAIL"
    details: str
    evidence: Optional[Dict[str, Any]] = None
    # default_factory so each instance gets its own timestamp
    # (a plain `= time.time()` would be evaluated once at import time)
    timestamp: float = field(default_factory=time.time)

    def to_dict(self):
        d = asdict(self)
        d['timestamp'] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(self.timestamp))
        return d

def save_result_to_file(result: CheckResult, path: str):
    with open(path, 'w') as f:
        json.dump(result.to_dict(), f, indent=2)
```
Interface validator (`validations/interface.py`)

```python
# validations/interface.py
from .base import CheckResult
import re

def validate_interface_up(device_name: str, interface_data, interface_name: str):
    """
    interface_data: parsed dict or raw text (we support both)
    interface_name: e.g. 'GigabitEthernet0/1'
    """
    # Try structured first: interface_data is expected as
    # {intf: {'status': 'up', 'protocol': 'up'}}
    if isinstance(interface_data, dict):
        info = interface_data.get(interface_name)
        if not info:
            return CheckResult(
                name=f"{device_name}:interface:{interface_name}:exists",
                status="FAIL",
                details=f"Interface {interface_name} not found",
                evidence={'interface_data': interface_data}
            )
        up = info.get('status') == 'up' and info.get('protocol') == 'up'
        return CheckResult(
            name=f"{device_name}:interface:{interface_name}:up",
            status="PASS" if up else "FAIL",
            details="Interface is up/up" if up
                    else f"Interface not up: status={info.get('status')} proto={info.get('protocol')}",
            evidence={'interface': info}
        )

    # Raw text fallback - simple regex over 'show ip interface brief' columns
    m = re.search(
        rf'^{re.escape(interface_name)}\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)',
        interface_data, re.M
    )
    if m:
        status = m.group(4)
        protocol = m.group(5)
        up = status.lower() == 'up' and protocol.lower() == 'up'
        return CheckResult(
            name=f"{device_name}:interface:{interface_name}:up",
            status="PASS" if up else "FAIL",
            details="Interface is up/up" if up
                    else f"Interface not up: status={status} proto={protocol}",
            evidence={'line': m.group(0)}
        )
    return CheckResult(
        name=f"{device_name}:interface:{interface_name}:parse_error",
        status="FAIL",
        details="Unable to parse interface data",
        evidence={'raw': interface_data[:1000] if isinstance(interface_data, str) else interface_data}
    )
```
Routing validator (BGP neighbor up) (`validations/routing.py`)

```python
# validations/routing.py
from .base import CheckResult

def validate_bgp_neighbor(device_name: str, bgp_parsed, neighbor_ip: str):
    """
    bgp_parsed: Genie parsed dict if available, else raw text.
    """
    # if parsed
    if isinstance(bgp_parsed, dict):
        # Genie typically exposes:
        # bgp['instance'][...]['vrf'][...]['neighbor'][<ip>]['session_state']
        # We try a couple of common access paths (defensive)
        try:
            for inst in bgp_parsed.get('instance', {}).values():
                for vrf in inst.get('vrf', {}).values():
                    neighs = vrf.get('neighbor', {})
                    if neighbor_ip in neighs:
                        state = (neighs[neighbor_ip].get('session_state')
                                 or neighs[neighbor_ip].get('state'))
                        # BGP sessions report 'Established' when up
                        up = state is not None and state.lower() in ('established', 'up')
                        return CheckResult(
                            name=f"{device_name}:bgp:neighbor:{neighbor_ip}",
                            status='PASS' if up else 'FAIL',
                            details=f"neighbor state={state}",
                            evidence={'neighbor': neighs[neighbor_ip]}
                        )
        except Exception:
            pass

    # raw fallback
    if isinstance(bgp_parsed, str):
        if neighbor_ip in bgp_parsed and ('Established' in bgp_parsed or 'Up' in bgp_parsed):
            return CheckResult(
                name=f"{device_name}:bgp:neighbor:{neighbor_ip}",
                status="PASS",
                details="neighbor appears up (raw match)",
                evidence={'snippet': bgp_parsed[:500]}
            )
        return CheckResult(
            name=f"{device_name}:bgp:neighbor:{neighbor_ip}",
            status="FAIL",
            details="neighbor not up (raw)",
            evidence={'raw': bgp_parsed[:500]}
        )

    return CheckResult(
        name=f"{device_name}:bgp:neighbor:{neighbor_ip}",
        status="FAIL",
        details="no data",
        evidence=None
    )
```
Vendor helpers (example `validations/vendor/cisco.py`)

```python
# validations/vendor/cisco.py
def extract_interfaces_from_show_ip_int_brief(raw_text):
    """
    Return dict:
      { 'GigabitEthernet0/1': {'ip': '10.0.0.1', 'status': 'up', 'protocol': 'up'} }
    """
    res = {}
    for line in raw_text.splitlines():
        parts = line.split()
        # crude parse: Interface, IP-Address, OK?, Method, Status, Protocol
        # (length check first so empty/header lines are skipped safely)
        if len(parts) >= 6 and parts[0].startswith(('Gig', 'Fast', 'Loopback')):
            intf = parts[0]
            ip = parts[1]
            status = parts[-2]
            proto = parts[-1]
            res[intf] = {'ip': ip, 'status': status, 'protocol': proto}
    return res
```
The examples above are intentionally small and readable. In production, prefer Genie parsing where available and extend the vendor modules to parse outputs robustly.
Explanation by Line — deep walk-through of design & decisions
I’ll explain why I structured the library this way and how each piece helps students:
1. Separation: collectors vs validators
- Collectors (pyATS jobs, `run_validation.py`) are responsible for connecting to equipment and saving raw and/or parsed data. They do not contain heavy validation logic, which keeps the network I/O code minimal and replaceable.
- Validators are pure functions that operate on data passed in. This makes them easy to unit test (feed sample output strings into them) and to reuse outside pyATS.
2. Result object `CheckResult`
- Every validator returns a `CheckResult` dataclass. Advantages:
  - Uniform reporting: callers (pyATS testcase, CLI runner, CI) can rely on consistent fields.
  - Easy to serialize to JSON and push to GUIs or Elasticsearch.
  - Time-stamped, with `evidence` so operators can quickly triage.
3. Vendor modules
- Wrapping vendor parsing & small transformations in `validations/vendor/*` keeps the core validators vendor-agnostic. For example, `validate_interface_up()` receives a dict mapping interface → status; the vendor module is responsible for producing such dicts from raw CLI.
- If you switch from NX-OS to EOS, only the vendor module needs updates.
4. Genie parsing where possible
- `device.parse(cmd)` provides structured output. Validators first try to consume structured parsed data (e.g., a `bgp_parsed` dict) and fall back to raw-text parsing only when parsed data is not available. This improves accuracy and reduces regex fragility.
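The parse-first, raw-fallback pattern can be captured in one small collector helper. This is a sketch; in real code you would catch Genie's specific exceptions (such as `SchemaEmptyParserError`) rather than a bare `Exception`, and `device` is assumed to be a connected pyATS device exposing `.parse()` and `.execute()`:

```python
# Sketch: try structured parsing first, fall back to raw CLI text when no
# Genie parser exists for the command on this platform.
def collect(device, command):
    """Return (data, is_parsed): a parsed dict when a parser exists,
    otherwise the raw CLI output string."""
    try:
        return device.parse(command), True
    except Exception:
        # No parser available for this command/platform
        return device.execute(command), False
```

Validators then branch on `is_parsed` (or simply on `isinstance(data, dict)`, as the library code above does).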
5. Unit testing
- Because validators accept raw data or parsed dicts, unit tests can simply load files from `tests/samples/` and assert the expected `CheckResult.status` without needing a lab or devices. This is a huge productivity win for your students.
6. pyATS integration
- Use these validators inside `aetest` testcases to produce pass/fail tests recorded by pyATS reports. Example `pyats_job.py` (abridged):
```python
# pyats_job.py
from genie.testbed import load
from pyats import aetest

from validations.interface import validate_interface_up
from validations.vendor.cisco import extract_interfaces_from_show_ip_int_brief

class InterfaceTests(aetest.Testcase):

    @aetest.setup
    def setup(self):
        self.tb = load('testbed.yml')

    @aetest.test
    def check_interfaces(self):
        for name, device in self.tb.devices.items():
            device.connect()
            raw = device.execute('show ip interface brief')
            parsed = extract_interfaces_from_show_ip_int_brief(raw)  # vendor helper
            res = validate_interface_up(name, parsed, 'GigabitEthernet0/0')
            assert res.status == "PASS", res.details
            device.disconnect()
```
`aetest` will record each check in its report; failures are surfaced in pyATS reports.
`testbed.yml` Example
Use a testbed that includes credentials and devices. Don’t commit real creds.
```yaml
testbed:
  name: reusable_validation_lab
  credentials:
    default:
      username: netops
      password: NetOps!23

devices:
  R1:
    os: iosxe
    type: router
    connections:
      cli:
        protocol: ssh
        ip: 10.0.100.11
  A1:
    os: eos
    type: switch
    connections:
      cli:
        protocol: ssh
        ip: 10.0.100.21
  PA1:
    os: panos
    type: firewall
    connections:
      cli:
        protocol: ssh
        ip: 10.0.100.31
  FG1:
    os: fortios
    type: firewall
    connections:
      cli:
        protocol: ssh
        ip: 10.0.100.41
```
Tip: Put device role/site metadata under `custom:` and use it in reports.
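For example, a per-device `custom:` block might look like this (the keys `role`, `site`, and `owner_team` are illustrative; pyATS passes `custom:` data through untouched, so you can choose any schema):

```yaml
devices:
  R1:
    os: iosxe
    custom:
      role: core-router
      site: dc1
      owner_team: noc
```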
Post-validation CLI (Real expected output)
Below are realistic sample outputs and the expected validator results:
A. Raw `show ip interface brief` (Cisco IOS-XE)
```
R1# show ip interface brief
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0     10.0.0.1        YES manual up                    up
GigabitEthernet0/1     10.0.1.1        YES manual down                  down
Loopback0              1.1.1.1         YES manual up                    up
```
B. Validator run (CLI runner `run_validation.py`) — example output:
```
[INFO] Running validation: R1 interface GigabitEthernet0/0 up
[PASS] R1:interface:GigabitEthernet0/0:up - Interface is up/up
[INFO] Running validation: A1 interface Ethernet1 up
[FAIL] A1:interface:Ethernet1:up - Interface not up: status=administratively down proto=down
Saved result to results/run001/R1/interface_GigabitEthernet0_0.json
Saved result to results/run001/A1/interface_Ethernet1.json
```
Result JSON (excerpt):
{ "name": "R1:interface:GigabitEthernet0/0:up", "status": "PASS", "details": "Interface is up/up", "evidence": {"interface": {"ip": "10.0.0.1", "status": "up", "protocol": "up"}}, "timestamp": "2025-08-28T12:00:00Z" }
C. `show ip bgp summary` (Cisco) sample:
```
R1# show ip bgp summary
BGP router identifier 1.1.1.1, local AS number 65000
Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
192.0.2.2       4 65100   12345   12340     3456    0    0 2d03h             120
```
Validator result (BGP neighbor check):
```
[PASS] R1:bgp:neighbor:192.0.2.2 - neighbor state=Established
```
Appendix — Packaging, CI, and GUI integration (practical guide)
Packaging & distribution
- Add `pyproject.toml` / `setup.cfg` and publish to a private PyPI, or use `pip install -e .` during deployments.
- Tag releases and follow semantic versioning; include a CHANGELOG.
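A minimal `pyproject.toml` along these lines could be (all metadata values are placeholders to adapt):

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "pyats-validations"
version = "0.1.0"
description = "Reusable multi-vendor network validation library"
requires-python = ">=3.8"
dependencies = [
    "pyats[full]",   # pulls in Genie parsers as well
]
```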
Unit testing with pytest
- Put sample outputs in `tests/samples/`.
- Tests should cover:
- Positive and negative cases.
- Parser edge cases (missing fields, truncated outputs).
- Schema validation (CheckResult fields).
Example `tests/test_interface.py`:
```python
from validations.interface import validate_interface_up
from validations.vendor.cisco import extract_interfaces_from_show_ip_int_brief

def test_interface_up_sample():
    raw = open('tests/samples/ios_show_ip_int_brief.txt').read()
    parsed = extract_interfaces_from_show_ip_int_brief(raw)
    res = validate_interface_up('R1', parsed, 'GigabitEthernet0/0')
    assert res.status == 'PASS'
```
CI pipeline (example GitHub Actions)
- On PR: run `flake8`, `pytest`, and `mypy` (if typing is used).
- On merge: build the package and publish to internal PyPI.
- Optional: trigger a smoke validation run against a staging lab.
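The PR stage of such a pipeline might look like the following workflow sketch (file path, job name, and the dev-tool list are illustrative):

```yaml
# .github/workflows/ci.yml
name: ci
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -e . flake8 pytest
      - run: flake8 validations
      - run: pytest tests/
```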
GUI integration (quick options)
- Flask: create a simple endpoint that loads `results/<run_id>/*.json` and presents a web UI for operators.
- ELK: index each `CheckResult` as a document into Elasticsearch and build Kibana dashboards (pass/fail trends, top failing checks, per-device heatmap).
- Prometheus/Grafana: export numeric metrics (e.g., 1 for PASS, 0 for FAIL) and visualize the time series.
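Whichever frontend you pick, the first step is the same: aggregate the saved result files. A small (hypothetical) helper along these lines could back a Flask view or a metrics exporter; the `summarize` name and flat-JSON layout are assumptions matching the `CheckResult.to_dict()` output shown earlier:

```python
# Aggregate saved CheckResult JSON files into status counts,
# e.g. {"PASS": 42, "FAIL": 3}.
import json
from collections import Counter
from pathlib import Path


def summarize(results_dir: str) -> dict:
    """Count statuses across every *.json result file under results_dir,
    including per-device subdirectories."""
    counts = Counter()
    for path in Path(results_dir).rglob("*.json"):
        result = json.loads(path.read_text())
        counts[result.get("status", "UNKNOWN")] += 1
    return dict(counts)
```

A Flask route would simply `return jsonify(summarize("results/run001"))`; a Prometheus exporter would turn the same counts into gauges.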
FAQs
1. Why build a reusable library instead of writing ad-hoc checks in each job?
Answer: Reusability brings consistency, reduces bugs, and centralizes fixes. When a parsing improvement is needed (e.g., for a platform change), fix the library once and all jobs benefit. Unit tests ensure you don’t regress behavior.
2. How do I handle vendor differences in a single validator function?
Answer: Keep validators vendor-agnostic at the API boundary (they accept parsed dicts). Implement vendor parsers/adapters that translate vendor raw CLI into the agreed intermediate structure. This isolates vendor quirks to a small module.
3. Should I always use Genie `device.parse()`?
Answer: Use `device.parse()` when an appropriate parser exists; it returns rich structured data. However, not every command or vendor has a parser. Implement robust raw-parsing fallbacks and unit tests using sample outputs.
4. How do I test validators without hardware?
Answer: Save representative CLI outputs in `tests/samples/` and write `pytest` unit tests that load these files and call validators directly. This enables CI to run validation tests on every PR.
5. How does this integrate with pyATS reporting?
Answer: Use `CheckResult` objects inside `aetest` testcases, assert on `res.status == "PASS"`, and let pyATS capture failures. You can also serialize `CheckResult` to JSON for custom dashboards.
6. How do I scale validators to hundreds of devices?
Answer: Separate collection (concurrent collectors) and validation (parallelizable). Collect raw outputs and store in object storage (S3), then run validation jobs on a worker fleet. Avoid running parse/validate synchronously on the same process for large fleets.
7. What about secrets and credentials management?
Answer: Never commit credentials to `testbed.yml`. Use environment variables, Vault, or pyATS credential resolution. For CI pipelines, inject secrets via protected variables in Jenkins/GitHub Actions.
8. Can validators produce remediation actions?
Answer: Yes — validators can include a `recommended_action` field in `evidence`. But automation that writes configs should be gated and require multi-stage approvals. Validators are best used for detection; remediation should be a separate, auditable workflow.
YouTube Link
Watch the Complete Python for Network Engineer: Building reusable validation libraries (multi-vendor) Using pyATS for Cisco [Python for Network Engineer] Lab Demo & Explanation on our channel:
Join Our Training
If you want hands-on instructor-led guidance to build, test, package, and deploy reusable validation libraries at scale — with real lab exercises, code reviews, and CI pipelines — Trainer Sagar Dhawan’s 3-month instructor-led program covers Python, Ansible, API integrations and Cisco DevNet specifically tailored for network engineers. We translate these masterclass patterns into production projects you can ship to your team.
Learn more and enroll:
https://course.networkjourney.com/python-ansible-api-cisco-devnet-for-network-engineers/
Take your next step to becoming a confident Python for Network Engineer — build automation that your ops team trusts.
Enroll Now & Future‑Proof Your Career
Email: info@networkjourney.com
WhatsApp / Call: +91 97395 21088