Intermediate · Automation & Monitoring
Nornir · Python · Network Automation · Netmiko · NAPALM · Scrapli

Nornir: Python-Native Network Automation Without the Overhead

March 13, 2026·20 min read

Overview

Ansible is the default answer when someone says "network automation." It works — but it makes you write everything in YAML, and YAML stops scaling the moment you need conditional logic, dynamic result processing, or anything more complex than "push this config to these hosts." Nornir is the alternative: a Python library that provides everything Ansible gives you for networking (inventory, parallel execution, result aggregation) while keeping you in Python the entire time.

Nornir vs Ansible

| Capability | Nornir | Ansible |
| --- | --- | --- |
| Language | Pure Python — logic in code | YAML playbooks — logic via conditionals/modules |
| Parallel execution | ThreadedRunner (built-in), configurable workers | forks parameter — process-based |
| Inventory | YAML files or custom plugin (Netbox, CSV, SQL) | INI, YAML, dynamic inventory scripts |
| Conditional logic | Full Python — if/else, try/except, loops | when: clauses, block/rescue, limited branching |
| Result processing | Python objects — parse, transform, export freely | register vars — limited post-processing |
| Learning curve | Requires Python knowledge | Lower — YAML readable by non-developers |
| Ecosystem | Growing — netmiko, napalm, scrapli plugins | Large — thousands of modules, Galaxy roles |
| Best for | Complex automation, custom logic, CI/CD pipelines | Simple config push, standard workflows, teams with mixed skills |

Nornir vs Netmiko/NAPALM Alone

Netmiko and NAPALM are SSH/API connection libraries — they connect to one device at a time and return the result. You could write a for loop over a list of hosts and call Netmiko for each, but you get no parallelism, no inventory management, no structured result aggregation, and no plugin architecture.

Nornir adds the scaffolding around those libraries:

  • Inventory: hosts.yaml defines all devices, their groups, and metadata. No more lists in your script.
  • Parallel execution: ThreadedRunner runs your task against all hosts simultaneously — 200 devices in a few minutes instead of close to an hour.
  • Plugin architecture: Netmiko, NAPALM, and Scrapli are task plugins — swap them out without changing your inventory or runner.
  • Result aggregation: Every task returns a structured AggregatedResult object — iterate, filter, report.
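That scaffolding is worth making concrete. Below is a minimal stdlib sketch of what ThreadedRunner does conceptually: fan a task out over an inventory with a thread pool. The `simulate_show_version` function is a hypothetical stand-in for a real per-device Netmiko call, with a fixed 100 ms sleep standing in for device latency.

```python
import time
from concurrent.futures import ThreadPoolExecutor

HOSTS = [f"router-{i}" for i in range(1, 21)]  # stand-in inventory


def simulate_show_version(host: str) -> str:
    # Placeholder for a real per-device SSH call (e.g. Netmiko)
    time.sleep(0.1)  # pretend the device takes 100 ms to respond
    return f"{host}: IOS-XE 17.9"


# Sequential loop: total time scales with hosts x per-device latency
start = time.perf_counter()
sequential = [simulate_show_version(h) for h in HOSTS]
seq_elapsed = time.perf_counter() - start

# Threaded fan-out, the pattern Nornir's ThreadedRunner applies
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    parallel = list(pool.map(simulate_show_version, HOSTS))
par_elapsed = time.perf_counter() - start

print(f"sequential: {seq_elapsed:.2f}s, parallel: {par_elapsed:.2f}s")
```

With 20 simulated devices the sequential loop takes roughly 20 times the per-device latency, while the threaded version takes roughly one. Nornir's runner applies the same pattern and adds the inventory, plugin, and result-aggregation layers on top.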

Nornir Architecture

[Diagram: Nornir component architecture — the SimpleInventory plugin loads hosts.yaml, groups.yaml, and defaults.yaml into the Nornir engine (InitNornir(config_file), nr.run(task=my_task), ThreadedRunner with 20 workers). Task plugins (nornir-netmiko, nornir-napalm, nornir-scrapli) open parallel SSH sessions to the devices (e.g. Router-1 on IOS-XE, Switch-1 on NX-OS, FW-1 on PAN-OS, one thread each). Results return as an AggregatedResult of per-host MultiResult objects with a failed flag.]

Installation and Setup

# Install Nornir core and all major connection plugins
$ pip install nornir nornir-netmiko nornir-napalm nornir-scrapli nornir-utils
# Verify installed versions
$ pip show nornir | grep Version
Version: 3.4.1

Project Directory Structure

nornir_project/
├── config.yaml          # Nornir configuration — inventory plugin, runner settings
├── inventory/
│   ├── hosts.yaml       # All devices — hostname, platform, groups, custom data
│   ├── groups.yaml      # Group-level defaults — platform, connection options
│   └── defaults.yaml    # Global defaults — credentials applied to all hosts
└── tasks/
    ├── compliance.py    # Config compliance check task
    ├── backup.py        # Config backup task
    └── vlan_deploy.py   # VLAN deployment task

Inventory Configuration

The inventory is the core of Nornir — it defines every device, its connection parameters, and any custom metadata your tasks can reference at runtime.

hosts.yaml

yaml
---
# Cisco IOS-XE WAN routers
cebu-mpls-rtr-01:
  hostname: 10.100.1.1
  groups:
    - ios_routers
    - apac_region
  data:
    site: CEBU-PH
    region: APAC
    device_type: Router
    criticality: High
    team: Network-APAC

manila-mpls-rtr-01:
  hostname: 10.100.2.1
  groups:
    - ios_routers
    - apac_region
  data:
    site: MANILA-PH
    region: APAC
    device_type: Router
    criticality: High
    team: Network-APAC

# Cisco Nexus core switches
cebu-core-sw-01:
  hostname: 10.100.1.10
  groups:
    - nxos_switches
    - apac_region
  data:
    site: CEBU-PH
    region: APAC
    device_type: Switch
    criticality: High
    team: Network-APAC

# Palo Alto firewalls
cebu-fw-01:
  hostname: 10.100.1.254
  groups:
    - panos_firewalls
    - apac_region
  data:
    site: CEBU-PH
    region: APAC
    device_type: Firewall
    criticality: High
    team: Network-APAC-Security

groups.yaml

yaml
---
ios_routers:
  platform: ios
  connection_options:
    netmiko:
      extras:
        device_type: cisco_ios
        timeout: 60
        session_log: /var/log/nornir/netmiko_session.log

nxos_switches:
  platform: nxos
  connection_options:
    netmiko:
      extras:
        device_type: cisco_nxos
        timeout: 60

panos_firewalls:
  platform: panos
  connection_options:
    netmiko:
      extras:
        device_type: paloalto_panos
        timeout: 30

apac_region:
  data:
    timezone: Asia/Manila
    ntp_server: 10.0.0.1
    syslog_server: 10.0.0.2

defaults.yaml

yaml
---
username: netauto
password: AutoPass2024!
connection_options:
  netmiko:
    extras:
      global_delay_factor: 2
      conn_timeout: 30
  napalm:
    extras:
      optional_args:
        transport: ssh
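One caveat: defaults.yaml above carries credentials in plaintext, which is fine for a lab but not for a shared repo. A common pattern is to keep the username in the file and inject the password at runtime. A hedged sketch, where `NORNIR_USERNAME` and `NORNIR_PASSWORD` are variable names invented for this example, not something Nornir looks for on its own:

```python
import os


def load_credentials() -> tuple:
    """Pull device credentials from environment variables.

    NORNIR_USERNAME / NORNIR_PASSWORD are names chosen for this sketch,
    not a Nornir convention.
    """
    username = os.environ.get("NORNIR_USERNAME", "netauto")
    password = os.environ["NORNIR_PASSWORD"]  # raise KeyError if unset
    return username, password


# After InitNornir you would assign these onto the inventory defaults, e.g.:
#   nr.inventory.defaults.username, nr.inventory.defaults.password = load_credentials()
os.environ.setdefault("NORNIR_PASSWORD", "example-only")
print(load_credentials())
```

Failing loudly on a missing password beats silently falling back to a stale value baked into a YAML file.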

config.yaml

yaml
---
inventory:
  plugin: SimpleInventory
  options:
    host_file: inventory/hosts.yaml
    group_file: inventory/groups.yaml
    defaults_file: inventory/defaults.yaml
runner:
  plugin: threaded    # Nornir 3 registers the built-in ThreadedRunner under the name "threaded"
  options:
    num_workers: 20
logging:
  enabled: true
  level: INFO
  log_file: /var/log/nornir/nornir.log
  format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"

Basic Nornir Script Structure

python
from nornir import InitNornir
from nornir_netmiko.tasks import netmiko_send_command
from nornir_utils.plugins.functions import print_result

# Initialise Nornir — loads inventory, configures runner
nr = InitNornir(config_file="config.yaml")

# Run a task across ALL hosts in parallel
result = nr.run(
    task=netmiko_send_command,
    command_string="show version"
)

# Pretty-print results to console
print_result(result)

# Check for any failures
if result.failed:
    print(f"\nFailed hosts: {list(result.failed_hosts.keys())}")

The nr.run() call blocks until all hosts complete (or fail). With 20 workers and 200 hosts, the job runs as roughly ten batches of 20, so the wall-clock time is about what polling ten devices sequentially would take — the bottleneck is device response time, not the framework.


Filtering

Filtering narrows the target host set before running a task. Never modify the base nr object — use nr.filter() which returns a new Nornir object.

# Simple filter — single attribute match
ios_only = nr.filter(platform="ios")
result = ios_only.run(task=netmiko_send_command, command_string="show ip route summary")

# Filter by group membership
from nornir.core.filter import F
apac_routers = nr.filter(F(groups__contains="apac_region") & F(platform="ios"))

# Filter by data field (custom properties)
cebu_site = nr.filter(F(data__site="CEBU-PH"))

# Filter critical devices only
critical = nr.filter(F(data__criticality="High"))

# Combine: APAC region AND high criticality AND router type
apac_critical_routers = nr.filter(
    F(data__region="APAC") &
    F(data__criticality="High") &
    F(data__device_type="Router")
)

# Custom filter function — exclude devices matching a pattern
def not_management(host):
    return "mgmt" not in host.name.lower()

no_mgmt = nr.filter(filter_func=not_management)

Writing Custom Tasks

A Nornir task is a Python function that accepts a Task object as its first argument and returns a Result. Inside the function you can call sub-tasks, run conditionals, parse output, and raise exceptions — it is plain Python.

Task 1: Config Compliance Checker

python
from nornir import InitNornir
from nornir.core.task import Task, Result
from nornir_netmiko.tasks import netmiko_send_command
from nornir_utils.plugins.functions import print_result

# Required lines that every device MUST have in running-config
SECURITY_BASELINE = [
    "ntp server 10.0.0.1",
    "logging host 10.0.0.2",
    "ssh version 2",
    "service password-encryption",
    "no ip http server",
    "no ip http secure-server",
    "login block-for 300 attempts 5 within 60",
]


def compliance_check(task: Task) -> Result:
    """
    Check device running-config against security baseline.
    Returns Result with failed=True if any required lines are missing.
    """
    # Pull running config via Netmiko sub-task
    result = task.run(
        task=netmiko_send_command,
        command_string="show running-config"
    )
    running_config = result[0].result

    violations = []
    for required_line in SECURITY_BASELINE:
        if required_line not in running_config:
            violations.append(f"MISSING: {required_line}")

    if violations:
        return Result(
            host=task.host,
            result="\n".join(violations),
            failed=True
        )
    return Result(
        host=task.host,
        result="COMPLIANT — all baseline checks passed",
        failed=False
    )


if __name__ == "__main__":
    nr = InitNornir(config_file="config.yaml")

    # Run compliance check against all IOS devices
    ios_devices = nr.filter(platform="ios")
    result = ios_devices.run(task=compliance_check)
    print_result(result)

    # Summary
    compliant = [h for h, r in result.items() if not r.failed]
    non_compliant = [h for h, r in result.items() if r.failed]
    print(f"\nCompliant: {len(compliant)}")
    print(f"Non-compliant: {len(non_compliant)}")
    for host in non_compliant:
        print(f"  {host}:")
        for line in result[host][0].result.splitlines():
            print(f"    {line}")

Task 2: Mass VLAN Deployment

python
from nornir import InitNornir
from nornir.core.task import Task, Result
from nornir_netmiko.tasks import netmiko_send_command, netmiko_send_config
from nornir.core.filter import F

VLANS_TO_DEPLOY = [
    {"id": 100, "name": "DATA_VLAN"},
    {"id": 200, "name": "VOICE_VLAN"},
    {"id": 300, "name": "MANAGEMENT"},
    {"id": 999, "name": "BLACKHOLE"},
]


def deploy_vlans(task: Task) -> Result:
    """
    Deploy standard VLAN set to a switch and verify with show vlan brief.
    """
    # Build config commands
    config_commands = []
    for vlan in VLANS_TO_DEPLOY:
        config_commands.append(f"vlan {vlan['id']}")
        config_commands.append(f" name {vlan['name']}")

    # Push config
    task.run(
        task=netmiko_send_config,
        config_commands=config_commands
    )

    # Verify — pull show vlan brief and check each VLAN appears
    verify_result = task.run(
        task=netmiko_send_command,
        command_string="show vlan brief"
    )
    vlan_output = verify_result[0].result

    missing_vlans = []
    for vlan in VLANS_TO_DEPLOY:
        if str(vlan["id"]) not in vlan_output:
            missing_vlans.append(vlan["id"])

    if missing_vlans:
        return Result(
            host=task.host,
            result=f"VLAN deployment incomplete — missing VLANs: {missing_vlans}",
            failed=True
        )
    return Result(
        host=task.host,
        result=f"All {len(VLANS_TO_DEPLOY)} VLANs deployed and verified",
        failed=False
    )


if __name__ == "__main__":
    nr = InitNornir(config_file="config.yaml")

    # Target only NX-OS switches in APAC
    target = nr.filter(
        F(groups__contains="nxos_switches") &
        F(data__region="APAC")
    )
    print(f"Deploying VLANs to {len(target.inventory.hosts)} switches...")
    result = target.run(task=deploy_vlans)

    for host, multi_result in result.items():
        status = "OK" if not multi_result.failed else "FAIL"
        print(f"  [{status}] {host}: {multi_result[0].result}")

Nornir with NAPALM

NAPALM provides a vendor-agnostic API across IOS, NX-OS, EOS, JunOS, and others. Instead of parsing show command output, NAPALM returns structured Python dictionaries — the same call works across all vendors.

python
from nornir import InitNornir
from nornir.core.task import Task, Result
from nornir.core.filter import F
from nornir_napalm.plugins.tasks import napalm_get
import pandas as pd


def get_bgp_neighbors(task: Task) -> Result:
    """Collect BGP neighbor state from all routers via NAPALM."""
    result = task.run(
        task=napalm_get,
        getters=["bgp_neighbors"]
    )
    bgp_data = result[0].result.get("bgp_neighbors", {})

    neighbors = []
    # NAPALM bgp_neighbors structure: {vrf: {peer_ip: {peer_data}}}
    for vrf, vrf_data in bgp_data.items():
        peers = vrf_data.get("peers", {})
        for peer_ip, peer_data in peers.items():
            neighbors.append({
                "host": str(task.host),
                "site": task.host.data.get("site", "unknown"),
                "vrf": vrf,
                "peer_ip": peer_ip,
                "remote_as": peer_data.get("remote_as"),
                "is_up": peer_data.get("is_up"),
                "uptime": peer_data.get("uptime", -1),
                "received_prefixes": peer_data.get("address_family", {})
                                              .get("ipv4", {})
                                              .get("received_prefixes", 0),
            })

    return Result(
        host=task.host,
        result=neighbors,
        failed=False
    )


if __name__ == "__main__":
    nr = InitNornir(config_file="config.yaml")
    routers = nr.filter(F(data__device_type="Router"))
    result = routers.run(task=get_bgp_neighbors)

    # Aggregate all BGP neighbors into a single list
    all_neighbors = []
    for host, multi_result in result.items():
        if not multi_result.failed:
            all_neighbors.extend(multi_result[0].result)

    # Convert to DataFrame for analysis
    df = pd.DataFrame(all_neighbors)

    # Find down peers
    down_peers = df[df["is_up"] == False]
    if not down_peers.empty:
        print("DOWN BGP PEERS:")
        print(down_peers[["host", "site", "peer_ip", "remote_as"]].to_string(index=False))
    else:
        print(f"All {len(df)} BGP peers are established.")

    # Export full report
    df.to_csv("bgp_neighbor_report.csv", index=False)
    print(f"\nFull report saved: bgp_neighbor_report.csv ({len(df)} peers)")

NAPALM Config Replace with Rollback

One of NAPALM's most powerful features is candidate config management — load a candidate config, inspect the diff, commit if it looks right, or roll back. (The example below uses merge mode, replace=False; set replace=True for a true full-config replace.)

python
from nornir.core.task import Task, Result
from nornir_napalm.plugins.tasks import napalm_configure


def safe_config_replace(task: Task, config_path: str) -> Result:
    """
    Load a candidate config, show diff, and commit only if diff is non-empty.
    Supports rollback on failure.
    """
    with open(config_path) as f:
        new_config = f.read()

    # Load candidate — does NOT commit yet
    diff_result = task.run(
        task=napalm_configure,
        configuration=new_config,
        replace=False,   # merge, not replace
        dry_run=True     # returns diff only
    )
    diff = diff_result[0].result
    if not diff:
        return Result(host=task.host, result="No changes — config already matches", failed=False)

    print(f"[{task.host}] Diff:\n{diff}")

    # Commit if diff looks sane (add your own validation here)
    task.run(
        task=napalm_configure,
        configuration=new_config,
        replace=False,
        dry_run=False   # actual commit
    )
    return Result(host=task.host, result=f"Config applied:\n{diff}", failed=False)

Nornir with Scrapli

Scrapli is a faster, async-capable alternative to Netmiko. For large deployments running read-only tasks (show commands, config pulls), Scrapli's performance advantage is measurable — it uses fewer resources and has lower connection overhead.

python
from nornir import InitNornir
from nornir_scrapli.tasks import send_command

nr = InitNornir(config_file="config.yaml")

# Scrapli task interface is identical to Netmiko — just a different import
result = nr.filter(platform="ios").run(
    task=send_command,
    command="show ip interface brief"
)

for host, multi_result in result.items():
    if not multi_result.failed:
        print(f"\n{host}:")
        print(multi_result[0].result)

Scrapli itself is async-capable via its AsyncScrapli driver. Nornir's built-in runner is thread-based, so getting genuine non-blocking I/O means either a third-party async runner plugin or driving AsyncScrapli directly from asyncio code:

python
import asyncio

from scrapli import AsyncScrapli

device = {
    "host": "10.100.1.1",
    "auth_username": "netauto",
    "auth_password": "AutoPass2024!",
    "auth_strict_key": False,
    "platform": "cisco_iosxe",
    "transport": "asyncssh",  # async transport backed by the asyncssh package
}


async def main() -> str:
    async with AsyncScrapli(**device) as conn:
        response = await conn.send_command("show version")
        return response.result


print(asyncio.run(main()))

Result Handling and Reporting

python
import csv
import json
from datetime import datetime

from nornir import InitNornir


def run_compliance_and_report(nr):
    """Run compliance check and generate CSV + JSON reports."""
    from tasks.compliance import compliance_check

    result = nr.run(task=compliance_check)

    # Separate results by status
    compliant_hosts = []
    failed_hosts = []
    for host, multi_result in result.items():
        if multi_result.failed:
            failed_hosts.append(host)
            print(f"[FAIL] {host}: {multi_result[0].result[:80]}...")
        else:
            compliant_hosts.append(host)

    print(f"\nResults: {len(compliant_hosts)} compliant, {len(failed_hosts)} non-compliant")

    # Generate CSV report
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    csv_path = f"compliance_report_{timestamp}.csv"
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Host", "Site", "Region", "Status", "Violations", "Timestamp"])
        for host, multi_result in result.items():
            host_obj = nr.inventory.hosts[host]
            status = "FAILED" if multi_result.failed else "COMPLIANT"
            violations = multi_result[0].result if multi_result.failed else ""
            writer.writerow([
                host,
                host_obj.data.get("site", ""),
                host_obj.data.get("region", ""),
                status,
                violations,
                datetime.now().isoformat()
            ])
    print(f"CSV report: {csv_path}")

    # Generate JSON report for programmatic consumption / CMDB upload
    json_report = {
        "timestamp": datetime.now().isoformat(),
        "summary": {
            "total": len(result),
            "compliant": len(compliant_hosts),
            "non_compliant": len(failed_hosts),
        },
        "hosts": {}
    }
    for host, multi_result in result.items():
        host_obj = nr.inventory.hosts[host]
        json_report["hosts"][host] = {
            "status": "FAILED" if multi_result.failed else "COMPLIANT",
            "site": host_obj.data.get("site", ""),
            "region": host_obj.data.get("region", ""),
            "violations": multi_result[0].result.splitlines() if multi_result.failed else [],
        }

    json_path = f"compliance_report_{timestamp}.json"
    with open(json_path, "w") as f:
        json.dump(json_report, f, indent=2)
    print(f"JSON report: {json_path}")

    return result

Real-World Use Cases

1. Daily Config Backup (2am Cron Job)

python
"""backup_configs.py — runs nightly via cron at 02:00Pulls running-config from all devices and saves to Git repo."""import osimport subprocessfrom datetime import datetimefrom nornir import InitNornirfrom nornir.core.task import Task, Resultfrom nornir_netmiko.tasks import netmiko_send_commandBACKUP_DIR = "/opt/config-backups"def backup_config(task: Task) -> Result:    result = task.run(        task=netmiko_send_command,        command_string="show running-config"    )    config = result[0].result    site = task.host.data.get("site", "UNKNOWN")    site_dir = os.path.join(BACKUP_DIR, site)    os.makedirs(site_dir, exist_ok=True)    filename = f"{task.host.name}.cfg"    filepath = os.path.join(site_dir, filename)    with open(filepath, "w") as f:        f.write(config)    return Result(        host=task.host,        result=f"Backed up to {filepath} ({len(config)} bytes)",        failed=False    )if __name__ == "__main__":    nr = InitNornir(config_file="config.yaml")    result = nr.run(task=backup_config)    failed = [h for h, r in result.items() if r.failed]    succeeded = len(result) - len(failed)    print(f"Backup complete: {succeeded} succeeded, {len(failed)} failed")    # Commit to Git    subprocess.run(["git", "-C", BACKUP_DIR, "add", "-A"], check=True)    subprocess.run([        "git", "-C", BACKUP_DIR, "commit", "-m",        f"Nightly backup — {datetime.now().strftime('%Y-%m-%d %H:%M')} — {succeeded} devices"    ], check=False)  # check=False — no error if nothing changed

2. BGP Neighbor Health Check Across 42 Countries

python
"""bgp_health.py — check all BGP peers, alert on Down/Idle state.Integrates with PagerDuty to page on-call if critical peers are down."""import requestsfrom nornir import InitNornirfrom nornir_napalm.plugins.tasks import napalm_getPAGERDUTY_KEY = "your-routing-key-here"def alert_pagerduty(host, down_peers):    payload = {        "routing_key": PAGERDUTY_KEY,        "event_action": "trigger",        "dedup_key": f"{host}-bgp-health",        "payload": {            "summary": f"BGP peers down on {host}: {', '.join(down_peers)}",            "source": host,            "severity": "critical",            "class": "bgp",        }    }    requests.post("https://events.pagerduty.com/v2/enqueue", json=payload)if __name__ == "__main__":    nr = InitNornir(config_file="config.yaml")    routers = nr.filter(data__device_type="Router")    result = routers.run(task=napalm_get, getters=["bgp_neighbors"])    for host, multi_result in result.items():        if multi_result.failed:            print(f"[ERROR] Could not reach {host}")            continue        bgp_data = multi_result[0].result.get("bgp_neighbors", {})        down_peers = []        for vrf, vrf_data in bgp_data.items():            for peer_ip, peer_data in vrf_data.get("peers", {}).items():                if not peer_data.get("is_up", True):                    down_peers.append(peer_ip)        if down_peers:            print(f"[DOWN] {host}: {len(down_peers)} peer(s) down — {down_peers}")            criticality = nr.inventory.hosts[host].data.get("criticality", "Low")            if criticality == "High":                alert_pagerduty(host, down_peers)        else:            peers = sum(                len(v.get("peers", {}))                for v in bgp_data.values()            )            print(f"[OK]   {host}: {peers} BGP peer(s) all established")

3. VLAN Audit — Verify Consistency Across a Site

python
"""vlan_audit.py — verify that all switches in a site have the same VLANs.Flags any switch missing a VLAN that other switches in the site have."""from collections import defaultdictfrom nornir import InitNornirfrom nornir.core.task import Task, Resultfrom nornir_netmiko.tasks import netmiko_send_commandimport textfsmimport ioVLAN_TEMPLATE = """Value VLAN_ID (\d+)Value NAME (\S+)Value STATUS (active|act/lshut|suspended)Start  ^${VLAN_ID}\s+${NAME}\s+${STATUS} -> Record"""def get_vlans(task: Task) -> Result:    result = task.run(        task=netmiko_send_command,        command_string="show vlan brief"    )    # Parse with TextFSM    fsm = textfsm.TextFSM(io.StringIO(VLAN_TEMPLATE))    parsed = fsm.ParseText(result[0].result)    vlans = {int(row[0]) for row in parsed if row[2] == "active"}    return Result(host=task.host, result=vlans, failed=False)if __name__ == "__main__":    nr = InitNornir(config_file="config.yaml")    # Audit CEBU site switches    cebu_switches = nr.filter(        data__site="CEBU-PH",        data__device_type="Switch"    )    result = cebu_switches.run(task=get_vlans)    # Find the union of all VLANs across all switches    all_vlans = set()    host_vlans = {}    for host, multi_result in result.items():        if not multi_result.failed:            vlans = multi_result[0].result            host_vlans[host] = vlans            all_vlans.update(vlans)    # Report missing VLANs per switch    print(f"Total unique VLANs in site: {len(all_vlans)}")    for host, vlans in host_vlans.items():        missing = all_vlans - vlans        if missing:            print(f"  [MISSING] {host}: VLANs {sorted(missing)}")        else:            print(f"  [OK]      {host}: all {len(vlans)} VLANs present")
[Diagram: Nornir compliance workflow, parallel execution — hosts.yaml (200 devices across APAC, EMEA, AMER with platform, site, and criticality data) loads into the Nornir engine; nr.run(task=compliance_check) dispatches to all 200 hosts through the ThreadedRunner's 20-worker thread pool over parallel SSH; the AggregatedResult feeds the CSV + JSON report (e.g. compliance_report_20260313_020000.csv). Roughly 45 seconds for 200 devices versus roughly 50 minutes sequentially.]

Performance: Threading Tuning

The num_workers setting in config.yaml controls how many hosts are targeted simultaneously. Choosing the right value depends on what the task does:

| Task Type | Recommended num_workers | Reason |
| --- | --- | --- |
| Read-only (show commands, config pull) | 50 | No device state changes — safe to hammer many devices at once |
| Config push (send_config) | 10 | Reduces risk of simultaneous changes causing cascading issues |
| Config replace (NAPALM) | 5 | High-impact operation — slow down to allow engineer oversight |
| SNMP/API polling | 50–100 | Lightweight requests — connection overhead dominates, not CPU |
| Firmware upgrade | 3–5 | Devices reboot — stagger to avoid simultaneous outages |
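One practical way to apply this table is to keep a second config file per job type instead of editing config.yaml before every run. A hedged example for a config-push job (config_push.yaml is an invented filename; note that Nornir 3 registers the built-in ThreadedRunner under the lowercase name threaded):

```yaml
# config_push.yaml — identical inventory, slower runner for change windows
inventory:
  plugin: SimpleInventory
  options:
    host_file: inventory/hosts.yaml
    group_file: inventory/groups.yaml
    defaults_file: inventory/defaults.yaml
runner:
  plugin: threaded
  options:
    num_workers: 10   # per the table: limit blast radius on config pushes
```

Then InitNornir(config_file="config_push.yaml") picks up the slower runner while the inventory files stay shared across jobs.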

SSH Connection Limits on IOS

Cisco IOS defaults to a maximum of 5 concurrent SSH sessions per device (line vty 0 4). If your Nornir runner targets a single device with more than 5 parallel sub-tasks, the 6th connection will be refused. For most use cases this is not an issue since each host gets one thread. It matters when you're running multiple Nornir runs simultaneously, or when devices share a jump host that has SSH multiplexing restrictions.

# Increase VTY lines on IOS to support more concurrent sessions
Router(config)# line vty 0 15
Router(config-line)#  transport input ssh
Router(config-line)#  login local
Router(config-line)#  exec-timeout 10 0
# Now supports 16 concurrent SSH sessions (vty 0-15)

# Check current VTY sessions
Router# show users
   Line       User       Host(s)         Idle       Location
*  3 vty 0    netauto    idle            00:00:02   10.0.0.10
   4 vty 1    netauto    idle            00:00:01   10.0.0.10

Practical Timing Benchmarks

| Scenario | Sequential | Nornir 20 workers | Nornir 50 workers |
| --- | --- | --- | --- |
| show version — 200 IOS devices | ~50 min | ~3 min | ~90 sec |
| show running-config — 200 devices | ~80 min | ~5 min | ~2.5 min |
| Config push (10 commands) — 200 devices | ~60 min | ~4 min | Not recommended |
| NAPALM get_bgp_neighbors — 50 routers | ~25 min | ~90 sec | ~45 sec |

The performance gain is not linear because the bottleneck shifts from "how long does each device take to respond" (sequential) to "how long does the slowest device in the batch take to respond" (parallel). With 200 devices and 20 workers, you run 10 batches of 20 — your total time is roughly 10 × (slowest_device_response_time).
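The batch model above turns into a back-of-envelope estimator with one line of arithmetic. A sketch, where the 15-second slowest-device figure is an assumption chosen to roughly match the show version row in the benchmark table:

```python
import math


def estimate_runtime(hosts: int, workers: int, slowest_response_s: float) -> float:
    """Rough parallel runtime: number of batches x slowest device response per batch."""
    batches = math.ceil(hosts / workers)
    return batches * slowest_response_s


# 200 devices, assuming ~15 s for the slowest device to answer "show version"
print(estimate_runtime(200, 1, 15.0))    # → 3000.0 (sequential, 50 min)
print(estimate_runtime(200, 20, 15.0))   # → 150.0 (10 batches of 20)
print(estimate_runtime(200, 50, 15.0))   # → 60.0 (4 batches of 50)
```

Real runs drift from the model because response times vary within a batch, but it is close enough to judge whether a job fits a maintenance window.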


Nornir's value proposition is straightforward: if you already know Python, you don't need to learn a new DSL. You write functions, call them with nr.run(), and process the results with the full Python standard library and ecosystem. The inventory YAML keeps device data clean and separate from task logic. The plugin system means you can swap Netmiko for Scrapli without touching your task code. For a network engineer who scripts regularly, Nornir is the tool that makes automation feel like engineering rather than configuration management.