Overview
A proper network lab is non-negotiable for anyone preparing for CCNP, PCNSE, or any senior network certification. You need to break things, rebuild them, and practice failure scenarios that a production environment will never tolerate. The question is which platform to build it on.
EVE-NG vs the alternatives:
EVE-NG (Emulated Virtual Environment — Next Generation) wins for most practicing engineers for a straightforward reason: it runs entirely in a browser. There is no client software to install, no GNS3 server daemon to babysit, and no Cisco subscription required for the platform itself. You spin up a topology, share the URL with a colleague, and they can open every console from their laptop. That matters when you are studying in a team or running lab demos for clients.
GNS3 is the longtime open-source standard and still excellent. It requires a GNS3 server process (local or remote), and the client–server model adds friction. For Cisco-heavy labs with Dynamips images it remains a solid choice, and its marketplace of pre-built appliances is mature.
Cisco CML (formerly VIRL) is Cisco's own simulation platform. It integrates beautifully with Cisco images and the Smart Licensing ecosystem, but it is subscription-gated, Cisco-centric, and does not support third-party images like Palo Alto or VeloCloud natively. If your world is 100% Cisco, CML is worth evaluating. If you run a mixed-vendor environment — which is most enterprise networks — EVE-NG is the better fit.
EVE-NG Community is free and supports virtually every image format: IOL, Dynamips, QEMU. The Community edition handles all the topologies covered in this post. EVE-NG Pro adds Docker node support, multi-tenancy, a more polished UI, and a few performance improvements — it is worth the cost if you are running a training lab for a team or using EVE-NG in a professional services context.
Server Requirements
EVE-NG runs VMs inside VMs. Whether you install it on bare metal or inside VMware ESXi, the underlying hardware must support nested virtualization — Intel VT-x (VMX) or AMD-V (SVM), ideally with EPT/RVI for second-level address translation. (VT-d and AMD-Vi are IOMMU features for device passthrough and are not required here.) Without hardware-assisted virtualization, QEMU nodes will either fail to start or run so slowly they are unusable.
Bare Metal vs Nested VMware — Why It Matters
EVE-NG itself is a Linux-based hypervisor. When you run it inside VMware Workstation or ESXi, you are running a hypervisor inside a hypervisor — nested virtualization. This works, but with caveats:
VMware Workstation / Fusion: Enable "Virtualize Intel VT-x/EPT" in the VM settings. Performance is acceptable for small labs (5–10 nodes). CPU overhead is noticeable for QEMU-based nodes.
VMware ESXi: Enable "Hardware virtualization" in the VM's CPU settings (expose hardware-assisted virtualization to the guest OS). ESXi is a Type-1 hypervisor, so the nesting penalty is lower than Workstation. This is the most common enterprise deployment.
Bare Metal: No nesting overhead. EVE-NG runs directly on the hardware. Best performance — a lab that runs 8 PA-VM nodes on nested ESXi might comfortably run 15 on bare metal. If you are building a dedicated lab server, bare metal Ubuntu is the right choice.
KVM Host: EVE-NG also runs well on a KVM host. Pass through VT-x to the EVE-NG guest VM and performance is comparable to ESXi.
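Whichever platform you pick, it is worth confirming from inside the EVE-NG guest that the virtualization extensions actually made it through the nesting layers. A minimal check using only standard Linux tools:

```shell
# Count logical CPUs that expose the vmx (Intel) or svm (AMD) flag,
# and confirm /dev/kvm exists so QEMU can use hardware acceleration.
VCPUS=$(grep -cE 'vmx|svm' /proc/cpuinfo)
echo "CPUs with virtualization flags: $VCPUS"
if [ -e /dev/kvm ]; then KVM=present; else KVM=absent; fi
echo "/dev/kvm: $KVM"
```

If the count is zero or /dev/kvm is absent, fix the hypervisor's CPU settings before loading any images.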
The single biggest performance bottleneck in large labs is storage I/O. Every node boots from a disk image. Put your image directory (/opt/unetlab/) on an SSD or NVMe. Running 10+ QEMU nodes off a spinning disk produces painfully slow boot times.
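A crude way to gauge whether the image volume will keep up is a dd sequential-write test. This sketch falls back to /tmp when run on a machine without /opt/unetlab, and 128 MB is an arbitrary test size:

```shell
# Sequential-write test: a healthy SSD/NVMe shows hundreds of MB/s,
# a spinning disk or saturated datastore shows tens of MB/s.
TARGET=/opt/unetlab
[ -d "$TARGET" ] || TARGET=/tmp
RESULT=$(dd if=/dev/zero of="$TARGET/.ddtest" bs=1M count=128 conv=fdatasync 2>&1 | tail -1)
rm -f "$TARGET/.ddtest"
echo "write speed: $RESULT"
```

conv=fdatasync forces the data to disk before dd reports, so the number reflects the device rather than the page cache.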
EVE-NG Installation
Option 1: OVA Import to VMware (Fastest)
Download the EVE-NG Community OVA from www.eve-ng.net. Import into VMware Workstation or ESXi via File → Deploy OVF Template. Before powering on, allocate RAM and CPUs generously and enable hardware virtualization passthrough.
# After first boot — initial setup wizard runs automatically
# Set root password, hostname, management IP
# Default credentials: root / eve
root@eve-ng:~# passwd root
# Change root password immediately
root@eve-ng:~# hostnamectl set-hostname eve-lab-01
root@eve-ng:~# nano /etc/network/interfaces
# Configure eth0 with static IP for management
root@eve-ng:~# systemctl restart networking
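A minimal static example of that interfaces file. Note that EVE-NG places the management IP on the pnet0 bridge with eth0 enslaved to it; every address below is a placeholder for your own network:

```
# /etc/network/interfaces - management IP on the pnet0 bridge
auto eth0
iface eth0 inet manual

auto pnet0
iface pnet0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.254
    dns-nameservers 1.1.1.1
    bridge_ports eth0
    bridge_stp off
```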
Option 2: Ubuntu + EVE-NG Package (Most Control)
Start with a clean Ubuntu 20.04 LTS or 22.04 LTS server installation. Minimal install is fine — no desktop environment needed.
# Update base system
root@ubuntu:~# apt update && apt upgrade -y
# Add EVE-NG repository and install
root@ubuntu:~# wget -O - https://www.eve-ng.net/repo/install-eve.sh | bash -i
# Reboot — EVE-NG will configure itself on next boot
root@ubuntu:~# reboot
# After reboot: access web UI at http://<mgmt-ip>/
# Default admin credentials: admin / eve
Post-Install Configuration
# 1. Change web UI admin password
root@eve-ng:~# /opt/unetlab/wrappers/unl_wrapper -a setpassword -u admin -p YourNewPassword
# 2. Set NTP server
root@eve-ng:~# nano /etc/ntp.conf
# Add: server pool.ntp.org iburst
root@eve-ng:~# systemctl enable ntp && systemctl start ntp
# 3. Verify virtualization support — must return a non-zero count (vmx or svm flags)
root@eve-ng:~# grep -cE 'vmx|svm' /proc/cpuinfo
# 4. Check EVE-NG version and license status
root@eve-ng:~# dpkg -l | grep eve-ng
# Community edition requires no license file — Pro requires license activation
Image Management
EVE-NG supports three image formats: IOL (IOS on Linux), Dynamips, and QEMU. Each has a different directory and different permission requirements. The golden rule: always run fixpermissions after adding any image.
Cisco IOS — IOL Format
IOL (IOS on Linux) images are the most efficient for routing-focused labs. They boot in seconds and use very little RAM. You need a valid Cisco IOL license or access through Cisco DevNet.
# IOL image directory
root@eve-ng:~# ls /opt/unetlab/addons/iol/bin/
# Upload images via SCP or SFTP from your local machine
root@eve-ng:~# scp i86bi_linux-adventerprisek9-ms.156-2.T.bin root@eve-ip:/opt/unetlab/addons/iol/bin/
# Set correct permissions and ownership
root@eve-ng:~# chmod 755 /opt/unetlab/addons/iol/bin/*.bin
root@eve-ng:~# chown root:root /opt/unetlab/addons/iol/bin/*.bin
# Fix all permissions system-wide — run after every image addition
root@eve-ng:~# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions
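One more IOL prerequisite: the IOL wrapper expects a license file at /opt/unetlab/addons/iol/bin/iourc, keyed to the server's hostname. The format is shown below with a placeholder key and the hostname from the earlier setup example; the actual key must come from a legitimate Cisco IOL license:

```
# /opt/unetlab/addons/iol/bin/iourc - the hostname on the left must
# match the EVE-NG hostname exactly; the key below is a placeholder
[license]
eve-lab-01 = 0123456789abcdef;
```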
IOL images come in two flavors: L3 router images (e.g., i86bi_linux-adventerprisek9) and L2 switch images (i86bi_linux_l2) with STP/VTP/VLAN support. The L2 image is essential for any topology that exercises spanning tree, VTP, or inter-VLAN routing at Layer 2. Note that IOSv, IOSvL2, CSR1000v, ASAv, NX-OSv, and XRv are QEMU images, not IOL — they belong under /opt/unetlab/addons/qemu/, not the IOL directory.
Cisco NX-OS (Nexus Simulation)
NX-OSv 9000 is the software-only Nexus 9000 simulator. It is the correct image for CCNP Data Center and CCIE DC study — it supports VXLAN, EVPN, and vPC natively, and its control plane closely mirrors physical Nexus 9000 behavior. (Hardware-dependent behavior such as ASIC-level QoS and TCAM-driven ACL scale does differ from real switches.)
# NX-OSv 9000 requires a dedicated qemu directory — naming is critical
root@eve-ng:~# mkdir -p /opt/unetlab/addons/qemu/nxosv9k-9.3.10/
# Upload the disk image — rename to disk0.qcow2 exactly
root@eve-ng:~# cp nxosv9k.9.3.10.qcow2 /opt/unetlab/addons/qemu/nxosv9k-9.3.10/disk0.qcow2
root@eve-ng:~# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions
# Minimum RAM per NX-OSv 9000 node: 4096 MB — set this in node properties
# First boot time: 8-12 minutes — this is completely normal
# First login: abort POAP when prompted, then set the admin password
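Before copying any qcow2 into place, a quick integrity check catches truncated downloads — a common cause of nodes stuck in a boot loop. The filename is the example from above; qemu-img ships in the qemu-utils package, and the sketch degrades gracefully where either is missing:

```shell
# Verify the qcow2 header and internal consistency before installing it
IMAGE=nxosv9k.9.3.10.qcow2
if command -v qemu-img >/dev/null 2>&1 && [ -f "$IMAGE" ]; then
  OUT=$(qemu-img info "$IMAGE" && qemu-img check "$IMAGE")
else
  OUT="skipped: qemu-img or $IMAGE not available here"
fi
echo "$OUT"
```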
Palo Alto VM-Series (PA-VM)
The PA-VM is the most resource-intensive node you will run in EVE-NG. It requires a proper qcow2 image obtained from Palo Alto's support portal (requires a valid support account or evaluation registration at support.paloalto.com).
# Create the PA-VM image directory — version in directory name must match image
root@eve-ng:~# mkdir -p /opt/unetlab/addons/qemu/paloalto-10.2.4/
# Upload and rename the qcow2 image to virtioa.qcow2
root@eve-ng:~# cp PA-VM-KVM-10.2.4.qcow2 /opt/unetlab/addons/qemu/paloalto-10.2.4/virtioa.qcow2
root@eve-ng:~# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions
# Node properties to set in EVE-NG UI:
# RAM: 4096 MB minimum — 6144 MB recommended for full feature use
# CPU: 2 vCPUs minimum
# NICs: e0 = management, e1+ = data plane interfaces
# First boot: 5-8 minutes — the VM goes silent then outputs login prompt
# Default management IP: 192.168.1.1 — access via https://192.168.1.1
# Default credentials: admin / admin — change on first login
PA-VM lab realities you need to know: The first EVE-NG interface on a PA-VM node is the PAN-OS management interface (labeled Management in the GUI) — it is always isolated from the data plane. Data plane interfaces start at ethernet1/1, mapped from the node's second NIC onward. When building HA, the HA1 link (control plane sync) and HA2 link (data plane sync) each need dedicated interfaces — plan your NIC count accordingly. For a full PCNSE HA lab, each PA-VM needs at minimum four interfaces: management, HA1, HA2, and at least one data interface.
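Once both peers are built, HA health is quickest to confirm from the CLI. These are standard PAN-OS operational commands; the hostname matches this lab's naming:

```
admin@PA-Active> show high-availability state
admin@PA-Active> show high-availability all
admin@PA-Active> show high-availability link-monitoring
```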
VeloCloud VCE (SD-WAN Edge)
VeloCloud Edge runs as a QEMU VM in EVE-NG. Unlike Cisco and Palo Alto, VeloCloud requires a live connection to a VeloCloud Orchestrator (VCO) to fully activate and push SD-WAN policy. For lab use, you can use VMware's lab VCO (available through partner programs) or connect to a trial VCO instance.
# VeloCloud Edge image directory setup
root@eve-ng:~# mkdir -p /opt/unetlab/addons/qemu/velocloud-4.5.0/
root@eve-ng:~# cp VCE-VCMP-latest.qcow2 /opt/unetlab/addons/qemu/velocloud-4.5.0/virtioa.qcow2
root@eve-ng:~# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions
# VCE RAM: 1024 MB minimum, 2048 MB recommended
# VCE requires VCO connectivity for ZTP (Zero Touch Provisioning)
# On first boot, VCE broadcasts for VCO — configure activation key via console
# Activation key is obtained from the VCO portal under Edge provisioning
EVE-NG Lab Architecture Diagram
Building Your First Lab Topology
CCNP Enterprise Study Lab
The CCNP Enterprise (ENCOR 350-401) exam covers OSPF, BGP, STP, QoS, SD-WAN, and wireless. A complete study topology needs layer-by-layer representation: WAN/core, distribution, and access with hosts.
Creating a new lab in the EVE-NG web UI — step by step:
- Log in at http://<eve-ip>/ with the admin credentials
- Click Add new lab in the left panel — name it (e.g., CCNP-ENCOR-Study)
- Inside the lab canvas, right-click → Add an object → Node
- Search for your image type (e.g., IOSv, IOSvL2, CSR1000v)
- Set node name, RAM allocation, number of Ethernet adapters, then click Add
- Node appears on the canvas — drag to position it
- To connect nodes: hover over a node until the port dots appear → click a port → drag the cable to another node's port → click the destination port
- Right-click any node → Start to boot it individually, or use the toolbar Start all nodes
- Right-click a running node → Console to open the terminal in your browser
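The same steps can also be driven through the EVE-NG HTTP API, which is handy for rebuilding labs repeatably. The endpoints below follow the community API documentation, but treat this as a sketch — the lab name, credentials, and cookie path are examples:

```
user@laptop:~$ curl -sc /tmp/eve.cookie -H 'Content-Type: application/json' \
      -d '{"username":"admin","password":"eve","html5":"-1"}' \
      http://<eve-ip>/api/auth/login
user@laptop:~$ curl -sb /tmp/eve.cookie -H 'Content-Type: application/json' \
      -d '{"path":"/","name":"CCNP-ENCOR-Study","version":"1"}' \
      http://<eve-ip>/api/labs
```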
Reference CCNP topology — complete node list:
PCNSE Study Lab (Palo Alto Networks)
The PCNSE (Palo Alto Networks Certified Network Security Engineer) exam tests Security Policies, NAT, App-ID, User-ID, GlobalProtect, HA, and Panorama. Your lab must cover all of these with realistic traffic flows.
Minimum PCNSE lab — complete node list:
Key PCNSE scenarios to practice in this lab:
- Security policy rules: allow inside-to-outside HTTP/HTTPS by App-ID, block social media by application category
- NAT: dynamic IP-and-port (DIPP) source NAT for outbound, destination NAT for inbound published services
- App-ID: observe how PAN-OS re-classifies port 80 traffic as the actual application (YouTube, Facebook, etc.)
- User-ID: configure Windows Security Log monitoring agent to map IP addresses to usernames
- GlobalProtect: configure portal and gateway on PA-Active, test VPN tunnel from Mgmt PC
- HA active/passive failover: use request high-availability state suspend on the active unit, verify traffic continues via the standby
- Panorama: push a policy change from Panorama to both firewalls, verify commit propagation and log forwarding
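The first two scenarios translate directly to CLI if you prefer configuring outside the GUI. A sketch in PAN-OS set syntax, assuming zone names inside/outside from this lab (rule names are arbitrary):

```
admin@PA-Active> configure
admin@PA-Active# set rulebase security rules Allow-Web from inside to outside source any destination any application [ web-browsing ssl ] service application-default action allow
admin@PA-Active# set rulebase security rules Block-Social from inside to outside source any destination any application facebook-base service application-default action deny
admin@PA-Active# commit
```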
Network Bridges — Connecting the Lab to a Real Network
One of EVE-NG's most powerful features is its bridge system. A Cloud object in your topology creates a Linux bridge that passes traffic to a physical NIC on the EVE-NG host. Lab VMs connected to a Cloud node can reach real networks.
Management bridge — pnet0: Always maps to eth0 — your EVE-NG management interface. Nodes connected to pnet0 reach your management network. Useful for reaching lab nodes via real IP for Ansible or API testing, but mind the routing implications.
Cloud bridges — pnet1 through pnet9: Each maps to an additional physical NIC (eth1 through eth9). Connect a physical switch port to eth1, add a Cloud(1) node to your topology, and any lab node connected to Cloud(1) has real physical LAN access — and through it, real internet access if routed.
# View current bridge configuration on EVE-NG host
root@eve-ng:~# brctl show
# Typical output:
# bridge name bridge id STP enabled interfaces
# pnet0 8000.000c29a1b2c3 no eth0
# pnet1 8000.000c29a1b2c4 no eth1
# In EVE-NG UI: right-click canvas → Add object → Network
# Select "Management(Cloud0)" for pnet0 or "Cloud1"–"Cloud9" for pnet1–pnet9
# Verify a lab node has joined the bridge after connecting
root@eve-ng:~# brctl showmacs pnet1
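Stock EVE-NG installs usually pre-create the pnet1–pnet9 stanzas; if brctl show does not list pnet1, the mapping lives in the same /etc/network/interfaces file. A sketch with assumed interface names:

```
# /etc/network/interfaces - second physical NIC mapped to Cloud1 (pnet1)
auto eth1
iface eth1 inet manual

auto pnet1
iface pnet1 inet manual
    bridge_ports eth1
    bridge_stp off
```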
Real-world use cases for cloud bridges:
- Lab router pulls real IOS images from Cisco CCO (requires valid contract)
- BGP peering test against a real router on your physical lab rack
- Wireshark packet capture from your management PC against pnet1 traffic
- Hybrid topology: EVE-NG virtual nodes + physical switches sharing the same VLAN via pnet bridge
Console Access
Every node type uses a different console protocol. Understanding this prevents wasted time when you cannot connect.
External console client integration: EVE-NG can export a telnet shortcut pack (.zip) with pre-configured session files for SecureCRT or PuTTY. For Cisco nodes, connect directly to <eve-ip>:327xx via Telnet from any terminal client. Many engineers prefer SecureCRT's tabbed session management and scrollback over the browser console for complex multi-node troubleshooting.
# Find the console port for a running node via the EVE-NG API
# (the API needs an authenticated session cookie; the UI method below is simpler)
root@eve-ng:~# curl -s http://localhost/api/labs/CCNP-Study.unl/nodes | python3 -m json.tool | grep -A3 console
# Or check via UI: right-click node → Info → Console port shown in popup
# Direct Telnet from your PC to a Cisco node (example port 32768)
user@laptop:~$ telnet 192.168.1.10 32768
# Direct VNC from your PC to a PA-VM node (example port 5901)
# Note: many VNC clients use :: for a raw TCP port (a single : means display number)
user@laptop:~$ vncviewer 192.168.1.10::5901
Performance Tips
After running EVE-NG labs across multiple server platforms, these optimizations are what separate a smooth lab session from a constant battle with frozen nodes.
RAM allocation — be precise, not generous: NX-OSv 9000 genuinely needs 4–6 GB. PA-VM needs 4 GB minimum (6 GB if running App-ID, threat prevention, and URL filtering simultaneously). Cisco IOSv is remarkably lean: 256 MB is sufficient, 512 MB comfortable. Do not over-allocate to IOSv nodes — that RAM cannot be used by the NX-OS nodes that actually need it.
CPU — idle-pc for Dynamips nodes: If running Dynamips images (older IOS formats: 7200, 3745, 2691), enable idle-pc. Without it, each Dynamips node spins a core at 100% CPU even at idle. Right-click a fully booted node → Idle-PC → let EVE-NG calculate the value → apply it. CPU usage drops from 100% to under 5%.
Snapshot your configs before breaking changes:
# Before any potentially destructive lab exercise — save configs
Core-R1# copy running-config startup-config
Core-R1# copy running-config tftp://192.168.1.10/backups/Core-R1-pre-change.cfg
# For Palo Alto — commit and export named config before testing
admin@PA-Active> save named-config pre-lab-change
admin@PA-Active> scp export configuration from pre-lab-change.xml to user@192.168.1.10:backups/
# EVE-NG stores startup-config in the .unl lab file for stopped nodes
# Export the lab via File → Export Lab before a session to capture all startup configs
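Beyond per-node configs, the lab definitions themselves are worth archiving off-box. A sketch that assumes the default /opt/unetlab/labs path and falls back to a scratch directory so the commands also run on a non-EVE-NG host:

```shell
# Archive all .unl lab files for off-box backup
LABS=/opt/unetlab/labs
# fallback so the sketch also runs on a non-EVE-NG host
[ -d "$LABS" ] || { LABS=$(mktemp -d); touch "$LABS/demo.unl"; }
ARCHIVE=/tmp/eve-labs-$(date +%F).tar.gz
tar czf "$ARCHIVE" -C "$(dirname "$LABS")" "$(basename "$LABS")"
echo "wrote $ARCHIVE"
```

Copy the archive off the server — a backup that lives only on the lab box is not a backup.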
Storage layout — the most impactful optimization: Keep /opt/unetlab/ on an SSD or NVMe. Boot time for a 10-node topology on SSD vs spinning disk is typically 3–5x faster. If your OS is on a spinner, mount a dedicated SSD volume to /opt/unetlab/addons/ at minimum.
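A sketch of the corresponding /etc/fstab entry for a dedicated SSD volume; the UUID is a placeholder (get the real one from blkid):

```
# /etc/fstab - dedicated SSD/NVMe for the EVE-NG image directory
# UUID below is a placeholder - substitute the output of blkid
UUID=replace-with-your-uuid  /opt/unetlab/addons  ext4  defaults,noatime  0  2
```

noatime skips access-time writes, which helps when dozens of qcow2 files are being read concurrently.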
Lab hygiene: Delete labs you are not actively using. Running nodes in abandoned labs consume RAM and CPU continuously. EVE-NG has no automatic node timeout — every node runs until you stop it or the server reboots. Check active node count with:
# Check all running nodes across all labs
root@eve-ng:~# ps aux | grep qemu | grep -v grep | wc -l
# List open lab files with active processes
root@eve-ng:~# ls /opt/unetlab/tmp/0/
# Free memory summary
root@eve-ng:~# free -h
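To see where the RAM is actually going, one awk pass over ps gives a summary: it counts QEMU processes and totals their resident memory (column 6 of ps aux is RSS in KB):

```shell
# Count QEMU node processes and total their resident memory
SUMMARY=$(ps aux | awk '/qemu/ && !/awk/ {sum += $6; n++} END {printf "%d qemu processes, %.1f GB resident", n, sum/1048576}')
echo "$SUMMARY"
```

If the total approaches physical RAM, stop idle labs before starting new nodes — swapping QEMU guests is far worse than a slow boot.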
Image Legality Note
This is not a technicality — it matters professionally and ethically. Only use images you have the legal right to run:
Cisco: Images extracted from physical devices you own are licensed to you for use on those devices — using them in a VM emulator is a grey area in Cisco's EULA. The safest and clearest path: CSR1000v has a free trial available via Cisco DevNet. IOSvL2 is available to CCNP-level Cisco Learning Network subscribers. NX-OSv 9000 is available via the Cisco DevNet Always-On sandboxes and as a downloadable image with a DevNet account. Cisco VIRL Personal Edition also provided a legitimate image bundle — its successor CML provides the same.
Palo Alto: PA-VM evaluation images are available at support.paloalto.com after registering a free account. You receive a 60-day evaluation license that enables all features including Threat Prevention and URL Filtering. Panorama VM is similarly available under eval terms.
VeloCloud/VMware: Available through the VMware SD-WAN trial program or VMware Partner Network access. Contact your VMware account team for lab access.
Do not download images from unofficial forums, torrent sites, or shared drives. Beyond the legal exposure, unofficial images are frequently modified, may contain backdoors, and often have corrupt disk structures that waste hours of debugging time. The legitimate paths are well-documented and most are free for lab use.
Recommended Study Workflow
Running an EVE-NG lab without a study plan is less productive than it looks. A structured approach compounds your learning:
- Read the concept first — understand OSPF LSA types and area boundaries before configuring them
- Build the minimal topology — start with 2 routers, not 8; add nodes as you need them
- Configure manually, by hand — no copy-paste initially; type every command to build muscle memory
- Break it intentionally — shut interfaces, inject incorrect routes, force HA failover, test what the protocol actually does under failure
- Verify with show commands — document what correct output looks like so you recognize incorrect output instantly
- Rebuild from memory — close the guide, wipe the node config, reconfigure from scratch until it works
- Add complexity — extend the topology once the core scenario is solid; add redundancy, add another protocol layer
The lab is a tool, not the goal. The goal is to understand the protocol or feature well enough to configure it under pressure, troubleshoot it from first principles without a guide, and explain it clearly to a non-technical stakeholder. EVE-NG gives you the environment — deliberate practice is your job.