Overview
Cisco Nexus switches support two complementary virtualization technologies that are commonly deployed together in enterprise data centers. VDC (Virtual Device Context) partitions a single physical Nexus 7000/7700 chassis into up to four independent virtual switches (more with a Supervisor 2E module), each with its own configuration namespace, routing table, management plane, and failure domain. vPC (Virtual Port Channel) allows two physical switches to present themselves as a single logical switch to downstream devices, eliminating Spanning Tree blocked ports and delivering active-active uplink redundancy.
Understanding how these two technologies interact, and how to troubleshoot them when they break, is essential for anyone operating a Nexus-based data center fabric.
Architecture Topology
The diagram below shows a full N7K pair with four VDCs per chassis, the vPC peer-link and keepalive paths, and downstream N5K access switches dual-homed via vPC port-channels, with a server attached below.
Part 1: VDC Architecture and Design
What is a VDC?
A VDC is a logical partition of a physical Nexus 7000/7700 chassis. Each VDC operates as a completely independent virtual switch with its own configuration namespace, management plane (separate SSH, separate logs, separate show outputs), routing table, and failure domain. A software crash or misconfiguration in one VDC cannot bring down another.
VDC-1 (the Admin or Default VDC) is always present, cannot be deleted, and owns the supervisor modules. It can create, suspend, and delete other VDCs, and is the only VDC with access to chassis-level hardware operations.
What is isolated per VDC:
- Interface assignments (each physical port belongs to exactly one VDC)
- VLAN database, VRFs, and routing table
- Feature licensing and feature enables
- Spanning Tree domain
- SNMP, syslog, and management access
Platform support: VDCs are supported on Nexus 7000 and 7700 only. Not available on 5000, 6000, or 9000 series.
Standard 4-VDC Layout
A common layout uses VDC-1 (Admin) for chassis management, VDC-Core for Layer 3 routing, VDC-Access for server-facing Layer 2 and vPC, and VDC-FW for firewall insertion. The key design principle: a misconfiguration in VDC-FW (e.g., a bad ACL) cannot affect VDC-Core's BGP sessions, because each VDC is its own failure boundary.
Creating and Allocating VDCs
! ===== From VDC-1 (Admin VDC only) =====
N7K-1(config)# vdc VDC-Core id 2
N7K-1(config-vdc)# allocate interface ethernet 2/1-24
N7K-1(config-vdc)# limit-resource vlan minimum 16 maximum 4094
N7K-1(config-vdc)# limit-resource monitor-session minimum 0 maximum 2
N7K-1(config)# vdc VDC-Access id 3
N7K-1(config-vdc)# allocate interface ethernet 3/1-48
N7K-1(config-vdc)# limit-resource vlan minimum 16 maximum 4094
# Switch into a VDC to configure it
N7K-1# switchto vdc VDC-Access
N7K-1-VDC-Access# configure terminal
# All commands here are isolated to VDC-Access
# Return to Admin VDC
N7K-1-VDC-Access# switchback
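For larger deployments the allocation above is often generated rather than typed by hand. A minimal Python sketch (a hypothetical helper, not a Cisco tool) that renders the same CLI from a table of VDC definitions:

```python
# Hypothetical helper (not a Cisco tool): render the NX-OS VDC commands
# shown above from a table of VDC definitions.

def render_vdc_config(vdcs):
    """Return NX-OS config lines for a list of VDC definitions."""
    lines = []
    for v in vdcs:
        lines.append(f"vdc {v['name']} id {v['id']}")
        lines.append(f"  allocate interface ethernet {v['interfaces']}")
        for res, (lo, hi) in v.get("limits", {}).items():
            lines.append(f"  limit-resource {res} minimum {lo} maximum {hi}")
    return lines

vdcs = [
    {"name": "VDC-Core", "id": 2, "interfaces": "2/1-24",
     "limits": {"vlan": (16, 4094), "monitor-session": (0, 2)}},
    {"name": "VDC-Access", "id": 3, "interfaces": "3/1-48",
     "limits": {"vlan": (16, 4094)}},
]

print("\n".join(render_vdc_config(vdcs)))
```

Generating the config from one source of truth also gives you the documented interface-to-VDC mapping that the design checklist at the end of this article calls for.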
Part 2: vPC Architecture and Design
vPC Component Roles
vPC requires three components, configured consistently on both peer switches: a shared vPC domain ID, the vPC peer-link, and the peer-keepalive link.
Why the keepalive path matters: If the keepalive link traverses the peer-link, then a peer-link failure causes both switches to lose the keepalive simultaneously. Both will assume the other is dead and claim the primary role, producing a dual-active split-brain in which both peers forward independently with duplicate IPs and MACs.
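The failure logic above can be sketched as a toy model (illustrative only, not NX-OS's actual state machine): a peer escalates to primary only when it sees no sign of its neighbor on either path, so a keepalive that shares the peer-link turns one failure into two:

```python
# Toy model (illustrative, not NX-OS's actual state machine): a vPC peer
# escalates to primary only when it sees no sign of its neighbor on
# EITHER the peer-link or the keepalive path.

def peer_claims_primary(peer_link_up, keepalive_up):
    return not peer_link_up and not keepalive_up

def dual_active_after_peer_link_failure(keepalive_shares_peer_link):
    peer_link_up = False                      # the failure under discussion
    # If the keepalive rides the peer-link, it dies with it.
    keepalive_up = not keepalive_shares_peer_link
    # Both peers observe the same symmetric link states in this model.
    return peer_claims_primary(peer_link_up, keepalive_up)

print(dual_active_after_peer_link_failure(False))  # False: OOB keepalive survives
print(dual_active_after_peer_link_failure(True))   # True: split-brain
```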
vPC Failover Topology
The second diagram shows what happens when the N7K-1 peer-link fails: the secondary suspends its vPC member ports, all traffic reroutes through N7K-2, and the keepalive continues over the management network to confirm the peer is still alive.
vPC Configuration
! ===== N7K-1 VDC-Access: Full vPC Config =====
VDC-Access-1(config)# feature vpc
VDC-Access-1(config)# feature lacp
# Step 1: vPC domain
VDC-Access-1(config)# vpc domain 1
VDC-Access-1(config-vpc-domain)# peer-keepalive destination 10.255.1.2 source 10.255.1.1 vrf management
VDC-Access-1(config-vpc-domain)# peer-gateway
# peer-gateway: each peer can route on behalf of the other's MAC; prevents HSRP/VRRP issues
VDC-Access-1(config-vpc-domain)# auto-recovery
VDC-Access-1(config-vpc-domain)# auto-recovery reload-delay 240
VDC-Access-1(config-vpc-domain)# ip arp synchronize
VDC-Access-1(config-vpc-domain)# ipv6 nd synchronize
VDC-Access-1(config-vpc-domain)# delay restore 150
# delay restore: wait 150s after reboot before restoring vPC ports (STP convergence)
# Step 2: Peer-link port-channel
VDC-Access-1(config)# interface port-channel 1
VDC-Access-1(config-if)# switchport mode trunk
VDC-Access-1(config-if)# switchport trunk allowed vlan all
VDC-Access-1(config-if)# spanning-tree port type network
VDC-Access-1(config-if)# vpc peer-link
# Step 3: Add physical interfaces to peer-link
VDC-Access-1(config)# interface ethernet 3/1-2
VDC-Access-1(config-if-range)# channel-group 1 mode active
VDC-Access-1(config-if-range)# no shutdown
# Step 4: vPC member port-channel to downstream switch
VDC-Access-1(config)# interface port-channel 10
VDC-Access-1(config-if)# switchport mode trunk
VDC-Access-1(config-if)# switchport trunk allowed vlan 100,200,300
VDC-Access-1(config-if)# vpc 10
# vpc number MUST match on both peers for this port-channel
VDC-Access-1(config)# interface ethernet 3/10
VDC-Access-1(config-if)# channel-group 10 mode active
Critical vPC Best Practices
Part 3: VDC + vPC Interaction
When deploying vPC inside a VDC (e.g., VDC-Access), the vPC domain is scoped to that VDC only. The peer-link and member interfaces must be allocated to the same VDC on both chassis. The keepalive typically runs in the management VRF over mgmt0; each VDC has its own mgmt0 IP address on the shared out-of-band management port.
# From Admin VDC: allocate peer-link interfaces to VDC-Access
N7K-1(config)# vdc VDC-Access
N7K-1(config-vdc)# allocate interface ethernet 3/1-4
# Eth3/1-2 -> vPC peer-link (Po1); Eth3/3-4 -> vPC member ports
# Keepalive config in VDC-Access uses the management VRF (VDC-Access's own mgmt0 IP)
VDC-Access-1(config)# vpc domain 1
VDC-Access-1(config-vpc-domain)# peer-keepalive destination 10.255.1.2 source 10.255.1.1 vrf management
Part 4: Troubleshooting VDC Issues
VDC Not Coming Up / Interfaces Missing
# Check all VDC states from Admin VDC
N7K-1# show vdc
# Healthy: all VDCs show state "active"
# "suspended" โ VDC was manually suspended or boot-failed
# "failed" โ software fault in the VDC, check logs
# "initializing" โ VDC still booting
# Detailed VDC status including resource allocation
N7K-1# show vdc VDC-Access detail
# Check interface-to-VDC mapping
N7K-1# show vdc membership
# If an expected interface shows in "Admin VDC": re-allocate it
# Move an interface to a VDC (resets interface config)
N7K-1(config)# vdc VDC-Access
N7K-1(config-vdc)# allocate interface ethernet 3/5
# Suspend and resume a VDC for maintenance
N7K-1(config)# vdc VDC-Access suspend
N7K-1(config)# no vdc VDC-Access suspend
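The state check above is easy to script against saved output. A minimal sketch, assuming a simplified tabular layout for show vdc (real output has more columns):

```python
# Parse a simplified (assumed) "show vdc" table and flag any VDC whose
# state is not "active". Real NX-OS output has additional columns.
SAMPLE = """\
vdc_id  vdc_name     state
1       N7K-1        active
2       VDC-Core     active
3       VDC-Access   suspended
"""

def unhealthy_vdcs(show_vdc_output):
    """Return (name, state) for every VDC not in the 'active' state."""
    bad = []
    for line in show_vdc_output.splitlines()[1:]:  # skip header row
        parts = line.split()
        if len(parts) >= 3 and parts[2] != "active":
            bad.append((parts[1], parts[2]))
    return bad

print(unhealthy_vdcs(SAMPLE))  # [('VDC-Access', 'suspended')]
```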
Part 5: Troubleshooting vPC Issues
Scenario A: vPC Peer-Link Down
Symptom: show vpc shows peer-link status: down. This is a critical failure: the secondary peer suspends all vPC member ports until the peer-link recovers.
# Step 1: Check overall vPC domain health
VDC-Access-1# show vpc
# Healthy output shows:
# vPC domain id : 1
# Peer status : peer adjacency formed ok
# vPC keep-alive status : peer is alive
# Dual-Active Detection : No dual-active event detected
# Peer-link status : Up
# Step 2: Check peer-link port-channel members
VDC-Access-1# show port-channel summary | include Po1
# Members must show "P" (in-bundle); "I" (individual) means LACP negotiation failed
# Step 3: Check physical interfaces
VDC-Access-1# show interface ethernet 3/1 | include line|error|CRC
VDC-Access-1# show interface ethernet 3/2 | include line|error|CRC
# Must be up/up with no CRC errors
# Step 4: Verify LACP neighbor on peer-link
VDC-Access-1# show lacp neighbor interface port-channel 1
# Should show N7K-2 VDC-Access as LACP neighbor
# Step 5: Check consistency parameters (Type-1 mismatches keep vPCs from coming up)
VDC-Access-1# show vpc consistency-parameters global
# Every parameter must show "Pass"
# Common failures: STP mode mismatch, VLAN trunk mismatch, MTU mismatch
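Step 5 amounts to a field-by-field diff between the two peers. A minimal sketch over already-parsed parameter values (the example parameters mirror the common failures listed above):

```python
# Diff vPC global consistency parameters between two peers. Any mismatched
# value corresponds to a "Failed" row in
# "show vpc consistency-parameters global".

def consistency_failures(peer1, peer2):
    """Return {param: (peer1_value, peer2_value)} for every mismatch."""
    failures = {}
    for key in sorted(set(peer1) | set(peer2)):
        v1, v2 = peer1.get(key), peer2.get(key)
        if v1 != v2:
            failures[key] = (v1, v2)
    return failures

# Example values; parameter names mirror the common failure causes above.
n7k1 = {"STP Mode": "Rapid-PVST", "Allowed VLANs": "100,200,300", "MTU": 9216}
n7k2 = {"STP Mode": "Rapid-PVST", "Allowed VLANs": "100,200", "MTU": 1500}

for param, (a, b) in consistency_failures(n7k1, n7k2).items():
    print(f"FAIL {param}: peer1={a} peer2={b}")
```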
Scenario B: Keepalive Failing
Symptom: show vpc shows vPC keep-alive status: peer is not reachable. High severity: if the peer-link also fails while the keepalive is down, a dual-active split-brain occurs.
# Step 1: Check keepalive configuration and status
VDC-Access-1# show vpc peer-keepalive
# Shows: destination IP, source IP, VRF, last-send and last-rcv timestamps
# Last-rcv timestamp must be recent; if stale, the keepalive is broken
# Step 2: Test reachability from this switch
VDC-Access-1# ping 10.255.1.2 vrf management count 5
# If ping fails: missing route, wrong VRF, or mgmt interface down
# Step 3: Check management interface (lives in Admin VDC)
N7K-1# show interface mgmt0
# Must be up/up with correct IP 10.255.1.1
# Step 4: Verify VRF name (case sensitive)
VDC-Access-1# show running-config vpc | include keepalive
# Confirm: peer-keepalive destination X.X.X.X source Y.Y.Y.Y vrf management
# "management" vs "Management" โ mismatch will silently fail
# Step 5: Check for ACLs blocking UDP 3200
N7K-1# show ip access-lists | include 3200
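Staleness in Step 1 is a simple timestamp comparison. A minimal sketch, assuming the last-rcv timestamp has been parsed to seconds-since-epoch (the 5 s threshold reflects the commonly cited NX-OS keepalive timeout default):

```python
# Flag a stale keepalive from the last-rcv timestamp printed by
# "show vpc peer-keepalive", assumed already parsed to seconds-since-epoch.
# The 5 s threshold mirrors the commonly cited NX-OS timeout default.

STALE_AFTER_S = 5

def keepalive_stale(now_s, last_rcv_s, stale_after_s=STALE_AFTER_S):
    """True if no keepalive has been received within the threshold."""
    return (now_s - last_rcv_s) > stale_after_s

print(keepalive_stale(now_s=1000.0, last_rcv_s=999.2))  # False: fresh
print(keepalive_stale(now_s=1000.0, last_rcv_s=940.0))  # True: keepalive broken
```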
Scenario C: vPC Member Port Suspended
Symptom: A vPC port-channel is up on one peer but show vpc on the other shows it as suspended.
# Step 1: List all vPC port-channels and status
VDC-Access-1# show vpc brief
# Look for "suspended" in the vPC port-channel column
# Step 2: Get the suspension reason for a specific vPC
VDC-Access-1# show vpc 10
# Suspension reasons:
# vpc-id-not-matching : vPC number differs between peers; ensure both use vpc 10
# peer-link-down : secondary suspends all vPC ports when peer-link is down
# no-peer : cannot reach peer via peer-link OR keepalive
# config-not-synced : trunk VLANs, MTU, or STP params differ between peers
# Step 3: Check per-vPC consistency
VDC-Access-1# show vpc consistency-parameters vpc 10
# Every item must show "Pass"; match any failures on both peers exactly
# Step 4: Verify vPC number matches on both peers
VDC-Access-1# show running-config interface port-channel 10 | include vpc
VDC-Access-2# show running-config interface port-channel 10 | include vpc
# Must show: vpc 10 on both
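Step 4's comparison can be mechanized. A minimal sketch that pulls the vpc number out of each peer's port-channel config (the sample configs are illustrative):

```python
import re

# Extract "vpc <n>" from a port-channel's running config and compare peers.
# Sample configs are illustrative; peer 2 has the classic mismatch.
PEER1_PO10 = """\
interface port-channel10
  switchport mode trunk
  vpc 10
"""
PEER2_PO10 = """\
interface port-channel10
  switchport mode trunk
  vpc 20
"""

def vpc_id(config_text):
    """Return the vpc number, or None ("vpc peer-link" will not match)."""
    m = re.search(r"^\s*vpc (\d+)\s*$", config_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def vpc_ids_match(cfg1, cfg2):
    return vpc_id(cfg1) is not None and vpc_id(cfg1) == vpc_id(cfg2)

print(vpc_ids_match(PEER1_PO10, PEER2_PO10))  # False -> vpc-id-not-matching
```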
Scenario D: Dual-Active / Split-Brain
Symptom: Both peers claim the primary role. All vPC port-channels on the secondary are suspended. Both peers forward independently with the same IPs and MACs, black-holing and duplicating traffic.
# Step 1: Check current vPC role on both peers
VDC-Access-1# show vpc role
# "vPC Role: secondary, operational primary" = this switch took over
# Both showing "primary" or "operational primary" = split-brain active
# Step 2: Check dual-active detection status
VDC-Access-1# show vpc
# Look for: "Dual-Active Detection: Enabled"
# Step 3: Recovery (ALWAYS restore the peer-link first)
# Once peer-link is restored, the secondary will:
# 1. Detect the primary via peer-link
# 2. Suspend its vPC ports
# 3. Wait for delay-restore, then re-sync and restore
# Step 4: If a peer reloaded, perform manual recovery
VDC-Access-2(config)# vpc domain 1
VDC-Access-2(config-vpc-domain)# shutdown
# Restore peer-link, then:
VDC-Access-2(config-vpc-domain)# no shutdown
Scenario E: Orphan Ports Losing Connectivity
Symptom: A server connected to only one vPC peer (single-homed) goes down when that peer loses its uplinks.
# Identify all orphan ports on this peer
VDC-Access-1# show vpc orphan-ports
# These ports lose connectivity if this peer fails
# Suspend designated orphan ports when this peer becomes the operational
# secondary and the peer-link goes down (per-interface command)
VDC-Access-1(config)# interface ethernet 3/20
VDC-Access-1(config-if)# vpc orphan-port suspend
# Optional: shut the orphan port via object tracking + EEM if its uplink fails
# (EEM action syntax varies slightly by NX-OS release)
VDC-Access-1(config)# track 1 interface port-channel 10 line-protocol
VDC-Access-1(config)# event manager applet ORPHAN-SHUT
VDC-Access-1(config-applet)# event track 1 state down
VDC-Access-1(config-applet)# action 1.0 cli conf t
VDC-Access-1(config-applet)# action 2.0 cli interface ethernet 3/20
VDC-Access-1(config-applet)# action 3.0 cli shutdown
# Po10 going down auto-shuts eth3/20 (the orphan port)
Scenario F: Full vPC Health Check Sequence
## === vPC DOMAIN ===
VDC-Access-1# show vpc
# peer adjacency formed ok | peer is alive | peer-link Up | no dual-active event
## === PEER-LINK ===
VDC-Access-1# show interface port-channel 1 | include members|bundle|Up|Down
VDC-Access-1# show vpc consistency-parameters global
# All parameters: Pass
## === KEEPALIVE ===
VDC-Access-1# show vpc peer-keepalive
# Last-send and Last-rcv must be within seconds of each other
## === VPC MEMBER PORTS ===
VDC-Access-1# show vpc brief
# Every vPC must show "up" in both local and peer port-channel columns
## === MAC AND ARP SYNC ===
VDC-Access-1# show mac address-table | include dynamic | wc -l
VDC-Access-2# show mac address-table | include dynamic | wc -l
# Both peers should have approximately matching MAC count
VDC-Access-1# show ip arp vlan 100 | wc -l
VDC-Access-2# show ip arp vlan 100 | wc -l
# With ip arp synchronize enabled, ARP tables should match
## === ORPHAN PORTS ===
VDC-Access-1# show vpc orphan-ports
## === STP ===
VDC-Access-1# show spanning-tree summary | include Root|Secondary|instances
# Both vPC peers should be Root Bridge for all served VLANs
VDC-Access-1# show spanning-tree vlan 100 | include Root|Forwarding|Blocking
# No Blocking ports; all ports Forwarding or PortFast
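The whole sequence condenses to one pass over parsed state. A minimal sketch, assuming the show vpc fields have already been extracted into a dict (the field names here are invented for the sketch):

```python
# Evaluate a parsed vPC health snapshot against the healthy values quoted
# above. Field names are invented for this sketch; the expected strings
# mirror the healthy "show vpc" output shown earlier in the article.
EXPECTED = {
    "peer_status": "peer adjacency formed ok",
    "keepalive_status": "peer is alive",
    "peer_link_status": "Up",
    "dual_active": "No dual-active event detected",
}

def vpc_health_report(snapshot):
    """Return (field, expected, actual) for every unhealthy field."""
    return [(k, want, snapshot.get(k))
            for k, want in EXPECTED.items()
            if snapshot.get(k) != want]

snap = {
    "peer_status": "peer adjacency formed ok",
    "keepalive_status": "peer is not reachable",
    "peer_link_status": "Up",
    "dual_active": "No dual-active event detected",
}
for field, want, got in vpc_health_report(snap):
    print(f"UNHEALTHY {field}: want '{want}', got '{got}'")
```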
Scenario G: VDC Resource and Isolation Verification
# Per-VDC resource allocation
N7K-1# show vdc resource
# Check CPU usage inside a specific VDC
N7K-1# switchto vdc VDC-Core
VDC-Core# show processes cpu sort | head 10
# Check for crashed processes in a VDC
VDC-Core# show system internal sysmgr service all | include crashed
# Verify interface-to-VDC allocation (no overlaps)
N7K-1# show vdc membership status
# Each interface must appear exactly once
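NX-OS itself refuses to double-allocate an interface, so the "exactly once" check is mostly useful as an audit against an off-box inventory. A minimal sketch over a parsed membership map (the dict contents are illustrative):

```python
from collections import Counter

# Audit an off-box inventory: each physical interface should be allocated
# to exactly one VDC. The membership map below is illustrative.
membership = {
    "VDC-Core":   ["Eth2/1", "Eth2/2"],
    "VDC-Access": ["Eth3/1", "Eth3/2", "Eth2/2"],  # Eth2/2 double-allocated
}

def overlapping_interfaces(membership):
    """Return interfaces that appear in more than one VDC's allocation."""
    counts = Counter(intf for intfs in membership.values() for intf in intfs)
    return sorted(intf for intf, n in counts.items() if n > 1)

print(overlapping_interfaces(membership))  # ['Eth2/2']
```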
Common Issues Quick Reference
VDC + vPC Design Checklist
- VDC-Admin reserved for chassis management only; no production traffic
- Each production function in its own isolated VDC with dedicated interface allocation
- Interface-to-VDC mapping documented; each physical port maps to exactly one VDC
- vPC peer-link uses a minimum of 2× dedicated links in a port-channel, never shared with data
- vPC keepalive uses the management VRF with an OOB path; never traverses the peer-link
- peer-gateway enabled on both vPC peers
- auto-recovery reload-delay 240 enabled
- delay restore 150 configured
- ip arp synchronize enabled
- Both vPC peers configured as STP root for all VLANs in the vPC domain
- Orphan ports identified, documented, and protected with vpc orphan-port suspend
- vPC consistency check passes on all parameters before go-live
- Keepalive reachability verified with ping {ip} vrf management from both peers