Overview
F5 BIG-IP LTM (Local Traffic Manager) is the industry standard for enterprise application delivery: it sits between clients and servers to load balance traffic, offload SSL, persist sessions, and provide deep application health monitoring. When it works correctly it is invisible. When it breaks, applications go down, and the troubleshooting path requires understanding the full traffic flow from virtual server through profile processing to pool member selection. This guide covers the full configuration lifecycle and a structured troubleshooting methodology.
Part 1: Virtual Server Configuration
The virtual server is the client-facing IP and port that the BIG-IP listens on. It is the entry point for all traffic processing.
# Create a pool first, then reference it from the virtual server
# TMSH (BIG-IP command line)
# Create pool with round-robin LB and HTTP monitor
admin@BIG-IP# create ltm pool WEB-POOL {
members {
10.1.0.11:8080 { address 10.1.0.11 }
10.1.0.12:8080 { address 10.1.0.12 }
10.1.0.13:8080 { address 10.1.0.13 }
}
load-balancing-mode round-robin
monitor http-app-monitor
}
# Create virtual server: SSL offload (client-ssl profile on VS, plain HTTP to pool)
admin@BIG-IP# create ltm virtual VS-WEB-443 {
destination 10.0.0.10:443
ip-protocol tcp
pool WEB-POOL
profiles {
clientssl { context clientside }
http { }
tcp { }
}
source-address-translation { type automap }
vlans-enabled
vlans { external }
}
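After creation, verify the object from tmsh: `list` prints the saved configuration and `show` prints runtime status. The object name matches the example above.

```
# Verify configuration and runtime status of the new virtual server
admin@BIG-IP# list ltm virtual VS-WEB-443
admin@BIG-IP# show ltm virtual VS-WEB-443
# Availability should read "available" once the pool monitor passes
```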
Part 2: Health Monitors
Health monitors are what make load balancing intelligent. A pool member that fails its monitor is removed from rotation; no manual intervention is required.
# HTTP monitor: checks a specific URL for a specific string
admin@BIG-IP# create ltm monitor http http-app-monitor {
interval 5
timeout 16
send "GET /health HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n"
recv "status: ok"
recv-disable "status: drain"
}
# HTTPS monitor: for pools receiving HTTPS (non-offloaded)
admin@BIG-IP# create ltm monitor https https-monitor {
interval 10
timeout 31
send "GET /health HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n"
recv "200 OK"
ssl-profile /Common/serverssl
}
# TCP monitor: simple port-open check (use only when an HTTP monitor is not possible)
admin@BIG-IP# create ltm monitor tcp tcp-basic {
interval 5
timeout 16
}
Monitor Tuning Guidelines
The timeout value must be greater than interval × (number of retries). The default is timeout = (interval × 3) + 1. Do not set interval below 5 seconds in production; aggressive monitors can overwhelm application health check endpoints.
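Applying that formula: slowing the example monitor to a 10-second interval with the default three retries gives a 31-second timeout. The monitor name matches the one defined above.

```
# timeout = (interval × 3) + 1  →  (10 × 3) + 1 = 31
admin@BIG-IP# modify ltm monitor http http-app-monitor interval 10 timeout 31
```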
Part 3: Load Balancing Methods
The load-balancing mode is set per pool. Round-robin (the default) rotates evenly through members and suits uniform, short-lived requests. Least-connections-member sends new connections to the member with the fewest active connections, which is better for long-lived or uneven workloads. Ratio-member weights members unevenly, useful when pool members have different hardware capacities.
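For example, the pool from Part 1 can be switched to least-connections, or to weighted ratios (the ratio value below is illustrative):

```
# Favor the member with the fewest active connections
admin@BIG-IP# modify ltm pool WEB-POOL load-balancing-mode least-connections-member
# Or weight members for uneven hardware capacity
admin@BIG-IP# modify ltm pool WEB-POOL load-balancing-mode ratio-member
admin@BIG-IP# modify ltm pool WEB-POOL members modify { 10.1.0.11:8080 { ratio 2 } }
```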
Part 4: Persistence Profiles
Persistence ensures a client always returns to the same pool member for the duration of a session; this is essential for stateful applications.
# Cookie persistence: most reliable for HTTP/HTTPS applications
# BIG-IP inserts a cookie that identifies the pool member
admin@BIG-IP# create ltm persistence cookie COOKIE-PERSIST {
cookie-name BIGipServer
expiration 1:0:0
method insert
override disabled
}
# Source IP persistence: use when you cannot rely on cookies
# (non-HTTP, clients that strip cookies, API clients)
admin@BIG-IP# create ltm persistence source-addr SRCIP-PERSIST {
timeout 180
mask 255.255.255.255
}
# SSL session ID persistence: for passthrough SSL (no offload)
admin@BIG-IP# create ltm persistence ssl SSL-PERSIST {
timeout 300
}
# Attach persistence to virtual server
admin@BIG-IP# modify ltm virtual VS-WEB-443 {
persist { COOKIE-PERSIST { default yes } }
fallback-persistence SRCIP-PERSIST
}
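Once attached, active persistence entries can be inspected from tmsh, which is the quickest way to confirm clients are actually sticking. A sketch; the available filter options vary slightly by TMOS version.

```
# Show current persistence records (client-to-member mappings)
admin@BIG-IP# show ltm persistence persist-records
# Filter to a single virtual server
admin@BIG-IP# show ltm persistence persist-records virtual VS-WEB-443
```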
Part 5: SSL Offload Configuration
# Import certificate and key
admin@BIG-IP# install sys crypto cert app-cert.crt from-local-file /var/tmp/app-cert.crt
admin@BIG-IP# install sys crypto key app-cert.key from-local-file /var/tmp/app-cert.key
# Create client-ssl profile (BIG-IP presents this cert to clients)
admin@BIG-IP# create ltm profile client-ssl APP-CLIENT-SSL {
cert app-cert.crt
key app-cert.key
chain intermediate-bundle.crt
ciphers "ECDHE:!NULL:!LOW:!EXPORT:!RC4:!DES:!3DES:!ADH:!ANON"
options { no-sslv2 no-sslv3 no-tlsv1 no-tlsv1.1 }
renegotiation disabled
defaults-from clientssl
}
# Create server-ssl profile (BIG-IP uses this to connect to pool members)
# Only needed if pool members also require HTTPS
admin@BIG-IP# create ltm profile server-ssl APP-SERVER-SSL {
defaults-from serverssl
peer-cert-mode ignore
}
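If the pool members do require HTTPS (re-encryption rather than plain offload), the server-ssl profile is attached to the same virtual server with a serverside context. Names match the examples above.

```
# Attach server-ssl so BIG-IP re-encrypts traffic to pool members
admin@BIG-IP# modify ltm virtual VS-WEB-443 profiles add { APP-SERVER-SSL { context serverside } }
```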
Part 6: iRules
iRules are Tcl-based scripts that run inline on traffic flows. Use for logic that cannot be achieved with standard profiles.
# iRule: Redirect HTTP to HTTPS
when HTTP_REQUEST {
HTTP::redirect https://[HTTP::host][HTTP::uri]
}
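The redirect iRule only fires if a port-80 virtual server exists to receive the plain-HTTP traffic. A minimal sketch, assuming the iRule above was saved as HTTP-REDIRECT; no pool is needed, since every request is redirected:

```
# Port-80 listener whose only job is redirecting to HTTPS
admin@BIG-IP# create ltm virtual VS-WEB-80 {
destination 10.0.0.10:80
ip-protocol tcp
profiles { http { } tcp { } }
rules { HTTP-REDIRECT }
vlans-enabled
vlans { external }
}
```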
# iRule: Route to different pool based on URI path
when HTTP_REQUEST {
if { [HTTP::uri] starts_with "/api/" } {
pool API-POOL
} elseif { [HTTP::uri] starts_with "/static/" } {
pool CDN-POOL
} else {
pool WEB-POOL
}
}
# iRule: Add security headers on response
when HTTP_RESPONSE {
HTTP::header insert "Strict-Transport-Security" "max-age=31536000; includeSubDomains"
HTTP::header insert "X-Content-Type-Options" "nosniff"
HTTP::header insert "X-Frame-Options" "DENY"
}
# iRule: Log client IP and request for debugging
when HTTP_REQUEST {
log local0. "Client=[IP::client_addr] Host=[HTTP::host] URI=[HTTP::uri]"
}
Part 7: Troubleshooting
Step 1: Check Pool Member Status
# Check pool and member status
admin@BIG-IP# show ltm pool WEB-POOL
admin@BIG-IP# show ltm pool WEB-POOL members detail
# Member states: available (green), offline (red), unknown (blue)
# "offline" + "The node is not available" = monitor failing
# "offline" + "pool member has been marked down" = explicit disable
# Show per-member monitor status fields
admin@BIG-IP# show ltm pool WEB-POOL members field-fmt | grep -E "addr|monitor-status|state"
Step 2: Test the Health Monitor Manually
# Simulate the monitor from BIG-IP CLI to the pool member
admin@BIG-IP# curl -sk -H "Host: app.example.com" http://10.1.0.11:8080/health
# Response should match the "recv" string in your monitor definition
# If curl returns what you expect but member is still down, check monitor syntax
# Check monitor configuration
admin@BIG-IP# show ltm monitor http http-app-monitor
Step 3: Verify Virtual Server Traffic
# Check virtual server statistics (packets in/out)
admin@BIG-IP# show ltm virtual VS-WEB-443 stats
# Check connection table
admin@BIG-IP# show sys connection cs-server-addr 10.0.0.10 | head -20
# Packet capture on BIG-IP (tcpdump)
# Capture client-side traffic (before processing)
admin@BIG-IP# tcpdump -ni 0.0:nnn -s 0 host 10.0.0.10 and port 443 -w /var/tmp/vs-client.pcap
# Capture server-side traffic (after processing)
admin@BIG-IP# tcpdump -ni 0.0 -s 0 host 10.1.0.11 and port 8080 -w /var/tmp/pool-server.pcap
Step 4: SSL Troubleshooting
# Check SSL certificate expiry
admin@BIG-IP# show sys crypto cert app-cert.crt
# Check SSL handshake failures in logs
admin@BIG-IP# grep -i ssl /var/log/ltm | tail -50
# Test SSL from external client
openssl s_client -connect 10.0.0.10:443 -servername app.example.com
# Check cipher negotiation
admin@BIG-IP# show ltm profile client-ssl APP-CLIENT-SSL field-fmt
Quick Reference: Common Issues
- Pool member marked down but the application is healthy: the monitor recv string does not match the actual response (see Step 2)
- Virtual server accepts connections but the pool receives nothing: SNAT is missing and return traffic bypasses the BIG-IP
- Clients bounce between pool members mid-session: persistence profile missing, or no fallback persistence configured
- TLS handshake failures after a profile change: the client requires a protocol or cipher disabled in the client-ssl profile
- Configuration changes vanish after a reboot: the running configuration was never saved with tmsh save sys config
F5 BIG-IP Hardening Checklist
- All management access uses HTTPS and SSH only; no HTTP or Telnet to the management IP
- Management interface is in a dedicated management VLAN, not reachable from production traffic
- SSL profiles disable SSLv2, SSLv3, TLSv1.0, and TLSv1.1; TLS 1.2 minimum, TLS 1.3 preferred
- Cipher list follows a hardened baseline; no NULL, EXPORT, RC4, or DES ciphers
- Health monitors use application-level checks (HTTP/HTTPS with a recv string), not bare TCP
- All virtual servers have a persistence profile with an appropriate fallback method
- iRules are reviewed and approved before deployment; untested iRules can drop all traffic
- SNAT (automap or a SNAT pool) is configured on all virtual servers so servers route return traffic through the BIG-IP
- Certificate expiry monitoring is in place; alert 30 days before expiry
- BIG-IP runs as an HA pair (active/standby) with config sync verified after every change
- tmsh save sys config is run after every change; unsaved changes are lost on reboot
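The last two checklist items can be run together after each change. The device-group name SYNC-FAILOVER-GROUP below is a placeholder for your own sync-failover device group.

```
# Save the running configuration, then push it to the standby unit
admin@BIG-IP# save sys config
admin@BIG-IP# run cm config-sync to-group SYNC-FAILOVER-GROUP
# Verify the sync completed
admin@BIG-IP# show cm sync-status
```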