# Integration Tests
This directory contains integration tests for Neuwerk using isolated network namespaces.
## Requirements

- **Linux**: Tests use Linux network namespaces for isolation
- **Root privileges**: Required for network namespace operations and eBPF
- **Neuwerk binary**: Built in `bin/neuwerk` or available in `PATH`
## Test Structure

### Test Environment (`testenv/`)

- `setup.go`: Network namespace setup and veth pair configuration
- `dns_mock.go`: Mock DNS server for controlled DNS responses
- `client.go`: Test client utilities for network operations
- `neuwerk.go`: Neuwerk instance manager for starting/stopping instances

### Test Fixtures (`fixtures/`)

- `policy-static.yaml`: Static IP and DNS policies for testing
- `policy-dns-only.yaml`: DNS-only policies
- `policy-updated.yaml`: Updated policies for testing policy changes

### Test Suites

- `single_node_test.go`: Single-node integration tests
- `ha_multi_node_test.go`: HA multi-node integration tests
## Running Tests

### Prerequisites

- Build Neuwerk:

  ```sh
  make build
  ```

- Ensure the BPF filesystem is mounted:

  ```sh
  sudo mount -t bpf none /sys/fs/bpf
  ```
### Single-Node Tests

```sh
# Run all single-node tests
sudo make test.integration.single

# Or using ginkgo directly
sudo ginkgo run -v --label-filter="single-node" ./integration
```
### HA Multi-Node Tests

```sh
# Run all HA tests
sudo make test.integration.ha

# Or using ginkgo directly
sudo ginkgo run -v --label-filter="ha" ./integration
```
### All Integration Tests

```sh
# Run all integration tests
sudo make test.integration

# Or using ginkgo directly
sudo ginkgo run -v --label-filter="integration" ./integration
```
## Test Coverage

### Single-Node Tests

- **Static IP Policy**: Tests static IP allow-listing
  - Allowed IP and port combinations
  - Blocked traffic (wrong port, non-allowed IP)
- **DNS-Based Policy**: Tests DNS-driven policy updates
  - DNS resolution and map updates
  - Wildcard hostname patterns
  - DNS failure handling
- **Latency Tests**: Performance measurements
  - Packet processing latency (P50, P95, P99)
  - DNS resolution latency
- **Edge Cases**: Concurrent operations and edge cases
  - Concurrent DNS resolutions
  - Multiple simultaneous connections
### HA Multi-Node Tests

- **Distributed DNS State**: Tests DNS state propagation
  - A DNS resolution on one node propagates to all nodes
  - Traffic is allowed from any node after DNS resolution
- **Node Failure**: Tests cluster resilience
  - Coordinator failure and re-election
  - Non-coordinator node failure
- **Concurrent Operations**: Tests under load
  - Concurrent DNS resolutions across nodes
  - Concurrent traffic from multiple nodes
## How It Works

### Single-Node Network Architecture

Tests use Linux network namespaces to create isolated network environments:

```
┌─────────────────────────────────────┐
│          Client Namespace           │
│  - Client IP: 10.100.1.100          │
│  - veth: cli* <-> nwi*              │
│  - Default route via 10.100.1.10    │
└─────────────────────────────────────┘
                  │
                  │ veth pair
                  │
┌─────────────────────────────────────┐
│          Neuwerk Namespace          │
│  - Ingress IP: 10.100.1.10          │
│  - Egress IP: 10.100.2.10           │
│  - veth: nwi* (ingress)             │
│  - veth: nwe* (egress)              │
│  - eBPF program attached            │
│  - Default route via 10.100.2.100   │
└─────────────────────────────────────┘
                  │
                  │ veth pair
                  │
┌─────────────────────────────────────┐
│         Upstream Namespace          │
│  - Upstream IP: 10.100.2.100        │
│  - Mock DNS server (port 53)        │
│  - Mock HTTP server (192.0.2.100)   │
│  - veth: ups*                       │
└─────────────────────────────────────┘
```
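To make the topology concrete, here is a sketch of the `ip(8)` command sequence a setup helper such as `testenv/setup.go` would plausibly issue. The namespace names and the exact flags are assumptions; the doc only guarantees the `cli*`/`nwi*`/`nwe*`/`ups*` interface prefixes and the IPs shown in the diagram.

```go
package main

import "fmt"

// singleNodeCommands returns an ordered list of ip(8) commands that would
// build the three-namespace topology above (client <-> neuwerk <-> upstream).
// Namespace names (client-0 etc.) are hypothetical; interface prefixes and
// addresses follow the diagram.
func singleNodeCommands(id string) []string {
	return []string{
		// Three isolated namespaces.
		"ip netns add client-" + id,
		"ip netns add neuwerk-" + id,
		"ip netns add upstream-" + id,
		// veth pairs: client<->neuwerk (ingress) and neuwerk<->upstream (egress).
		fmt.Sprintf("ip link add cli%s type veth peer name nwi%s", id, id),
		fmt.Sprintf("ip link add nwe%s type veth peer name ups%s", id, id),
		// Move each end into its namespace.
		fmt.Sprintf("ip link set cli%s netns client-%s", id, id),
		fmt.Sprintf("ip link set nwi%s netns neuwerk-%s", id, id),
		fmt.Sprintf("ip link set nwe%s netns neuwerk-%s", id, id),
		fmt.Sprintf("ip link set ups%s netns upstream-%s", id, id),
		// Addresses from the diagram.
		fmt.Sprintf("ip -n client-%s addr add 10.100.1.100/24 dev cli%s", id, id),
		fmt.Sprintf("ip -n neuwerk-%s addr add 10.100.1.10/24 dev nwi%s", id, id),
		fmt.Sprintf("ip -n neuwerk-%s addr add 10.100.2.10/24 dev nwe%s", id, id),
		fmt.Sprintf("ip -n upstream-%s addr add 10.100.2.100/24 dev ups%s", id, id),
		// Default routes matching the diagram.
		fmt.Sprintf("ip -n client-%s route add default via 10.100.1.10", id),
		fmt.Sprintf("ip -n neuwerk-%s route add default via 10.100.2.100", id),
	}
}

func main() {
	for _, c := range singleNodeCommands("0") {
		fmt.Println(c)
	}
}
```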
### Multi-Node (HA) Network Architecture

For HA testing, multiple Neuwerk nodes share a common upstream namespace connected via a Linux bridge. Each node has unique IPs to avoid conflicts:

```
┌──────────────────┐   ┌──────────────────┐   ┌──────────────────┐
│   Client NS 0    │   │   Client NS 1    │   │   Client NS 2    │
│ IP: 10.100.1.100 │   │ IP: 10.100.1.101 │   │ IP: 10.100.1.102 │
└────────┬─────────┘   └────────┬─────────┘   └────────┬─────────┘
         │ veth                 │ veth                 │ veth
         │                      │                      │
┌────────┴─────────┐   ┌────────┴─────────┐   ┌────────┴─────────┐
│   Neuwerk NS 0   │   │   Neuwerk NS 1   │   │   Neuwerk NS 2   │
│ Ingress:         │   │ Ingress:         │   │ Ingress:         │
│   10.100.1.10    │   │   10.100.1.11    │   │   10.100.1.12    │
│ Egress:          │   │ Egress:          │   │ Egress:          │
│   10.100.2.10    │   │   10.100.2.11    │   │   10.100.2.12    │
│ Mgmt veth for    │   │ Mgmt veth for    │   │ Mgmt veth for    │
│   HA cluster     │   │   HA cluster     │   │   HA cluster     │
└────────┬─────────┘   └────────┬─────────┘   └────────┬─────────┘
         │ veth (ups*)          │ veth (ups*)          │ veth (ups*)
         │                      │                      │
         └──────────────────────┼──────────────────────┘
                                │
             ┌──────────────────┴──────────────────┐
             │     Shared Upstream Namespace       │
             │                                     │
             │  ┌──────────────────────────┐       │
             │  │   Linux Bridge (ubr*)    │       │
             │  │  - IP: 10.100.2.100/24   │       │
             │  │  - IP: 192.0.2.100/32    │       │
             │  │    (test destination)    │       │
             │  │                          │       │
             │  │ Ports: ups0, ups1, ups2  │       │
             │  └──────────────────────────┘       │
             │                                     │
             │  Mock DNS Server (0.0.0.0:53)       │
             │  Mock HTTP Server (192.0.2.100)     │
             │                                     │
             │  Routes:                            │
             │  - 10.100.1.100/32 via 10.100.2.10  │
             │  - 10.100.1.101/32 via 10.100.2.11  │
             │  - 10.100.1.102/32 via 10.100.2.12  │
             └─────────────────────────────────────┘

             ┌─────────────────────────────────────┐
             │      Management Bridge (mbr*)       │
             │  (connects all Neuwerk nodes        │
             │   for NATS cluster comms)           │
             │                                     │
             │      Ports: mgt0, mgt1, mgt2        │
             └─────────────────────────────────────┘
```
### Multi-Node IP Assignments
| Node | Client IP | Ingress IP | Egress IP | Mgmt Port |
|---|---|---|---|---|
| 0 | 10.100.1.100 | 10.100.1.10 | 10.100.2.10 | 3322 |
| 1 | 10.100.1.101 | 10.100.1.11 | 10.100.2.11 | 3322 |
| 2 | 10.100.1.102 | 10.100.1.12 | 10.100.2.12 | 3322 |
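The table above follows a simple per-node offset scheme, which could be derived programmatically along these lines (a sketch; the helper and type names are ours, and the real testenv code may compute these differently):

```go
package main

import "fmt"

// nodeAddrs holds the per-node addresses from the IP assignment table.
type nodeAddrs struct {
	Client, Ingress, Egress string
	MgmtPort                int
}

// addrsFor derives node N's addresses: client IPs start at .100,
// ingress/egress IPs at .10, on the 10.100.1.0/24 and 10.100.2.0/24 nets.
func addrsFor(node int) nodeAddrs {
	return nodeAddrs{
		Client:   fmt.Sprintf("10.100.1.%d", 100+node),
		Ingress:  fmt.Sprintf("10.100.1.%d", 10+node),
		Egress:   fmt.Sprintf("10.100.2.%d", 10+node),
		MgmtPort: 3322, // NATS cluster routing port, the same on every node
	}
}

func main() {
	for node := 0; node < 3; node++ {
		fmt.Printf("node %d: %+v\n", node, addrsFor(node))
	}
}
```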
### Key Implementation Details

- **Upstream Bridge**: All egress veths (`ups*`) are attached to a Linux bridge in the shared upstream namespace. This provides L2 connectivity between all nodes and the mock servers.
- **Bridge IPs**: The bridge has two IPs:
  - `10.100.2.100/24`: Gateway for Neuwerk egress traffic
  - `192.0.2.100/32`: Test destination IP (RFC 5737 TEST-NET-1)
- **Routing**:
  - Each Neuwerk NS has a default route via `10.100.2.100`
  - The upstream NS has /32 routes to each client IP via the respective egress IPs, which enables return traffic to reach the correct node
- **Management Network**: A separate bridge connects all Neuwerk nodes for NATS cluster communication (cluster routing on port 3322, client port on 3320).
- **Mock Servers**:
  - The DNS server binds to `0.0.0.0:53` in the upstream NS (reachable from any node)
  - The HTTP server binds to `192.0.2.100:443` for connection tests
## Test Flow

1. **Setup**: Create network namespaces and veth pairs
2. **Start Services**: Start the mock DNS server and Neuwerk instances
3. **Run Tests**: Execute test scenarios
4. **Cleanup**: Remove namespaces and resources
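The key ordering constraint in this flow is that cleanup must run even when a test fails, or namespaces leak (see Troubleshooting). A standalone sketch of that ordering using Go's `defer` (the real suites use Ginkgo, where this would live in suite setup/teardown hooks; the phase labels here are hypothetical):

```go
package main

import "fmt"

// runSuite sketches the four phases with defer-based cleanup, so teardown
// runs last regardless of how the earlier phases exit. It returns the log
// of executed phases for illustration.
func runSuite() (log []string) {
	log = append(log, "setup: namespaces + veth pairs")
	// Registered immediately after setup, so cleanup always follows it.
	defer func() { log = append(log, "cleanup: remove namespaces") }()
	log = append(log, "start: mock DNS + neuwerk instances")
	log = append(log, "run: test scenarios")
	return log
}

func main() {
	for _, step := range runSuite() {
		fmt.Println(step)
	}
}
```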
## Mock DNS Server

The mock DNS server provides controlled DNS responses:

- Pre-configured A records
- Configurable response delays for latency testing
- Wildcard pattern matching
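As an illustration of the wildcard matching behavior, a minimal matcher might look like this. It treats a leading `*.` as matching one or more left-most labels; the function name is ours, and the real matcher in `dns_mock.go` may handle edge cases differently.

```go
package main

import (
	"fmt"
	"strings"
)

// matchHostname reports whether host matches pattern. A pattern starting
// with "*." matches any host ending in the remaining suffix (with at least
// one extra label); otherwise an exact, case-insensitive match is required.
func matchHostname(pattern, host string) bool {
	if !strings.HasPrefix(pattern, "*.") {
		return strings.EqualFold(pattern, host)
	}
	suffix := pattern[1:] // "*.example.com" -> ".example.com"
	return len(host) > len(suffix) &&
		strings.EqualFold(host[len(host)-len(suffix):], suffix)
}

func main() {
	fmt.Println(matchHostname("*.example.com", "api.example.com")) // true
	fmt.Println(matchHostname("*.example.com", "example.com"))     // false
}
```

Note that under these semantics the bare apex (`example.com`) does not match `*.example.com`; a policy wanting both would need to list both patterns.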
## Troubleshooting

### Tests Fail with "permission denied"

Ensure you're running the tests as root:

```sh
sudo make test.integration
```

### Tests Fail with "network namespace not found"

Network namespaces may not have been cleaned up properly. Try:

```sh
# Clean up orphaned namespaces (awk strips the "(id: N)" suffix
# that `ip netns list` appends for namespaces with an assigned id)
sudo ip netns list | awk '{print $1}' | xargs -r -n1 sudo ip netns delete
```

### Tests Fail with "BPF filesystem not mounted"

Mount the BPF filesystem:

```sh
sudo mount -t bpf none /sys/fs/bpf
```

### Neuwerk Binary Not Found

Build the binary first:

```sh
make build
```

Or ensure it's in your `PATH`.
## Limitations

- **Linux only**: Network namespaces are Linux-specific
- **Root required**: Network operations require root privileges
- **No VM isolation**: Tests run on the host system, relying on namespaces for isolation
- **Shared process namespace**: Neuwerk instances run in separate network namespaces but share the host's process namespace
## Future Improvements
- Add support for TCP DNS server testing
- Implement connection state tracking tests
- Add load testing scenarios
- Support for IPv6 testing
- Integration with CI/CD pipelines