Multi-Node Deployment¶
Deploy WireBuddy across multiple geographic locations with the Master-Node architecture.
Overview¶
The Master-Node architecture allows you to:
- Deploy WireGuard servers in multiple countries/regions while managing them from a single interface
- Assign peers to specific nodes based on their geographic location
- Scale horizontally by adding more nodes without touching the master
- Centralize management — all configuration happens on the master server
```mermaid
graph TB
    subgraph "Master Server"
        M[WireBuddy Master<br/>Full Application]
        DB[(SQLite Database)]
        WEB[Web UI]
        M --- DB
        M --- WEB
    end
    subgraph "Node: Frankfurt"
        N1[WireBuddy Node<br/>WireGuard Only]
        WG1[WireGuard wg0]
        N1 --- WG1
    end
    subgraph "Node: New York"
        N2[WireBuddy Node<br/>WireGuard Only]
        WG2[WireGuard wg0]
        N2 --- WG2
    end
    subgraph "Node: Tokyo"
        N3[WireBuddy Node<br/>WireGuard Only]
        WG3[WireGuard wg0]
        N3 --- WG3
    end
    Admin[Admin] --> WEB
    N1 -.Sync Config.-> M
    N2 -.Sync Config.-> M
    N3 -.Sync Config.-> M
    C1[Client EU] --> WG1
    C2[Client US] --> WG2
    C3[Client Asia] --> WG3
    style M fill:#4CAF50
    style N1 fill:#2196F3
    style N2 fill:#2196F3
    style N3 fill:#2196F3
```

Architecture¶
Master Server¶
The master runs the full WireBuddy application including:
- ✅ Web UI and API
- ✅ SQLite database with all configuration
- ✅ User management and authentication
- ✅ Optional: WireGuard interfaces (master can also serve VPN traffic)
- ✅ Optional: DNS resolver (Unbound)
- ✅ Optional: Metrics collection
Node Server¶
A node runs only what's essential for VPN connectivity:
- ✅ WireGuard kernel module and interfaces
- ✅ Lightweight sync daemon (no web server)
- ✅ Self-signed certificate for mutual TLS authentication
- ❌ No database
- ❌ No web UI
- ❌ No DNS resolver
- ❌ No metrics collection
Zero Footprint
Nodes have minimal resource requirements — perfect for small VPS instances.
Security Model¶
Enrollment¶
- Admin creates a node in the master UI → receives a signed enrollment token
- Token is provided to the node via the `WIREBUDDY_ENROLLMENT_TOKEN` environment variable
- Node generates a self-signed EC P-256 certificate on first boot
- Node enrolls with the master using the token and sends its certificate fingerprint
- Token is single-use and expires after successful enrollment
Token format:
Signed with WIREBUDDY_SECRET_KEY — cannot be forged without access to the master.
Mutual Authentication¶
After enrollment, all sync traffic uses mutual certificate authentication:
- Node → Master: `Bearer {api_secret}` + `X-Client-Cert-Fingerprint` header
- Master validates both the API secret hash and the certificate fingerprint
- No external PKI required — fully self-contained
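A sketch of the master-side check under these assumptions (hypothetical names; WireBuddy's real request handler will differ):

```python
import hashlib
import hmac


def authenticate_sync_request(headers: dict, stored_secret_hash: str,
                              stored_fingerprint: str) -> bool:
    """Validate both the API secret and the client certificate fingerprint."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    api_secret = auth.removeprefix("Bearer ")
    secret_hash = hashlib.sha256(api_secret.encode()).hexdigest()
    fingerprint = headers.get("X-Client-Cert-Fingerprint", "")
    # Both factors must match; compare_digest avoids timing side channels.
    return (hmac.compare_digest(secret_hash, stored_secret_hash)
            and hmac.compare_digest(fingerprint, stored_fingerprint))
```

Storing only the hash of the API secret means a database leak on the master does not directly expose node credentials.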
Network Security¶
Firewall Configuration
- Master API endpoint (`/api/nodes/*`) should be firewall-protected
- Only allow node IPs to access the sync endpoints
- Use a VPN tunnel between master and nodes for additional security
Deployment Guide¶
1. Deploy Master Server¶
Standard WireBuddy installation with SERVER_MODE=master (default):
```yaml
# docker-compose.yml
version: '3.8'
services:
  wirebuddy:
    image: giiibates/wirebuddy:latest
    container_name: wirebuddy-master
    network_mode: host
    cap_add:
      - NET_ADMIN
    environment:
      - WIREBUDDY_SECRET_KEY=${YOUR_SECRET_KEY}
      - SERVER_MODE=master  # or omit (master is default)
      - LOG_LEVEL=INFO
    volumes:
      - ./data:/app/data
    restart: unless-stopped
```
2. Create Node in UI¶
- Navigate to Nodes in the sidebar (admin-only)
- Click Add Node
- Fill out the form:
  - Name: Display name (e.g., "Frankfurt", "NYC-01") — must be unique
  - FQDN/IP: Public address where clients will connect (e.g., `de.vpn.example.com`) — must be unique
  - WireGuard Port: Node's WireGuard listen port (default: `51820`)
Uniqueness Constraints
Both Name and FQDN must be unique across all nodes. Duplicate values will be rejected with a 409 Conflict error.
- Click Create → enrollment token is displayed
Token Display
The token is shown only once. Copy it immediately and store securely.
3. Deploy Node Server¶
Host Prerequisites¶
Before deploying the node container, the host machine must have IP forwarding enabled:
```bash
# Enable immediately
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1

# Persist across reboots
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.d/99-wireguard.conf
echo "net.ipv6.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.d/99-wireguard.conf
sudo sysctl --system
```
Required for VPN Traffic
Without IP forwarding, peers can connect but cannot route internet traffic through the node.
Docker Compose¶
Create a docker-compose.yml on the node machine:
```yaml
services:
  wirebuddy-node:
    image: giiibates/wirebuddy:latest
    container_name: wirebuddy-node
    restart: always
    network_mode: host
    cap_add:
      - NET_ADMIN
    environment:
      SERVER_MODE: node
      WIREBUDDY_ENROLLMENT_TOKEN: "${WIREBUDDY_ENROLLMENT_TOKEN}"
      LOG_LEVEL: INFO
      TZ: ${TZ:-Etc/UTC}
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    security_opt:
      - no-new-privileges:true
    devices:
      - /dev/net/tun:/dev/net/tun
    volumes:
      - ./data:/app/data
```
Environment Variables:
| Variable | Required | Description |
|---|---|---|
| `SERVER_MODE` | Yes | Must be `node` |
| `WIREBUDDY_ENROLLMENT_TOKEN` | Yes | Token from master UI (contains master URL) |
| `LOG_LEVEL` | No | Logging verbosity (default: `INFO`) |
| `TZ` | No | Timezone (default: `Etc/UTC`) |
Master URL in Token
The master URL is embedded in the enrollment token — no separate WIREBUDDY_MASTER_URL variable needed.
Start the node:
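Assuming the compose file above is in the current working directory, this is the usual invocation:

```shell
docker compose up -d
```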
4. Verify Enrollment¶
Check node logs:
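Assuming the container name from the compose file above (`wirebuddy-node`):

```shell
docker logs -f wirebuddy-node
```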
Expected output:
```
NODE_DAEMON starting enrollment with master
NODE_DAEMON enrollment successful, node_id=abc123def456
NODE_DAEMON starting sync loop (interval=30s)
NODE_DAEMON heartbeat sent (status=online)
```
In the master UI, the node status should change from pending → online.
Usage¶
Assigning Peers to Nodes¶
When creating or editing a peer:
- In the Add Peer or Edit Peer modal
- Select the target node from the Node dropdown
- Leave empty for local peers (master's WireGuard interface)
The peer's configuration will include the node's FQDN and port as the Endpoint.
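Endpoint selection reduces to a simple rule: use the assigned node's address if there is one, otherwise fall back to the master. A sketch with illustrative names:

```python
def peer_endpoint(peer: dict, master_fqdn: str, master_port: int,
                  nodes: dict[str, dict]) -> str:
    """Return the Endpoint value for a peer's WireGuard client config."""
    node = nodes.get(peer.get("node_id"))
    if node is None:
        # Local peer: terminates on the master's own WireGuard interface
        return f"{master_fqdn}:{master_port}"
    # Remote peer: terminates on the assigned node
    return f"{node['fqdn']}:{node['wg_port']}"
```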
QR Code Generation¶
QR codes automatically reflect the node assignment:
- Local peer: Master's FQDN + master's public key
- Remote peer: Node's FQDN + node interface's public key
Node Badge¶
For peers assigned to a remote node, the QR code image includes a coloured badge showing the node name (e.g., "Frankfurt", "NYC-01"). This makes it easy to identify which VPN exit a configuration targets — especially useful when printing QR codes or managing many devices.
Local Peers
Peers assigned to the master (local) do not show a node badge.
Node Management¶
View Node Status:
- Navigate to Nodes page
- Each node shows:
- Status badge (online/offline/pending/error)
- Last seen timestamp
- WireGuard port
- Number of assigned peers
Edit Node:
- Update name, FQDN, or WireGuard port
- Changes propagate to all peers on that node on next sync
Regenerate Enrollment Token:
- If a node needs to re-enroll (e.g., lost certificate)
- Old token is invalidated
- Deploy new token to node
Delete Node:
- Removes node from database
- All peers on that node are unassigned (node_id set to NULL)
- Node will fail authentication on next sync attempt
Peer Handling
Deleting a node does not delete its peers. Update peer assignments before deletion.
Sync Behavior¶
Heartbeat¶
Nodes send a heartbeat every 30 seconds with:
- Current timestamp
- WireGuard interface status
Master updates last_seen timestamp and sets status=online.
Config Pull¶
Nodes fetch configuration every 30 seconds and compare config_version:
- If version changed → apply config diff
- Only changed interfaces are updated (no service disruption)
- Only changed peers within an interface are updated
Stale Detection¶
Master's scheduled task runs every 60 seconds and marks nodes as offline if:
- `last_seen` is older than 90 seconds
Error Handling¶
Nodes use exponential backoff on sync failures:
- Initial retry: 5 seconds
- Max backoff: 5 minutes
- Backoff resets on successful sync
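As a sketch, the schedule above (5 s initial, doubling to a 5-minute cap, reset on success) could look like:

```python
class SyncBackoff:
    """Exponential backoff: 5s initial, doubling up to a 300s cap."""

    INITIAL = 5
    MAX = 300

    def __init__(self) -> None:
        self._delay = 0  # 0 means "no pending backoff"

    def next_delay(self) -> int:
        """Call after a failed sync; returns seconds to wait before retrying."""
        self._delay = self.INITIAL if self._delay == 0 else min(self._delay * 2, self.MAX)
        return self._delay

    def reset(self) -> None:
        """Call after a successful sync."""
        self._delay = 0
```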
Configuration Details¶
WireGuard Interfaces on Nodes¶
Nodes create WireGuard interfaces based on master configuration:
- Interface name, IP address, listen port from master
- Keypairs are node-specific and stored in the `node_interfaces` table
- Private keys are Fernet-encrypted in the master database
DNS Resolution & Tunneling¶
Nodes do not run their own DNS resolver. Instead, a WireGuard tunnel is automatically created between each node and the master during enrollment:
- Master allocates a tunnel IP for the node on the first WireGuard interface
- Node configures the master as a WireGuard peer with `PersistentKeepalive = 25`
- Node peers receive the master's DNS server IP (Unbound) in their client config
- DNS queries from node peers are routed through the WireGuard tunnel back to the master
This ensures:
- Centralised ad-blocking — all peers benefit from the master's blocklists, regardless of which node they connect to
- Centralised DNS logging — all queries appear in the master's DNS log
- No DNS software required on nodes — keeps node footprint minimal
Internet Traffic
Only DNS traffic is tunnelled to the master. Regular internet traffic exits directly through the node's outbound connection.
Metrics & Reliable Delivery¶
Nodes collect WireGuard peer statistics (rx/tx bytes, handshakes) and deliver them to the master using a reliable queue with at-least-once delivery guarantees.
Architecture¶
```mermaid
sequenceDiagram
    participant WG as WireGuard
    participant Q as Local Queue<br/>(SQLite)
    participant N as Node Daemon
    participant M as Master API
    loop Every 30s
        WG->>N: wg show dump
        N->>Q: Enqueue metrics (seq 1,2,3...)
        N->>M: POST /heartbeat + batch (seq_from=1, seq_to=3)
        M->>M: Write to TSDB (skip duplicates via last_metric_seq)
        M-->>N: Response: acked_seq=3
        N->>Q: DELETE WHERE seq <= 3
    end
```

Delivery Guarantees¶
| Guarantee | How |
|---|---|
| At-least-once | Metrics remain in local queue until master ACKs |
| Idempotency | Master tracks last_metric_seq per node, skips duplicates |
| Crash-safe | SQLite WAL mode survives node/master restarts |
| Offline-tolerant | Queue grows up to 10,000 metrics during disconnection |
Queue Configuration¶
| Setting | Value | Description |
|---|---|---|
| `MAX_QUEUE_SIZE` | 10,000 | Oldest metrics dropped on overflow |
| `MAX_BATCH_SIZE` | 500 | Metrics per heartbeat |
| Database | `data/metrics_queue.db` | SQLite with WAL mode |
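The queue semantics in the tables above can be sketched with plain SQLite. The schema and function names are illustrative, not WireBuddy's actual `metrics_queue` module:

```python
import json
import sqlite3

MAX_QUEUE_SIZE = 10_000


def open_queue(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")  # crash-safe across restarts
    conn.execute("CREATE TABLE IF NOT EXISTS metrics "
                 "(seq INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")
    return conn


def enqueue(conn: sqlite3.Connection, metric: dict) -> None:
    conn.execute("INSERT INTO metrics (payload) VALUES (?)", (json.dumps(metric),))
    # On overflow, drop the oldest entries first
    conn.execute("DELETE FROM metrics WHERE seq <= "
                 "(SELECT MAX(seq) - ? FROM metrics)", (MAX_QUEUE_SIZE,))
    conn.commit()


def next_batch(conn: sqlite3.Connection, limit: int = 500) -> list[tuple[int, dict]]:
    rows = conn.execute("SELECT seq, payload FROM metrics ORDER BY seq LIMIT ?",
                        (limit,)).fetchall()
    return [(seq, json.loads(p)) for seq, p in rows]


def ack(conn: sqlite3.Connection, acked_seq: int) -> None:
    """Delete everything the master has acknowledged (at-least-once delivery)."""
    conn.execute("DELETE FROM metrics WHERE seq <= ?", (acked_seq,))
    conn.commit()
```

Because entries are deleted only after the master's ACK, a crash between send and ACK causes a resend, which the master deduplicates via `last_metric_seq`.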
Metric Types¶
- `peer_traffic`: `rx_bytes`, `tx_bytes` per peer (cumulative counters)
- `peer_handshake`: `latest_handshake`, `endpoint` per peer
TSDB Integration
Metrics from nodes are written to the master's TSDB under the peer's public key, appearing alongside local peer metrics in the dashboard traffic charts.
Troubleshooting¶
Peers Connect But Cannot Browse Internet¶
Symptom: VPN tunnel establishes (handshake OK) but no internet access through node.
Check IP forwarding:
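On Linux the flags can be read straight from `/proc`; both should print `1`:

```shell
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv6/conf/all/forwarding 2>/dev/null || echo "IPv6 disabled"
```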
If not enabled, see Host Prerequisites above.
Check iptables FORWARD chain:
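A standard listing with rule numbers and packet counters is:

```shell
sudo iptables -L FORWARD -v -n --line-numbers
```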
The WireGuard rules should be at positions 1 and 2:
```
num   pkts bytes target  prot opt in   out  source     destination
1        0     0 ACCEPT  all  --  *    wg0  0.0.0.0/0  0.0.0.0/0
2      193 12700 ACCEPT  all  --  wg0  *    0.0.0.0/0  0.0.0.0/0
```
UFW Users
WireBuddy uses -I FORWARD 1 to insert rules before UFW. No manual UFW configuration needed.
Node Shows "Offline" in UI¶
Check node logs:
Common issues:
- Network connectivity: Master URL not reachable from node
- Firewall: Master API port blocked
- Certificate mismatch: Delete `node-data/cert.pem` and `node-data/key.pem`, then restart the node
Node Fails Enrollment¶
Error: "Invalid enrollment token"
- Token was already used
- Token signature is invalid (mismatch in `WIREBUDDY_SECRET_KEY`)
- Regenerate the token in the master UI
Error: "Node ID not found"
- Node was deleted from master after enrollment
- Recreate node and re-deploy with new token
Peer Config Shows Wrong Endpoint¶
Symptoms:
- Peer QR code or config shows master's FQDN instead of node's
Solution:
- Check the peer's `node_id` assignment in the UI
- Verify the node's FQDN is correct on the Nodes page
- Re-download peer config after fixing
DNS Not Working Through Node¶
Symptom: Internet works with 9.9.9.9 but not with the default DNS (10.13.13.1).
DNS queries are routed through the WireGuard tunnel to the master's Unbound resolver. Check the route:
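Assuming the default tunnel DNS IP mentioned above (`10.13.13.1`), the route can be inspected on the node host with:

```shell
ip route get 10.13.13.1
```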
If it shows via eth0, the route is missing. This indicates a sync issue — restart the node:
Config Changes Not Propagating¶
Check:
- Node status is `online` (not `offline` or `error`)
- Node logs show successful config pulls: `NODE_DAEMON config applied`
- Master's `config_version` incremented after peer changes (check logs)
Force sync:
- Restart the node: `docker restart wirebuddy-node-frankfurt`
- Node will fetch the latest config on startup
Node Metrics Not Appearing in Dashboard¶
Symptom: Remote peers show "Last seen" but no traffic data in charts.
Check node logs for ACK:
Expected: ACK received: deleted X metrics (up to seq Y)
If no ACK:
- Master endpoint may be timing out
- Check disk space on the node (`data/metrics_queue.db` grows during disconnection)
Check queue stats:
```bash
docker exec wirebuddy-node python -c "
from pathlib import Path
from app.node.metrics_queue import init_queue, get_queue_stats, close_queue
conn = init_queue(Path('/app/data'))
print(get_queue_stats(conn))
close_queue(conn)
"
```
Force flush:
- Restart node to trigger immediate heartbeat + ACK
- Check master TSDB for peer data: `ls data/tsdb/peers/`
API Reference¶
Admin Endpoints (Master)¶
POST /api/nodes
GET /api/nodes
GET /api/nodes/{node_id}
PATCH /api/nodes/{node_id}
DELETE /api/nodes/{node_id}
POST /api/nodes/{node_id}/token
Sync Endpoints (Node → Master)¶
See API Reference for full documentation.
Performance Considerations¶
Master Server¶
- No performance impact from nodes (sync traffic is minimal)
- Database grows by ~1 KB per node
- API rate limiting applies to node sync endpoints (configurable)
Node Server¶
- RAM: ~50 MB base + WireGuard kernel module overhead
- CPU: Idle <1%, config sync spikes to ~5% for <1 second
- Network: ~200 bytes/30s for heartbeat + config (if unchanged)
- Disk: ~10 MB (Python runtime + certificate)
Scaling Limits¶
- Tested: 1 master + 10 nodes, 1000 total peers
- Theoretical: 1 master + 100+ nodes (limited by SQLite write contention)
- Recommended: Use read replicas or TSDB for large deployments (>10 nodes)
Future Enhancements¶
Features planned for future releases:
- Node-to-master VPN tunnel with automatic setup
- ~~Remote peer metrics collection (agent on nodes)~~ Implemented via reliable queue
- Health checks with automatic failover
- DNS resolver on nodes (optional)
- Multi-master with Raft consensus
- Web-based node monitoring dashboard
FAQ¶
Can a node also be a master?
No. A server runs either as master or node, not both. However, a master can have local WireGuard interfaces serving peers directly.
Can peers connect to multiple nodes?
No. Each peer is assigned to exactly one node. For multi-path, configure separate peers.
What happens if the master goes down?
Nodes continue serving VPN traffic with their last known config. Re-sync resumes when master is back online. No auth/config changes are possible while master is down.
Can I use Let's Encrypt on nodes?
Not needed. Nodes use self-signed certificates for master authentication only. Client-facing certificates (if any) should be provisioned separately.
How do I migrate a peer from one node to another?
Edit the peer in the UI and change the Node dropdown. The peer's config regenerates with the new endpoint.
Can I run a node without Docker?
Yes. Set SERVER_MODE=node and the enrollment variables in .env, then run python run.py. Requires Python 3.11+ and WireGuard installed.