How I set up an external monitoring system with Zabbix to stay alerted even if my Proxmox cluster goes down completely, using a local proxy, a remote VPS, and a PSK-encrypted connection.
<!--truncate-->
## The monitoring paradox: monitoring the monitor
When building a homelab, you quickly install a monitoring system. It's essential: it lets you keep an eye on CPU usage, RAM, disk space, and get alerted before a service crashes.
I was using **Beszel** until now. A simple, lightweight, efficient tool. Perfect for a homelab. Everything works great.
Except there's a problem.
**If my Proxmox cluster goes down, Beszel goes down with it.**
And therefore, my notifications go down too. I'll never be notified that my services are down, since the system that's supposed to warn me is itself out of service.
### Problematic scenarios
Here are some scenarios where my current monitoring becomes useless:
- **Power outage**: No cluster → No monitoring → No alert
- **Main node crash**: The one hosting Beszel → Radio silence
- **Network issue**: The switch dies → Unable to communicate with monitoring
- **Storage corruption**: Linstor DRBD hosting the VMs becomes inaccessible → Nothing works
In all these cases, I'm **never notified**. I discover the problem hours (or days) later when I try to access a service.
For a personal homelab, it's annoying. For critical infrastructure, it's unacceptable.
## The solution: a complementary approach
Rather than replacing Beszel, I decided to implement a **complementary architecture**:
- **Beszel stays in place** for real-time monitoring of VMs and LXCs on a daily basis. It's simple, lightweight, and perfect for monitoring resource usage in real-time.
- **Zabbix complements it** for global Proxmox cluster monitoring, long-term history, and especially **critical alerts** (like complete cluster failure).
This approach combines the best of both worlds: Beszel's simplicity for daily use, and Zabbix's resilience for critical situations.
### Offsite architecture with distributed monitoring
To solve the resilience problem, I need an architecture that respects three constraints:
#### 1. The monitoring server must be **elsewhere**
If my cluster goes down, the monitoring server must remain operational. The solution: host it on a **VPS**, completely independent from my homelab.
Even if all my home infrastructure goes down, the VPS server continues to run and can send me an alert.
#### 2. No open ports on the homelab
I **don't** want to open inbound ports on my local network. This increases the attack surface and security risks.
I want an architecture where:
- The central server (VPS) listens on a port
- A **local proxy** (in my homelab) collects data and **pushes** it to the server
- The connection is **initiated from inside** (no NAT port opening)
The proxy contacts the server, not the other way around. This way, no need for VPN or port forwarding.
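With Zabbix (presented below), this boils down to a few directives in the proxy's configuration file. A minimal sketch, with placeholder names to adapt:

```ini
# /etc/zabbix/zabbix_proxy.conf (excerpt) -- placeholder values
ProxyMode=0                      # 0 = active: the proxy initiates the connection
Server=monitoring.example.com    # public address of the VPS running the server
Hostname=homelab-proxy           # must match the proxy name declared on the server
```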
#### 3. Encrypted communication
Monitoring metrics can reveal sensitive information:
- How many servers I have
- Which services are running
- When I'm away (no activity)
I want communication between the proxy and server to be **end-to-end encrypted**, with a **Pre-Shared Key (PSK)** to prevent any interception or identity spoofing.
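In practice, a PSK is nothing more than a long random secret known to both ends; TLS then uses it both to encrypt the traffic and to authenticate the peer. A sketch of generating one on the proxy (the file path is an example; Zabbix requires at least 128 bits, i.e. 32 hex digits):

```bash
# Generate a 256-bit key in hexadecimal (the format Zabbix expects for PSKs)
openssl rand -hex 32 | sudo tee /etc/zabbix/zabbix_proxy.psk
# Restrict read access (adjust ownership so the zabbix user can read it)
sudo chmod 640 /etc/zabbix/zabbix_proxy.psk
```

The same key is later declared on the server side together with a PSK identity, so a connection is accepted only if both the identity and the key match.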
## Zabbix: the solution that checks all boxes
After evaluating several solutions (Prometheus + Grafana, Netdata, InfluxDB + Telegraf), I chose **Zabbix** for several reasons:
- **Native proxy architecture**: Zabbix was designed from the start to handle proxies that collect locally and send to the central server
- **Active/passive mode**: The proxy can push (active) or be queried (passive)
- **Integrated PSK encryption**: No need to add a VPN tunnel or reverse proxy
- **Proxmox VE template**: Native support for Proxmox REST API
- **Complete interface**: Dashboards, alerts, notifications, graphs... everything is included
- **Mature solution**: Used in enterprises for years, abundant documentation
### Step 1: Zabbix server on the VPS

The server side runs on the VPS as a Docker Compose stack: a PostgreSQL database, the Zabbix server itself, and the web frontend, with credentials kept in a `.env` file.

:::tip
To generate a strong password for the PostgreSQL database, you can use the following OpenSSL command:
```bash
openssl rand -base64 32
```
This command generates 32 random bytes encoded in base64, producing a robust password of about 44 characters. Then replace the `REPLACEME` values in the `.env` file with the generated password.
:::
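For reference, here is a minimal sketch of what the stack can look like. The image tags, network layout, and exact variables are assumptions to adapt to your setup; the official `zabbix/zabbix-server-pgsql` and `zabbix/zabbix-web-nginx-pgsql` images document the full list of environment variables:

```yaml
# docker-compose.yml -- minimal sketch, adapt tags and variables to your setup
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: zabbix
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}  # from the .env file
      POSTGRES_DB: zabbix
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks: [zabbix-tier]

  zabbix-server:
    image: zabbix/zabbix-server-pgsql:alpine-7.0-latest
    environment:
      DB_SERVER_HOST: postgres
      POSTGRES_USER: zabbix
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: zabbix
    ports:
      - "10051:10051"                  # the only published port: proxy traffic
    networks: [zabbix-tier, frontend]

  zabbix-web:
    image: zabbix/zabbix-web-nginx-pgsql:alpine-7.0-latest
    environment:
      ZBX_SERVER_HOST: zabbix-server
      DB_SERVER_HOST: postgres
      POSTGRES_USER: zabbix
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: zabbix
    networks: [zabbix-tier, frontend]  # reached through the Cloudflare Tunnel

networks:
  zabbix-tier:
    internal: true   # no route to the outside: the database stays unreachable
  frontend: {}

volumes:
  pgdata:
```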
**Important points**:
- The `zabbix-tier` network is **internal**: the database is not accessible from outside
- The Zabbix server exposes port **10051** to receive data from proxies
- The web interface is accessible only via **Cloudflare Tunnel** (no exposed public IP)
**Deployment**:
```bash
docker compose up -d
```
After a few seconds, the Zabbix interface is accessible. Default login: `Admin` / `zabbix` (change immediately!).
### Step 2: Zabbix Proxy in LXC
I created a Debian 13 LXC container on the Proxmox cluster to host the proxy.
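A sketch of the setup inside the container, assuming the Zabbix apt repository has already been added (grab the `zabbix-release` package matching your Debian version from the official download page). The SQLite backend is enough for a proxy: it keeps the footprint small and needs no separate database server:

```bash
# Install the proxy with its lightweight SQLite backend
sudo apt update
sudo apt install zabbix-proxy-sqlite3
```

The configuration then ties together everything discussed above (values are examples):

```ini
# /etc/zabbix/zabbix_proxy.conf (excerpt) -- example values
Server=monitoring.example.com            # the VPS running the Zabbix server
Hostname=homelab-proxy                   # must match the name declared in the UI
ProxyMode=0                              # active mode: outbound connection only
DBName=/var/lib/zabbix/zabbix_proxy.db   # SQLite file, created on first start
TLSConnect=psk
TLSPSKIdentity=PSK-homelab-proxy
TLSPSKFile=/etc/zabbix/zabbix_proxy.psk
```

After a `sudo systemctl enable --now zabbix-proxy`, the proxy should appear as online in the server's web interface once it has been declared there with the same name, PSK identity, and key.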
### Going further

Once the proxy is registered in the web interface and connected over PSK, the cluster itself can be monitored through the official "Proxmox VE by HTTP" template, which queries the Proxmox REST API. From there, the remaining steps are to:

- Set up **notifications** (email, Telegram, Discord...)
- Add other **Zabbix agents** on my VMs and LXCs
- Create **custom dashboards** for an overview
- Monitor other services (databases, web servers, etc.)
If my cluster goes down, I now receive an immediate notification instead of discovering the problem several hours later.
## Conclusion
With this complementary architecture, I now benefit from the best of both worlds:
- **Beszel** for daily monitoring, simple and efficient, with real-time view of my VMs and LXCs
- **Zabbix** for global cluster monitoring, long-term history, and critical alerts that work even if my entire homelab goes down
This approach allows me to keep Beszel's simplicity for daily use while having offsite monitoring resilience for critical situations.

If you have a homelab, setting up offsite monitoring is a good way to detect problems quickly, even when your local infrastructure fails completely.
How do you manage monitoring of your infrastructure? Feel free to share your solutions in the comments!