Recently, I had to migrate three Zabbix instances to TimescaleDB because their databases kept growing and performance was suffering, so I decided to write up the procedure I followed and the challenges I faced. Zabbix’s reliance on a relational database for storing time-series metric data is a classic architectural challenge. As the history and trend tables grow into the billions of rows, query performance inevitably degrades. The solution? Augmenting PostgreSQL with TimescaleDB. By leveraging hypertables for automatic partitioning and native compression, you can drastically improve data ingestion, query speed, and storage efficiency. In this post, we’ll detail the technical steps for a seamless migration from a vanilla PostgreSQL backend to TimescaleDB.

Understanding TimescaleDB: The “What” and the “Why”

Before jumping into the procedure, we need to understand what TimescaleDB is and why it’s uniquely suited to solve the performance challenges inherent in a large-scale Zabbix deployment.

What is TimescaleDB?

At its core, TimescaleDB is not a new database; it’s a powerful extension for PostgreSQL. This is its most significant architectural advantage. It transforms your standard PostgreSQL instance into a time-series powerhouse without forcing you to abandon the reliability, familiarity, and rich ecosystem of the world’s most advanced open-source relational database.

The magic behind TimescaleDB lies in its core abstraction: the hypertable. When you convert a Zabbix table like history_uint into a hypertable, it still appears and behaves as a single, continuous table to you and to the Zabbix application. Under the hood, however, TimescaleDB automatically partitions this data into many smaller child tables, called chunks, based on a time interval you define.

For a Zabbix workload, where every query is bound by a time range, this is a game-changer. When you request data for the last hour, the PostgreSQL query planner, guided by TimescaleDB, instantly knows it only needs to scan one or two small chunks instead of sifting through a monolithic table containing billions of rows. This process, known as chunk pruning, is the primary driver of TimescaleDB’s massive query performance gains.
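To make this concrete, here is a sketch of how you can observe chunking and pruning at the SQL level once the Zabbix tables have been converted (Step 3 below). `history_uint` and its integer `clock` column are part of the standard Zabbix schema; your chunk names and plans will differ:

```sql
-- List the chunks backing the history_uint hypertable
SELECT show_chunks('history_uint');

-- A time-bounded query lets the planner prune chunks; the plan should show
-- scans of only the most recent chunk(s), not the whole hypertable
EXPLAIN
SELECT * FROM history_uint
WHERE clock > extract(epoch FROM now() - interval '1 hour');
```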

Why Use It for Zabbix? The Core Advantages

Migrating your Zabbix backend to TimescaleDB addresses the most common performance and scalability pain points directly.

  1. Massive Query Performance Boost: As mentioned, queries now run only against the temporally relevant data chunks instead of performing full table scans, which makes dashboards, graphs, API queries, and the like dramatically faster.
  2. Native, Transparent Compression: Ever had to keep adding disk space to a Zabbix machine as time passes? The odds are high, and compression is one way to manage disk usage. TimescaleDB offers best-in-class columnar compression that can reduce your storage footprint by over 90%. Compression is handled transparently by the database; you can still query compressed data using standard SQL without any application-level changes, and you can set policies to automatically compress data after a certain age (e.g., compress data older than 7 days).
  3. Simplified Data Lifecycle Management: Zabbix relies on a “housekeeper” process to purge old data, which can be I/O intensive. TimescaleDB replaces this with highly efficient data retention policies. Dropping an old chunk is an instantaneous, metadata-only operation, far more efficient than running a DELETE command on millions of rows.
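For reference, this is what compression and retention policies look like in plain TimescaleDB, sketched for a hypothetical hypertable named `metrics` partitioned on a timestamptz column. Zabbix’s history tables use an integer `clock` column, and the bundled schema script plus the frontend housekeeping settings manage these policies for you, so you should not normally set them by hand:

```sql
-- Automatically compress chunks once they are older than 7 days
SELECT add_compression_policy('metrics', compress_after => INTERVAL '7 days');

-- Drop whole chunks older than 90 days: a metadata-only operation,
-- unlike a row-by-row DELETE
SELECT add_retention_policy('metrics', drop_after => INTERVAL '90 days');
```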

TimescaleDB is a proven solution for a wide array of time-series workloads across many industries. By choosing TimescaleDB for Zabbix, you are adopting a technology that is purpose-built and battle-tested for the exact type of data workload that monitoring systems generate: time-series.

The Procedure

Before you begin!

  1. Backup Everything: Back up your database using pg_dumpall or volume snapshots to avoid data loss.
  2. Downtime: This process requires downtime: the Zabbix server and frontend must be stopped for the duration of the migration.
  3. Compatibility: Check Zabbix documentation for compatible versions of PostgreSQL with your Zabbix version.
  4. Docker Image for TimescaleDB: The standard postgres image doesn’t include TimescaleDB, so you have to use an official TimescaleDB image such as timescale/timescaledb:latest-pg17. Pick a tag compatible with the versions identified in the previous step, and as a best practice pin your configuration to a specific version rather than the latest tag.
  5. Migration Time: Migrating existing history, trends, and audit log data to TimescaleDB hypertables can take significant time for large databases.
  6. Free Disk Space: Make sure you have ample free disk space; between backups, restores, and conversions, you may temporarily need several times the size of your database until the migration finishes.
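To put a number on how much space to budget, you can check the current database size up front (the database name zabbix matches the rest of this post; adjust if yours differs):

```sql
SELECT pg_size_pretty(pg_database_size('zabbix'));
```

Plan for several multiples of this figure, plus the size of the SQL dump itself.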

Step 1: Prepare for PostgreSQL Upgrade and TimescaleDB

  1. Backup Your Current Database:
    docker exec -it <postgres_container_name> pg_dumpall -U zabbix > zabbix_backup.sql
    
  2. Stop Your Zabbix Services:
    docker compose down
    
  3. Update Your docker-compose.yaml for PostgreSQL with TimescaleDB: First, update your Docker Compose file to use the TimescaleDB image as below (other parameters in your setup, such as username and password, are omitted here for brevity). Then start just the postgres-server service: docker compose up -d postgres-server
    services:
      postgres-server:
        image: timescale/timescaledb-ha:pg17-latest # Change to appropriate compatible tag
        volumes:
          # Use a NEW volume for the upgraded DB to avoid conflicts; migrate data later
          - postgres-data-new:/var/lib/postgresql/data
        environment:
          POSTGRES_INITDB_ARGS: '--data-checksums'
    

Step 2: Migrate Data to the New PostgreSQL Version

  1. Restore the Backup to the New Database: This performs a logical restore, which is recommended for major version upgrades. If you encounter errors (e.g., due to version differences), use pg_upgrade inside the container (more complex; see PostgreSQL docs).
    docker exec -i <postgres_container_name> psql -U zabbix -d zabbix < zabbix_backup.sql
    
  2. Verify the Database: Exec into the container and check:
    docker exec -it <postgres_container_name> psql -U zabbix -d zabbix -c "\dt"
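Beyond listing the tables, it doesn’t hurt to sanity-check that the history data actually made it across; for example, against `history_uint` from the standard Zabbix schema:

```sql
-- Row count and the timestamp of the most recent sample
SELECT count(*) FROM history_uint;
SELECT max(to_timestamp(clock)) FROM history_uint;
```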
    

Step 3: Enable and Configure TimescaleDB

  • Enable the TimescaleDB Extension:

    docker exec -it <postgres_container_name> psql -U zabbix -d zabbix -c "CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;"
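You can confirm that the extension was created, and note its version for later reference, with a quick query against the system catalog:

```sql
SELECT extname, extversion FROM pg_extension WHERE extname = 'timescaledb';
```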
    
  • Extract the TimescaleDB Schema Script: The script is located inside the Zabbix server image; extract it using the following command. You may need to adjust the path based on your Zabbix image version; this path has been tested on Zabbix 7.0.x and 7.4.

    docker compose run --rm <zabbix_server_service_name> cat /usr/share/doc/zabbix-server-postgresql/timescaledb.sql > timescaledb.sql
    
  • Run the TimescaleDB Schema Script: In my experience, this step takes a while, especially for big databases, and is when disk usage peaks.

    Please ignore warning messages stating that best practices are not being followed while running the timescaledb.sql script on TimescaleDB version 2.9.0 and higher. Regardless of this warning, the configuration will complete successfully.

    The migration of existing history, trends and audit log data may take a lot of time. Zabbix server and frontend must be down for the period of migration.

    docker exec -i <postgres_container_name> psql -U zabbix -d zabbix < timescaledb.sql
    
  • Tune PostgreSQL (Optional): Run the timescaledb-tune tool inside the container to optimize postgresql.conf:

    docker exec -it <postgres_container_name> timescaledb-tune
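If you are curious what the tuner changed, a few of the settings it typically adjusts can be inspected directly in psql (the values it picks depend on your host’s RAM and CPU count):

```sql
SHOW shared_buffers;
SHOW effective_cache_size;
SHOW max_worker_processes;
```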
    

Step 4: Start Zabbix and Verify

  • Start All Services:

    docker compose up -d
    
  • Upgrade TimescaleDB Schema if Needed (for Zabbix Upgrades): If you upgrade Zabbix later (e.g., to 7.4), start the Zabbix server to apply the database upgrades, check the logs for completion, then stop it and re-run the timescaledb.sql script as described in the “Upgrading TimescaleDB schema” section of the Zabbix documentation.

  • Configure Compression and Housekeeping:

    • In Zabbix frontend: Go to Administration → Housekeeping.
    • Enable Override item history period and Override item trend period for partitioned housekeeping if they are not enabled.
    • Set Enable compression and Compress records older than (minimum 7d). Changes take effect within 2x HousekeepingFrequency hours.

    Refer to the Zabbix documentation for more details.

  • Verify:

    • Check Zabbix server logs for warnings (e.g., compression issues).
    • In frontend, check Administration → Housekeeping and System information for configuration warnings.
    • Test queries in psql to confirm hypertables:
      docker exec -it <postgres_container_name> psql -U zabbix -d zabbix -c "\d+ history"
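A couple of TimescaleDB views and functions give a fuller picture than \d+ alone; for example:

```sql
-- All hypertables and how many chunks each has
SELECT hypertable_name, num_chunks
FROM timescaledb_information.hypertables;

-- Compression effectiveness, once compression has started to kick in
SELECT * FROM hypertable_compression_stats('history_uint');
```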
      
  • Update Volumes: Once the migration is complete, you can remove the old volume if it is no longer needed. If you want the new volume to take over the old name, note that Docker has no native volume rename; the usual approach is to copy the data into a volume with the desired name.