3 Database upgrade to primary keys

Overview

This section provides instructions for manually upgrading tables in existing installations to primary keys.

Upgrading to primary keys optimizes how data is indexed and accessed, which may speed up queries and save space. It also improves data management and synchronization in clustered setups, helping with scaling and ensuring the system remains reliable even if some servers fail.

The instructions provided on this page are designed for advanced users and may need to be adjusted for your specific configuration. Upgrading to primary keys can be time-consuming and resource-intensive. Ensure that enough free disk space is available; depending on your database size and stored data, the process may require up to 2.5 times the space currently used by history tables.
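
To estimate the current footprint of the history tables on MySQL/MariaDB, a query along these lines can help (an illustrative sketch; it assumes the database is named 'zabbix'; on PostgreSQL, pg_total_relation_size() provides the same information):

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
WHERE table_schema = 'zabbix'      -- adjust if your database has a different name
  AND table_name LIKE 'history%';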

Primary keys are used for all tables in new installations since Zabbix 6.0.

There is no automatic database upgrade to primary keys; however, existing installations may be upgraded manually after upgrading Zabbix server to 6.0 or newer.

Since Zabbix 7.0, upgrading tables to primary keys also upgrades tables to use double precision data types.

If you have Zabbix 7.0, tables already use double precision. However, the instructions on this page can still be followed to upgrade tables to primary keys without affecting tables that already use double precision.

If you have Zabbix 6.4 (or earlier), consider upgrading tables to double precision first. For more information, see Upgrading to numeric values of extended range.

Instructions are available for:

  • MySQL
  • PostgreSQL
  • PostgreSQL + TimescaleDB
  • Oracle

Important notes

To perform the database upgrade:

  1. Stop Zabbix server.

Stopping Zabbix server for the duration of the upgrade is strongly recommended. However, if absolutely necessary, you can perform the upgrade while the server is running (only for MySQL, MariaDB, and PostgreSQL without TimescaleDB).

  2. Back up your database.

If using PostgreSQL/TimescaleDB, see known issues for restoring a PostgreSQL/TimescaleDB backup created in Zabbix 7.0.0-7.0.4.

  3. Run the scripts for your database.

  4. Start Zabbix server.

Run the scripts for the server database only. The proxy will not benefit from this upgrade.

If the database uses partitions, contact the DB administrator or Zabbix Support for help.

CSV files can be removed after a successful upgrade to primary keys.

Optionally, Zabbix frontend may be switched to maintenance mode.
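
Put together, the steps above might look like this on a systemd-based host running MySQL (a sketch only; unit names, credentials, and the backup method are assumptions to adapt to your environment):

# 1. Stop Zabbix server (unit name may differ on your distribution)
sudo systemctl stop zabbix-server

# 2. Back up the database ('zabbix' database name and user are assumptions)
mysqldump --single-transaction -uzabbix -p zabbix > /backup/zabbix_before_pk.sql

# 3. Run the upgrade scripts for your database (see the sections below)

# 4. Start Zabbix server again
sudo systemctl start zabbix-server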

MySQL

Export and import must be performed in tmux/screen to ensure that the session isn't dropped.
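
For example, with tmux (the session name is arbitrary):

tmux new -s zabbix_pk_upgrade     # start a persistent session and work inside it
# if the connection drops, reattach with:
tmux attach -t zabbix_pk_upgrade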

See also: Important notes

MySQL 8.0+ with mysqlsh

This method can be used with a running Zabbix server, but stopping the server for the duration of the upgrade is recommended. The MySQL Shell (mysqlsh) must be installed and able to connect to the DB.

  • Log in to MySQL console as root (recommended) or as any user with FILE privileges.

  • Start MySQL with local_infile variable enabled.

  • Rename old tables and create new tables by running history_upgrade_prepare.sql.

mysql -uzabbix -p<password> zabbix < /usr/share/zabbix-sql-scripts/mysql/option-patches/history_upgrade_prepare.sql
  • Export and import data.

Connect via mysqlsh. If using a socket connection, specifying the path might be required.

sudo mysqlsh -uroot -S /run/mysqld/mysqld.sock --no-password -Dzabbix

Run (CSVPATH can be changed as needed):

CSVPATH="/var/lib/mysql-files";

util.exportTable("history_old", CSVPATH + "/history.csv", { dialect: "csv" });
util.importTable(CSVPATH + "/history.csv", {"dialect": "csv", "table": "history" });

util.exportTable("history_uint_old", CSVPATH + "/history_uint.csv", { dialect: "csv" });
util.importTable(CSVPATH + "/history_uint.csv", {"dialect": "csv", "table": "history_uint" });

util.exportTable("history_str_old", CSVPATH + "/history_str.csv", { dialect: "csv" });
util.importTable(CSVPATH + "/history_str.csv", {"dialect": "csv", "table": "history_str" });

util.exportTable("history_log_old", CSVPATH + "/history_log.csv", { dialect: "csv" });
util.importTable(CSVPATH + "/history_log.csv", {"dialect": "csv", "table": "history_log" });

util.exportTable("history_text_old", CSVPATH + "/history_text.csv", { dialect: "csv" });
util.importTable(CSVPATH + "/history_text.csv", {"dialect": "csv", "table": "history_text" });
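
After the import, a quick row-count comparison can confirm that the data was transferred (counts in the new table may be slightly lower if duplicate rows were skipped); repeat for each history table:

SELECT (SELECT COUNT(*) FROM history)     AS new_rows,
       (SELECT COUNT(*) FROM history_old) AS old_rows;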

MariaDB/MySQL 8.0+ without mysqlsh

This upgrade method takes more time and should be used only if an upgrade with mysqlsh is not possible.

Table upgrade
  • Log in to MySQL console as root (recommended) or any user with FILE privileges.

  • Start MySQL with the local_infile variable enabled (see the sketch after these steps).

  • Rename old tables and create new tables by running history_upgrade_prepare.sql:

mysql -uzabbix -p<password> zabbix < /usr/share/zabbix-sql-scripts/mysql/option-patches/history_upgrade_prepare.sql
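
For reference, local_infile can typically be enabled at runtime from the MySQL console (requires SUPER or SYSTEM_VARIABLES_ADMIN privileges):

SET GLOBAL local_infile = 1;

Alternatively, set local_infile = 1 in the [mysqld] section of the server configuration file and restart MySQL; the file path varies by distribution.
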
Migration with stopped server

max_execution_time must be disabled before migrating data to avoid a timeout during migration (on MariaDB, the corresponding variable is max_statement_time).

SET @@max_execution_time=0;

INSERT IGNORE INTO history SELECT * FROM history_old;
INSERT IGNORE INTO history_uint SELECT * FROM history_uint_old;
INSERT IGNORE INTO history_str SELECT * FROM history_str_old;
INSERT IGNORE INTO history_log SELECT * FROM history_log_old;
INSERT IGNORE INTO history_text SELECT * FROM history_text_old;

Follow post-migration instructions to drop the old tables.

Migration with running server

Check which paths are enabled for import/export:

mysql> SELECT @@secure_file_priv;
+-----------------------+
| @@secure_file_priv    |
+-----------------------+
| /var/lib/mysql-files/ |
+-----------------------+

If the secure_file_priv value is a path to a directory, export/import can only be performed for files in that directory. In this case, edit the file paths in the queries accordingly, or set secure_file_priv to an empty string for the duration of the upgrade.

If the secure_file_priv value is an empty string, export/import can be performed from any location.

If the secure_file_priv value is NULL, set it to the path that contains the exported table data ('/var/lib/mysql-files/' in the example above).

For more information, see MySQL documentation.
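
Note that secure_file_priv is read-only at runtime, so changing it requires editing the server configuration and restarting MySQL. A sketch (the configuration file path is an assumption; check your setup):

# e.g. /etc/mysql/mysql.conf.d/mysqld.cnf or /etc/my.cnf
[mysqld]
secure_file_priv = "/var/lib/mysql-files"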

max_execution_time must be disabled before exporting data to avoid timeout during export.

SET @@max_execution_time=0;

SELECT * INTO OUTFILE '/var/lib/mysql-files/history.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_old;
LOAD DATA INFILE '/var/lib/mysql-files/history.csv' IGNORE INTO TABLE history FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

SELECT * INTO OUTFILE '/var/lib/mysql-files/history_uint.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_uint_old;
LOAD DATA INFILE '/var/lib/mysql-files/history_uint.csv' IGNORE INTO TABLE history_uint FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

SELECT * INTO OUTFILE '/var/lib/mysql-files/history_str.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_str_old;
LOAD DATA INFILE '/var/lib/mysql-files/history_str.csv' IGNORE INTO TABLE history_str FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

SELECT * INTO OUTFILE '/var/lib/mysql-files/history_log.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_log_old;
LOAD DATA INFILE '/var/lib/mysql-files/history_log.csv' IGNORE INTO TABLE history_log FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

SELECT * INTO OUTFILE '/var/lib/mysql-files/history_text.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n' FROM history_text_old;
LOAD DATA INFILE '/var/lib/mysql-files/history_text.csv' IGNORE INTO TABLE history_text FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TERMINATED BY '\n';

Follow post-migration instructions to drop the old tables.

PostgreSQL

Export and import must be performed in tmux/screen to ensure that the session isn't dropped. For installations with TimescaleDB, skip this section and proceed to PostgreSQL + TimescaleDB.

See also: Important notes

Table upgrade

  • Rename tables using history_upgrade_prepare.sql:
sudo -u zabbix psql zabbix < /usr/share/zabbix-sql-scripts/postgresql/option-patches/history_upgrade_prepare.sql

Migration with stopped server

  • Insert the data from the old tables directly into the new tables, ignoring duplicates:
INSERT INTO history SELECT * FROM history_old ON CONFLICT (itemid,clock,ns) DO NOTHING;

INSERT INTO history_uint SELECT * FROM history_uint_old ON CONFLICT (itemid,clock,ns) DO NOTHING;

INSERT INTO history_str SELECT * FROM history_str_old ON CONFLICT (itemid,clock,ns) DO NOTHING;

INSERT INTO history_log SELECT * FROM history_log_old ON CONFLICT (itemid,clock,ns) DO NOTHING;

INSERT INTO history_text SELECT * FROM history_text_old ON CONFLICT (itemid,clock,ns) DO NOTHING;

See tips for improving INSERT performance: PostgreSQL: Bulk Loading Huge Amounts of Data, Checkpoint Distance and Amount of WAL.
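
For instance, one session-level tweak often suggested in this context (an illustration, not part of the official procedure; verify it against the linked articles and your durability requirements) is to disable synchronous commits while the INSERTs run:

-- affects only the current session; speeds up bulk INSERTs at the cost of
-- possibly losing the last commits if the server crashes mid-migration
SET synchronous_commit = off;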

Migration with running server

  • Export current history, import it to the temp table, then insert the data into new tables while ignoring duplicates:
\copy history_old TO '/tmp/history.csv' DELIMITER ',' CSV
CREATE TEMP TABLE temp_history (
    itemid                   bigint                                    NOT NULL,
    clock                    integer          DEFAULT '0'              NOT NULL,
    value                    DOUBLE PRECISION DEFAULT '0.0000'         NOT NULL,
    ns                       integer          DEFAULT '0'              NOT NULL
);
\copy temp_history FROM '/tmp/history.csv' DELIMITER ',' CSV
INSERT INTO history SELECT * FROM temp_history ON CONFLICT (itemid,clock,ns) DO NOTHING;

\copy history_uint_old TO '/tmp/history_uint.csv' DELIMITER ',' CSV
CREATE TEMP TABLE temp_history_uint (
    itemid                   bigint                                    NOT NULL,
    clock                    integer         DEFAULT '0'               NOT NULL,
    value                    numeric(20)     DEFAULT '0'               NOT NULL,
    ns                       integer         DEFAULT '0'               NOT NULL
);
\copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV
INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;

\copy history_str_old TO '/tmp/history_str.csv' DELIMITER ',' CSV
CREATE TEMP TABLE temp_history_str (
    itemid                   bigint                                    NOT NULL,
    clock                    integer         DEFAULT '0'               NOT NULL,
    value                    varchar(255)    DEFAULT ''                NOT NULL,
    ns                       integer         DEFAULT '0'               NOT NULL
);
\copy temp_history_str FROM '/tmp/history_str.csv' DELIMITER ',' CSV
INSERT INTO history_str (itemid,clock,value,ns) SELECT * FROM temp_history_str ON CONFLICT (itemid,clock,ns) DO NOTHING;

\copy history_log_old TO '/tmp/history_log.csv' DELIMITER ',' CSV
CREATE TEMP TABLE temp_history_log (
    itemid                   bigint                                    NOT NULL,
    clock                    integer         DEFAULT '0'               NOT NULL,
    timestamp                integer         DEFAULT '0'               NOT NULL,
    source                   varchar(64)     DEFAULT ''                NOT NULL,
    severity                 integer         DEFAULT '0'               NOT NULL,
    value                    text            DEFAULT ''                NOT NULL,
    logeventid               integer         DEFAULT '0'               NOT NULL,
    ns                       integer         DEFAULT '0'               NOT NULL
);
\copy temp_history_log FROM '/tmp/history_log.csv' DELIMITER ',' CSV
INSERT INTO history_log SELECT * FROM temp_history_log ON CONFLICT (itemid,clock,ns) DO NOTHING;

\copy history_text_old TO '/tmp/history_text.csv' DELIMITER ',' CSV
CREATE TEMP TABLE temp_history_text (
    itemid                   bigint                                    NOT NULL,
    clock                    integer         DEFAULT '0'               NOT NULL,
    value                    text            DEFAULT ''                NOT NULL,
    ns                       integer         DEFAULT '0'               NOT NULL
);
\copy temp_history_text FROM '/tmp/history_text.csv' DELIMITER ',' CSV
INSERT INTO history_text SELECT * FROM temp_history_text ON CONFLICT (itemid,clock,ns) DO NOTHING;
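
As noted in Important notes, the exported CSV files can be removed once the upgrade has been verified:

rm /tmp/history.csv /tmp/history_uint.csv /tmp/history_str.csv /tmp/history_log.csv /tmp/history_text.csv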

PostgreSQL + TimescaleDB

Export and import must be performed in tmux/screen to ensure that the session isn't dropped. Zabbix server should be down during the upgrade.

See also: Important notes

  • Rename tables using history_upgrade_prepare.sql.
    • If compression is enabled (the default; a query to check compression status is shown after these steps), run the script from /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression:

      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_upgrade_prepare.sql | sudo -u zabbix psql zabbix
    • If compression is disabled, run the script from /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression:

      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/history_upgrade_prepare.sql | sudo -u zabbix psql zabbix
  • Run TimescaleDB hypertable migration scripts based on compression settings:
    • If compression is enabled (the default), run scripts from /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression:

      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_upgrade.sql | sudo -u zabbix psql zabbix
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_upgrade_uint.sql | sudo -u zabbix psql zabbix
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_upgrade_log.sql | sudo -u zabbix psql zabbix
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_upgrade_str.sql | sudo -u zabbix psql zabbix
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_upgrade_text.sql | sudo -u zabbix psql zabbix
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/trends_upgrade.sql | sudo -u zabbix psql zabbix
    • If compression is disabled, run scripts from /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression:

      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/history_upgrade.sql | sudo -u zabbix psql zabbix
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/history_upgrade_uint.sql | sudo -u zabbix psql zabbix
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/history_upgrade_log.sql | sudo -u zabbix psql zabbix
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/history_upgrade_str.sql | sudo -u zabbix psql zabbix
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/history_upgrade_text.sql | sudo -u zabbix psql zabbix
      cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/trends_upgrade.sql | sudo -u zabbix psql zabbix
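
If you are unsure whether compression is enabled, a query along these lines can be used to check (an assumption: this informational view is available in TimescaleDB 2.x; older versions use different view names):

SELECT hypertable_name, compression_enabled FROM timescaledb_information.hypertables;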

See also: Tips for improving INSERT performance.

Oracle

Support for Oracle DB has been deprecated since Zabbix 7.0.

Export and import must be performed in tmux/screen to ensure that the session isn't dropped. Zabbix server should be down during the upgrade.

See also: Important notes

Table upgrade

See Oracle Data Pump documentation for performance tips.

  • Rename tables using history_pk_prepare.sql.
cd /usr/share/zabbix/zabbix-sql-scripts/database/oracle
sqlplus zabbix/password@oracle_host/service
sqlplus> @history_pk_prepare.sql

Batch migration of history tables

  • Prepare directories for Data Pump.

Data Pump must have read and write permissions to these directories.

Example:

mkdir -pv /export/history
chown -R oracle:oracle /export
  • Create a directory object and grant read and write permissions on it to the user used for Zabbix authentication ('zabbix' in the example below). Under the sysdba role, run:
create directory history as '/export/history';
grant read,write on directory history to zabbix;
  • Export tables. Replace N with the desired thread count.
expdp zabbix/password@oracle_host/service \
    DIRECTORY=history \
    TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
    PARALLEL=N
  • Import tables. Replace N with the desired thread count.
impdp zabbix/password@oracle_host/service \
    DIRECTORY=history \
    TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
    REMAP_TABLE=history_old:history,history_uint_old:history_uint,history_str_old:history_str,history_log_old:history_log,history_text_old:history_text \
    data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
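
As a quick sanity check after the import, row counts in the new tables can be compared with the old ones from sqlplus (an illustrative query; repeat for each history table):

select (select count(*) from history) as new_rows,
       (select count(*) from history_old) as old_rows
from dual;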

Individual migration of history tables

  • Prepare directories for Data Pump for each history table. Data Pump must have read and write permissions to these directories.

Example:

mkdir -pv /export/history /export/history_uint /export/history_str /export/history_log /export/history_text
chown -R oracle:oracle /export
  • Create a directory object for each table and grant read and write permissions on it to the user used for Zabbix authentication ('zabbix' in the example below). Under the sysdba role, run:
create directory history as '/export/history';
grant read,write on directory history to zabbix;

create directory history_uint as '/export/history_uint';
grant read,write on directory history_uint to zabbix;

create directory history_str as '/export/history_str';
grant read,write on directory history_str to zabbix;

create directory history_log as '/export/history_log';
grant read,write on directory history_log to zabbix;

create directory history_text as '/export/history_text';
grant read,write on directory history_text to zabbix;
  • Export and import each table. Replace N with the desired thread count.
expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history TABLES=history_old PARALLEL=N

impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history TABLES=history_old REMAP_TABLE=history_old:history data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_uint TABLES=history_uint_old PARALLEL=N

impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_uint TABLES=history_uint_old REMAP_TABLE=history_uint_old:history_uint data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_str TABLES=history_str_old PARALLEL=N

impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_str TABLES=history_str_old REMAP_TABLE=history_str_old:history_str data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_log TABLES=history_log_old PARALLEL=N

impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_log TABLES=history_log_old REMAP_TABLE=history_log_old:history_log data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

expdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_text TABLES=history_text_old PARALLEL=N

impdp zabbix/password@oracle_host:1521/xe DIRECTORY=history_text TABLES=history_text_old REMAP_TABLE=history_text_old:history_text data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only

Post-migration

For all databases, once the migration is completed, do the following:

  • Verify that everything works as expected (a quick sanity check is sketched after this list).

  • Drop old tables:

DROP TABLE history_old;
DROP TABLE history_uint_old;
DROP TABLE history_str_old;
DROP TABLE history_log_old;
DROP TABLE history_text_old;
  • For TimescaleDB, also drop the following old table:
DROP TABLE trends_old;
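
A minimal sanity check, to be run before the old tables are dropped (plain SQL that should work on any of the supported databases; repeat for each table pair): the counts should match, or the new table may contain slightly fewer rows if duplicates were skipped during migration.

SELECT COUNT(*) FROM history;
SELECT COUNT(*) FROM history_old;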
