You can back up and restore an entire database or individual hypertables using the native PostgreSQL pg_dump and pg_restore commands. This works even for compressed hypertables; you don't need to decompress the chunks before you begin.

Timescale is a fully managed service with automatic backup and restore, high availability with replication, seamless scaling and resizing, and much more. You can try Timescale free for thirty days.

Upgrades between different versions of TimescaleDB can be done in place; you don't need to back up and restore your data. See the upgrading instructions.


If you are using this pg_dump backup method regularly, make sure you keep track of which versions of PostgreSQL and TimescaleDB you are running. For more information, see "Versions are mismatched when dumping and restoring a database" in the Troubleshooting section.
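For example, you can record the server and extension versions alongside each dump so they can be matched on restore. This is a sketch: the database name `tsdb` is the placeholder used elsewhere on this page, and the catalog query is standard PostgreSQL.

```shell
# Record the client/server tool version and the installed TimescaleDB
# extension version next to the backup file.
pg_dump --version
psql -d tsdb -t -c \
  "SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';"
```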

You can perform a backup using the pg_dump command at the command prompt. For example, to back up a database named tsdb:

pg_dump -Fc -f tsdb.bak tsdb

To back up a database named tsdb hosted on a remote server:

pg_dump -h <REMOTE_HOST> -p 55555 -U tsdbadmin -Fc -f tsdb.bak tsdb

You might see some errors when running pg_dump. To learn if they can be safely ignored, see the troubleshooting section.


Do not use the pg_dump command to back up individual hypertables. Dumps created using this method lack the necessary information to correctly restore the hypertable from backup.

When you need to restore data from a backup, you can use psql to create a new database and restore the data.

  1. In psql, create a new database to restore to, and connect to it:

    CREATE DATABASE tsdb;
    \c tsdb
  2. Run timescaledb_pre_restore to put your database in the right state for restoring:

    SELECT timescaledb_pre_restore();
  3. Restore the database:

    \! pg_restore -Fc -d tsdb tsdb.bak
  4. Run timescaledb_post_restore to return your database to normal operations:

    SELECT timescaledb_post_restore();
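Taken together, the restore sequence above can also be scripted from the shell. This is a sketch using the example names from this page (`tsdb`, `tsdb.bak`); it assumes the TimescaleDB extension is available on the target server.

```shell
# Create the target database, put TimescaleDB into restore mode,
# load the dump, then return the database to normal operation.
createdb tsdb
psql -d tsdb -c "SELECT timescaledb_pre_restore();"
pg_restore -Fc -d tsdb tsdb.bak
psql -d tsdb -c "SELECT timescaledb_post_restore();"
```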

Do not use the pg_restore command with the -j option. This option does not correctly restore the TimescaleDB catalogs.

The pg_dump command provides flags that allow you to specify tables or schemas to back up. However, using these flags means that the dump lacks information that TimescaleDB requires to understand the relationship between hypertables and their chunks. Even if you explicitly specify both the hypertable and all of its constituent chunks, the dump would still not contain all the information it needs to recreate the hypertable on restore.



You can back up individual hypertables by backing up the entire database, and then excluding the tables you do not want. You can also use this method to back up individual plain tables that are not hypertables.

  1. At the command prompt, back up the hypertable schema:

    pg_dump -s -d old_db --table conditions -N _timescaledb_internal | \
    grep -v _timescaledb_internal > schema.sql
  2. Back up the hypertable data to a CSV file:

    psql -d old_db \
    -c "\COPY (SELECT * FROM conditions) TO data.csv DELIMITER ',' CSV"
To restore the hypertable into a new database:

  1. At the command prompt, restore the schema:

    psql -d new_db < schema.sql
  2. Recreate the hypertables:

    psql -d new_db -c "SELECT create_hypertable('conditions', 'time')"
  3. Restore the data:

    psql -d new_db -c "\COPY conditions FROM data.csv CSV"

    The standard COPY command in PostgreSQL is single threaded. If you have a lot of data, you can speed up the copy using the parallel importer instead.
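As a sketch, a parallel load with the timescaledb-parallel-copy tool might look like the following. The flag names (`--db-name`, `--table`, `--file`, `--workers`, `--copy-options`) are assumptions here; check them against the version you have installed.

```shell
# Hypothetical invocation: load data.csv into the conditions hypertable
# using 4 parallel COPY workers instead of a single-threaded \COPY.
timescaledb-parallel-copy \
  --db-name new_db --table conditions \
  --file data.csv --workers 4 \
  --copy-options "CSV"
```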

When you create the new hypertable with the create_hypertable command, you do not need to use the same parameters as existed in the old database. This can provide a good opportunity for you to re-organize your hypertables if you need to. For example, you can change the partitioning key, the number of partitions, or the chunk interval sizes.
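For instance, a sketch of recreating the hypertable with a different chunk interval. The one-week interval is illustrative, and the `chunk_time_interval` named parameter follows the classic `create_hypertable` API; adjust to the API of your TimescaleDB version.

```shell
# Recreate the hypertable with one-week chunks before restoring the data.
psql -d new_db -c \
  "SELECT create_hypertable('conditions', 'time', chunk_time_interval => INTERVAL '1 week')"
```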

On a self-hosted TimescaleDB instance with PostgreSQL superuser access, you can use pg_dumpall to take a complete dump of all PostgreSQL databases in a cluster, including global objects that are common to all databases: database roles, tablespaces, and privilege grants. For more information about the pg_dumpall utility, see the PostgreSQL documentation.
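A minimal pg_dumpall invocation could look like this; the filename `cluster.sql` is a placeholder.

```shell
# Dump every database in the cluster, plus roles and tablespaces,
# into a single plain-SQL script. Restore it with:
#   psql -U postgres -f cluster.sql postgres
pg_dumpall -U postgres -f cluster.sql
```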

