You can back up and restore an entire database or individual hypertables using the native PostgreSQL pg_dump and pg_restore commands. This works even for compressed hypertables, without having to decompress the chunks before you begin.
Upgrades between different versions of TimescaleDB can be done in place; you don't need to back up and restore your data. See the upgrading instructions.
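For example, an in-place upgrade is usually a single statement run at the start of a fresh psql session (a minimal sketch; follow the upgrading instructions for the exact steps for your versions):
psql -X -d tsdb -c "ALTER EXTENSION timescaledb UPDATE;"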
Warning
If you are using this pg_dump backup method regularly, make sure you keep track of which versions of PostgreSQL and TimescaleDB you are running. For more information, see "Versions are mismatched when dumping and restoring a database" in the Troubleshooting section.
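One way to record those versions at backup time (a sketch; adjust the connection options to match your setup):
pg_dump --version
psql -d tsdb -c "SELECT version();"
psql -d tsdb -c "SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';"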
You can perform a backup using the pg_dump command at the command prompt. For example, to back up a database named tsdb:
pg_dump -Fc -f tsdb.bak tsdb
To back up a database named tsdb hosted on a remote server:
pg_dump -h <REMOTE_HOST> -p 55555 -U tsdbadmin -Fc -f tsdb.bak tsdb
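You can also pass the connection details as a single URI instead of separate flags (an equivalent sketch, using the same placeholders as above):
pg_dump -Fc -f tsdb.bak "postgres://tsdbadmin@<REMOTE_HOST>:55555/tsdb"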
You might see some errors when running pg_dump. To learn if they can be safely ignored, see the troubleshooting section.
Warning
Do not use the pg_dump command to back up individual hypertables. Dumps created using this method lack the necessary information to correctly restore the hypertable from backup.
When you need to restore data from a backup, you can use psql to create a new database and restore the data.
In psql, create a new database to restore to, and connect to it:
CREATE DATABASE tsdb;
\c tsdb
CREATE EXTENSION IF NOT EXISTS timescaledb;
Run timescaledb_pre_restore to put your database in the right state for restoring:
SELECT timescaledb_pre_restore();
Restore the database:
\! pg_restore -Fc -d tsdb tsdb.bak
Run timescaledb_post_restore to return your database to normal operations:
SELECT timescaledb_post_restore();
Warning
Do not use the pg_restore command with the -j option. This option does not correctly restore the TimescaleDB catalogs.
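Putting the restore steps together, a non-interactive version looks roughly like this (a sketch that connects to the default postgres database to create the new one, using the file and database names from the example above):
psql -d postgres -c "CREATE DATABASE tsdb;"
psql -d tsdb -c "CREATE EXTENSION IF NOT EXISTS timescaledb;"
psql -d tsdb -c "SELECT timescaledb_pre_restore();"
pg_restore -Fc -d tsdb tsdb.bak
psql -d tsdb -c "SELECT timescaledb_post_restore();"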
The pg_dump command provides flags that allow you to specify tables or schemas to back up. However, using these flags means that the dump lacks the information TimescaleDB requires to understand the relationship between a hypertable and its chunks. Even if you explicitly specify both the hypertable and all of its constituent chunks, the dump would still not contain all the information it needs to recreate the hypertable on restore.
You can back up individual hypertables by backing up the entire database, and then excluding the tables you do not want to back up. You can also use this method to back up individual plain tables that are not hypertables.
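For example, to dump everything except a table you don't need, you can use the --exclude-table flag (a sketch; unwanted_table is a placeholder name):
pg_dump -Fc -f tsdb.bak --exclude-table unwanted_table tsdb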
At the command prompt, back up the hypertable schema:
pg_dump -s -d old_db --table conditions -N _timescaledb_internal | \
grep -v _timescaledb_internal > schema.sql
Back up the hypertable data to a CSV file:
psql -d old_db \
-c "\COPY (SELECT * FROM conditions) TO data.csv DELIMITER ',' CSV"
At the command prompt, restore the schema:
psql -d new_db < schema.sql
Recreate the hypertables:
psql -d new_db -c "SELECT create_hypertable('conditions', 'time')"
Restore the data:
psql -d new_db -c "\COPY conditions FROM data.csv CSV"
The standard COPY command in PostgreSQL is single-threaded. If you have a lot of data, you can speed up the copy using the parallel importer instead.
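A sketch of what an invocation of the parallel importer, timescaledb-parallel-copy, might look like for the example above (the flag names here are assumptions; confirm them with timescaledb-parallel-copy --help for your installed version):
timescaledb-parallel-copy --db-name new_db --table conditions --file data.csv --workers 4 --copy-options "CSV"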
When you create the new hypertable with the create_hypertable command, you do not need to use the same parameters as existed in the old database. This can provide a good opportunity for you to re-organize your hypertables if you need to. For example, you can change the partitioning key, the number of partitions, or the chunk interval sizes.
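For example, to recreate the hypertable with a different chunk interval and an additional space partition before restoring the data (a sketch; the device_id column and the values shown are illustrative, and the available create_hypertable options depend on your TimescaleDB version):
psql -d new_db -c "SELECT create_hypertable('conditions', 'time', chunk_time_interval => INTERVAL '1 day', partitioning_column => 'device_id', number_partitions => 4)"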
On a self-hosted TimescaleDB instance with postgres superuser access, you can use pg_dumpall to take a complete dump of all PostgreSQL databases in a cluster, including global objects that are common to all databases, namely database roles, tablespaces, and privilege grants. For more information about how to use the pg_dumpall utility, see the PostgreSQL documentation.
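For example (a sketch; the output file names are placeholders, and pg_dumpall writes plain SQL rather than the custom-format archives produced by pg_dump -Fc):
pg_dumpall -U postgres -f cluster.sql
To dump only the global objects, such as roles and tablespaces:
pg_dumpall -U postgres --globals-only -f globals.sql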