The following instructions show you how to move your data from a self-hosted PostgreSQL database to a Timescale instance using pg_dump and psql. To avoid losing data, take all applications that connect to the database offline for the duration of the migration. The migration takes time proportional to the amount of data stored in your database.

This migration method moves your data, but does not enable Timescale features like hypertables, data compression, or retention. You must enable these manually after the migration, while your application is still offline.
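For example, once the data is loaded, you can convert a regular table into a hypertable. The following is a minimal sketch, assuming a hypothetical conditions table with a time column; migrate_data => true moves the existing rows into the hypertable's chunks:

psql "$TARGET" -c "SELECT create_hypertable('conditions', 'time', migrate_data => true);"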

We do not recommend using this migration method to migrate more than 100 GB of data, primarily because of the amount of downtime it implies for your application. Instead, use the dual-write and backfill low-downtime migration solution. Should you nonetheless wish to migrate more than 400 GB of data with this method, open a support request to ensure that enough disk is pre-provisioned on your Timescale instance.

You can open a support request directly from the Timescale console, or by email to [email protected].
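To estimate how much data the migration will move, you can check the size of the source database before you begin (using the $SOURCE connection string defined in the note below):

psql "$SOURCE" -c "SELECT pg_size_pretty(pg_database_size(current_database()));"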

Note

In the context of migrations, your existing production database is referred to as the "source" database, while the new Timescale database that you intend to migrate your data to is referred to as the "target" database.

For minimal downtime, the pg_dump and psql commands should be run from a machine with a low-latency, high-throughput link to the database that they are connected to. As Timescale instances run in the Amazon cloud, use an AWS EC2 instance in the same region as your Timescale instance.
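To get a rough idea of the network round trip from a candidate machine, you can time a trivial query against each database, for example:

time psql "$SOURCE" -c "SELECT 1;"
time psql "$TARGET" -c "SELECT 1;"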

Before you begin, ensure that you have:

  • Installed the PostgreSQL client tools (pg_dump and psql) on the machine that you will perform the migration from.
  • Created a database service in Timescale.
  • Checked that all PostgreSQL extensions you use are available on Timescale. For more information, see the list of compatible extensions.
  • Checked that the version of PostgreSQL in your target database is greater than or equal to that of the source database. The queries after this list show one way to run these last two checks.
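For example, you can compare server versions, and list the extensions installed in the source database to compare against the list of compatible extensions:

psql "$SOURCE" -c "SELECT version();"
psql "$TARGET" -c "SELECT version();"
psql "$SOURCE" -c "SELECT extname, extversion FROM pg_extension;"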

Dump the roles from the source database (only necessary if you're using roles other than the default postgres role in your database):

pg_dumpall -d "$SOURCE" \
--quote-all-identifiers \
--roles-only \
--file=roles.sql
Note

For the sake of convenience, connection strings to the source and target databases are referred to as $SOURCE and $TARGET throughout this guide. These can be set in your shell, for example:

export SOURCE=postgres://<user>:<password>@<source host>:<source port>/<dbname>
export TARGET=postgres://<user>:<password>@<target host>:<target port>/<dbname>
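For illustration only, a filled-in pair might look like the following; the user names, passwords, hosts, ports, and database names here are hypothetical:

export SOURCE=postgres://postgres:[email protected]:5432/production_db
export TARGET=postgres://tsdbadmin:[email protected]:5432/tsdb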

Dump the source database schema and data:

pg_dump -d "$SOURCE" \
--format=plain \
--quote-all-identifiers \
--no-tablespaces \
--no-owner \
--no-privileges \
--file=dump.sql
Note

It is possible to dump using multiple connections to the source database, which may dramatically reduce the time taken. For more information, see dumping with concurrency and restoring with concurrency.
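As a rough sketch, a concurrent dump uses the directory format with a number of jobs, and the resulting dump is loaded with pg_restore rather than psql; the job count of eight here is illustrative:

pg_dump -d "$SOURCE" \
--format=directory \
--jobs=8 \
--quote-all-identifiers \
--no-tablespaces \
--no-owner \
--no-privileges \
--file=dump_dir

pg_restore -d "$TARGET" \
--jobs=8 \
--no-owner \
--no-privileges \
dump_dir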

The following is a brief explanation of the flags used:

  • --no-tablespaces is required because Timescale does not support tablespaces other than the default. This is a known limitation.

  • --no-owner is required because Timescale's tsdbadmin user is not a superuser and cannot assign ownership in all cases. This flag means that everything is owned by the user used to connect to the target, regardless of ownership in the source. This is a known limitation.

  • --no-privileges is required because Timescale's tsdbadmin user is not a superuser and cannot assign privileges in all cases. This flag means that privileges assigned to other users must be reassigned in the target database as a manual clean-up task (see the example after this list). This is a known limitation.
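As an illustration of that clean-up, you can re-grant privileges on the target once the restore is complete. The role and table names here are hypothetical:

psql "$TARGET" -c "GRANT SELECT, INSERT ON TABLE metrics TO app_readwrite;"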

Load the dumped roles and data into the target database:

psql "$TARGET" -v ON_ERROR_STOP=1 --echo-errors \
-f roles.sql \
-f dump.sql

Update the table statistics by running ANALYZE on all data:

psql "$TARGET" -c "ANALYZE;"
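On large databases, you may prefer to run the analyze step over several parallel connections using the vacuumdb client tool; a sketch, with an illustrative job count:

vacuumdb -d "$TARGET" --analyze-only --jobs=8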

Verify that the data has been successfully restored by connecting to the target database and querying the restored data.
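One simple spot check is to compare row counts for a few important tables between the source and the target. The table name here is hypothetical:

psql "$SOURCE" -c "SELECT count(*) FROM metrics;"
psql "$TARGET" -c "SELECT count(*) FROM metrics;"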

Once you have verified that the data is present and returns the results that you expect, you can reconfigure your application to use the target database and start it.
