If you are running Timescale on your own hardware, you can save on storage costs by moving chunks between tablespaces. Moving older chunks to cheaper, slower storage frees up faster, more expensive storage for frequently accessed data. Moving infrequently accessed chunks can also improve performance, because it isolates historical data from the continual read-and-write workload on more recent data.
Note
Using tablespaces is one way to manage data storage costs with Timescale. You can also use compression and data retention to reduce your storage requirements.
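For instance, a minimal sketch of both approaches, assuming the conditions hypertable used later in this guide and illustrative policy intervals:

-- Drop chunks once they are older than one year (interval is illustrative)
SELECT add_retention_policy('conditions', INTERVAL '1 year');

-- Enable compression, then compress chunks older than seven days (interval is illustrative)
ALTER TABLE conditions SET (timescaledb.compress);
SELECT add_compression_policy('conditions', INTERVAL '7 days');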
To move chunks to a new tablespace, you first need to create the new tablespace and set its storage mount point. You can then use the move_chunk API call to move individual chunks from the default tablespace to the new tablespace. The move_chunk command also allows you to move indexes belonging to those chunks to an appropriate tablespace. Additionally, move_chunk allows you to reorder the chunk during the migration. This can make your queries faster, and works in a similar way to the reorder_chunk command.
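For comparison, a minimal sketch of reorder_chunk, reusing the chunk and time-index names from the move_chunk examples below; reorder_chunk rewrites the chunk in index order but does not move it between tablespaces:

SELECT reorder_chunk(
  '_timescaledb_internal._hyper_1_4_chunk',
  '_timescaledb_internal._hyper_1_4_chunk_netdata_time_idx'
);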
Note
You must be logged in as a superuser, such as the postgres user, to use the move_chunk() API call.
1. Create a new tablespace. In this example, the tablespace is called history, it is owned by the postgres superuser, and the mount point is /mnt/history:

SELECT show_chunks('conditions', older_than => INTERVAL '2 days');

CREATE TABLESPACE history
  OWNER postgres
  LOCATION '/mnt/history';

2. List chunks that you want to move. In this example, chunks that contain data that is older than two days:

3. Move a chunk and its index to the new tablespace. You can also reorder the data in this step. In this example, the chunk called _timescaledb_internal._hyper_1_4_chunk is moved to the history tablespace, and is reordered based on its time index:

SELECT move_chunk(
  chunk => '_timescaledb_internal._hyper_1_4_chunk',
  destination_tablespace => 'history',
  index_destination_tablespace => 'history',
  reorder_index => '_timescaledb_internal._hyper_1_4_chunk_netdata_time_idx',
  verbose => TRUE
);

You can verify that the chunk now resides in the correct tablespace by querying pg_tables to list all of the chunks on the tablespace:

SELECT tablename FROM pg_tables
WHERE tablespace = 'history' AND tablename LIKE '_hyper_%_%_chunk';

You can also verify that the index is in the correct location:
SELECT indexname FROM pg_indexes WHERE tablespace = 'history';
To move several chunks at once, select the chunks you want to move by using FROM show_chunks(...). For example, to move chunks containing data between 1 and 3 weeks old in a hypertable named example:

SELECT move_chunk(chunk => i, destination_tablespace => '<TABLESPACE>')
FROM show_chunks('example', now() - INTERVAL '1 week', now() - INTERVAL '3 weeks') i;
After moving a chunk to a slower tablespace, you can move it back to the default, faster tablespace:
SELECT move_chunk(
  chunk => '_timescaledb_internal._hyper_1_4_chunk',
  destination_tablespace => 'pg_default',
  index_destination_tablespace => 'pg_default',
  reorder_index => '_timescaledb_internal._hyper_1_4_chunk_netdata_time_idx'
);
You can move a data chunk to the slower tablespace, but keep the chunk's indexes on the default, faster tablespace:
SELECT move_chunk(
  chunk => '_timescaledb_internal._hyper_1_4_chunk',
  destination_tablespace => 'history',
  index_destination_tablespace => 'pg_default',
  reorder_index => '_timescaledb_internal._hyper_1_4_chunk_netdata_time_idx'
);
You can also keep the data in pg_default but move the index to history.
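For example, the same call as above with the two tablespace arguments swapped keeps the data in pg_default and moves only the index:

SELECT move_chunk(
  chunk => '_timescaledb_internal._hyper_1_4_chunk',
  destination_tablespace => 'pg_default',
  index_destination_tablespace => 'history',
  reorder_index => '_timescaledb_internal._hyper_1_4_chunk_netdata_time_idx'
);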
Alternatively, you can set up a third tablespace called history_indexes, and move the data to history and the indexes to history_indexes.
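A sketch of that layout, assuming a /mnt/history_indexes mount point for the additional tablespace:

CREATE TABLESPACE history_indexes
  OWNER postgres
  LOCATION '/mnt/history_indexes';  -- assumed mount point

SELECT move_chunk(
  chunk => '_timescaledb_internal._hyper_1_4_chunk',
  destination_tablespace => 'history',
  index_destination_tablespace => 'history_indexes',
  reorder_index => '_timescaledb_internal._hyper_1_4_chunk_netdata_time_idx'
);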
In Timescale 2.0 and later, you can use move_chunk with the job scheduler framework. For more information, see the user-defined actions section.
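As a sketch of how that can look, the following user-defined action moves chunks older than a configurable lag to a configurable tablespace. The procedure name, config keys, and schedule are illustrative, not part of the Timescale API:

-- Hypothetical custom action: move chunks older than a configurable lag
-- to a configurable tablespace. Procedure name and config keys are illustrative.
CREATE OR REPLACE PROCEDURE move_old_chunks(job_id INT, config JSONB)
LANGUAGE PLPGSQL AS
$$
DECLARE
  ht            REGCLASS := (config ->> 'hypertable')::REGCLASS;
  lag_interval  INTERVAL := (config ->> 'lag')::INTERVAL;
  destination   NAME     := (config ->> 'tablespace');
  current_chunk REGCLASS;
BEGIN
  IF ht IS NULL OR lag_interval IS NULL OR destination IS NULL THEN
    RAISE EXCEPTION 'config must contain hypertable, lag, and tablespace';
  END IF;

  -- Move every chunk older than the lag, along with its indexes
  FOR current_chunk IN SELECT c FROM show_chunks(ht, older_than => lag_interval) AS c
  LOOP
    RAISE NOTICE 'Moving chunk % to tablespace %', current_chunk, destination;
    PERFORM move_chunk(
      chunk => current_chunk,
      destination_tablespace => destination,
      index_destination_tablespace => destination
    );
  END LOOP;
END
$$;

-- Schedule the action to run once a day (schedule and config values are illustrative)
SELECT add_job(
  'move_old_chunks',
  schedule_interval => INTERVAL '1 day',
  config => '{"hypertable": "conditions", "lag": "2 days", "tablespace": "history"}'
);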