Timescale automatically supports INSERTs into compressed chunks. But if you
need to insert a lot of data, for example as part of a bulk backfilling
operation, you should first decompress the chunk. Inserting data into a
compressed chunk is more computationally expensive than inserting data into an
uncompressed chunk. This adds up over a lot of rows.
Important
Compressing your data reduces the amount of storage space your Timescale instance uses, but you should always leave some additional storage capacity. This gives you the flexibility to decompress chunks when necessary, for actions such as bulk inserts.
This section describes commands to use for decompressing chunks. You can filter by time to select the chunks you want to decompress.
Before decompressing chunks, stop any compression policy on the hypertable you are decompressing. When you finish backfilling or updating data, turn the policy back on. The database automatically recompresses your
chunks in the next scheduled job. For more information on how to stop and run compression policies with the alter_job()
function, see the API reference.
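For example, here is a minimal sketch of pausing a compression policy before a backfill and re-enabling it afterwards with alter_job(). The hypertable name metrics and the job_id value are illustrative; look up the actual job_id for your hypertable first:
-- Find the compression policy job for the hypertable (assumes a hypertable named 'metrics')
SELECT job_id FROM timescaledb_information.jobs
  WHERE proc_name = 'policy_compression' AND hypertable_name = 'metrics';
-- Pause the policy before backfilling (replace 1000 with the job_id returned above)
SELECT alter_job(1000, scheduled => false);
-- ... backfill or update your data ...
-- Re-enable the policy; the next scheduled run recompresses the chunks
SELECT alter_job(1000, scheduled => true);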
There are several methods for selecting chunks and decompressing them.
To decompress a single chunk by name, run this command:
SELECT decompress_chunk('_timescaledb_internal.<chunk_name>');
where <chunk_name> is the name of the chunk you want to decompress.
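For example, to decompress a chunk named _hyper_1_2_chunk (a hypothetical chunk name; use the name returned by show_chunks or by the chunk lookup query later in this section):
SELECT decompress_chunk('_timescaledb_internal._hyper_1_2_chunk');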
To decompress a set of chunks based on a time range, you can use the output of
show_chunks
to decompress each one:
SELECT decompress_chunk(c, true)
  FROM show_chunks('table_name', older_than, newer_than) c;
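For example, a sketch that decompresses every chunk of a hypothetical metrics hypertable containing data from January 2023 (the table name and time range are illustrative). The second argument, true, skips chunks that are already uncompressed instead of raising an error:
SELECT decompress_chunk(c, true)
  FROM show_chunks('metrics', older_than => DATE '2023-02-01', newer_than => DATE '2023-01-01') c;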
For more information about the decompress_chunk function, see the decompress_chunk API reference.
If you want to use more precise matching constraints, for example space partitioning, you can construct a command like this:
SELECT tableoid::regclass FROM metrics
  WHERE time = '2000-01-01' AND device_id = 1
  GROUP BY tableoid;

                 tableoid
------------------------------------------
 _timescaledb_internal._hyper_72_37_chunk
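You can then pass the chunk name returned by this query to decompress_chunk. For example, using the chunk from the sample output above:
SELECT decompress_chunk('_timescaledb_internal._hyper_72_37_chunk');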