Timescale automatically supports INSERTs into compressed chunks. However, if you need to insert a large amount of data, for example as part of a bulk backfilling operation, decompress the chunk first. Inserting data into a compressed chunk is more computationally expensive than inserting it into an uncompressed chunk, and that cost adds up quickly over many rows.
Compressing your data reduces the amount of storage space used by your Timescale instance. However, you should always leave some additional storage capacity. This gives you the flexibility to decompress chunks when necessary, for actions such as bulk inserts.
This section describes commands to use for decompressing chunks. You can filter by time to select the chunks you want to decompress. To learn how to backfill data, see the backfilling section.
There are several methods for selecting chunks and decompressing them.
Before decompressing chunks, stop any compression policy on the hypertable you are decompressing. When you finish backfilling or updating data, turn the policy back on. The database automatically recompresses your chunks in the next scheduled job.
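One way to pause and resume a compression policy is with `alter_job`. The sketch below assumes a hypothetical hypertable named `metrics`; the job ID it uses is a placeholder that you replace with the value returned by the first query:

```sql
-- Find the job that runs the compression policy
-- (the hypertable name 'metrics' is an assumption)
SELECT job_id
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression'
  AND hypertable_name = 'metrics';

-- Pause the policy before backfilling
-- (replace 1000 with the job_id returned above)
SELECT alter_job(1000, scheduled => false);

-- ... backfill or update your data ...

-- Resume the policy; chunks are recompressed in the next scheduled job
SELECT alter_job(1000, scheduled => true);
```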
To decompress a single chunk by name, run this command:
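A minimal example, assuming the chunk lives in the default `_timescaledb_internal` schema:

```sql
SELECT decompress_chunk('_timescaledb_internal.<chunk_name>');
```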
`<chunk_name>` is the name of the chunk you want to decompress.
To decompress a set of chunks based on a time range, you can use the output of `show_chunks` to decompress each one:

```sql
SELECT decompress_chunk(c, true)
FROM show_chunks('table_name', older_than, newer_than) c;
```
For more information about the `decompress_chunk` function, see the API reference.
If you want to use more precise matching constraints, for example space partitioning, you can construct a command like this:
```sql
SELECT tableoid::regclass FROM metrics
WHERE time = '2000-01-01' AND device_id = 1
GROUP BY tableoid;

                 tableoid
------------------------------------------
 _timescaledb_internal._hyper_72_37_chunk
```
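The chunk name returned by a query like the one above can then be passed directly to `decompress_chunk`. For example, using the chunk name from the sample output (yours will differ):

```sql
SELECT decompress_chunk('_timescaledb_internal._hyper_72_37_chunk');
```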