This section describes common problems you might encounter with compression, and how to resolve them.

ERROR: cannot add column with constraints or defaults to a hypertable that has compression enabled

If you attempt to add a column with constraints or defaults to a hypertable that has compression enabled, you might get this error. To add the column, you need to decompress the data in the hypertable, add the column, and then compress the data.
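
A minimal sketch of that sequence, assuming a hypothetical hypertable named metrics and a hypothetical new column:

-- decompress every compressed chunk (the second argument skips chunks that are already decompressed)
SELECT decompress_chunk(c, true) FROM show_chunks('metrics') c;
-- add the new column with its default
ALTER TABLE metrics ADD COLUMN status text DEFAULT 'ok';
-- recompress the chunks
SELECT compress_chunk(c, true) FROM show_chunks('metrics') c;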

ERROR: tuple decompression limit exceeded by operation

When inserting, updating, or deleting tuples from compressed chunks it might be necessary to decompress tuples. This happens either when you are updating existing tuples or have constraints that need to be verified during insert time. If you happen to trigger a lot of decompression with a single command, you may end up running out of storage space. For this reason, a limit has been put in place on the number of tuples you can decompress for a single command.

The limit can be increased or turned off (set to 0) like so:

-- set limit to a million tuples
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 1000000;
-- disable limit by setting to 0
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
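
If you only want to lift the limit for a single data-modifying statement, you can scope the change to one transaction with SET LOCAL. A sketch, using a hypothetical table and filter:

BEGIN;
-- applies only until this transaction commits or rolls back
SET LOCAL timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
-- hypothetical update that touches many compressed rows
UPDATE metrics SET value = value * 2 WHERE device_id = 42;
COMMIT;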

ERROR: must be owner of hypertable "HYPERTABLE_NAME"

If you attempt to compress or decompress a chunk with a non-privileged user account, you might get this error. To compress or decompress a chunk, your user account must have permissions that allow it to perform CREATE INDEX on the chunk. You can check the permissions of the current user with this command at the psql command prompt:

\du <USERNAME>

To resolve this problem, grant your user account the appropriate privileges with this command:

GRANT ALL PRIVILEGES
ON TABLE <TABLE_NAME>
TO <ROLE_NAME>;
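
For example, assuming a hypothetical hypertable metrics and a hypothetical role analytics_writer:

-- grant full table privileges on the hypertable to the role (hypothetical names)
GRANT ALL PRIVILEGES ON TABLE metrics TO analytics_writer;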

For more information about the GRANT command, see the PostgreSQL documentation.

ERROR: invalid attribute number -6 for _hyper_2_839_chunk
CONTEXT: SQL function "hypertable_local_size" statement 1
PL/pgSQL function hypertable_detailed_size(regclass) line 26 at RETURN QUERY
SQL function "hypertable_size" statement 1
SQL state: XX000

You might see this error if your hypertable indexes have become very large. To resolve the problem, reindex the affected chunk with this command:

REINDEX TABLE _timescaledb_internal._hyper_2_839_chunk;
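
If you do not know which chunk is affected, you can reindex every chunk of the hypertable instead. A sketch, assuming a hypothetical hypertable named metrics:

DO $$
DECLARE
    chunk regclass;
BEGIN
    FOR chunk IN SELECT show_chunks('metrics')
    LOOP
        -- rebuild all indexes on this chunk
        EXECUTE format('REINDEX TABLE %s', chunk);
    END LOOP;
END
$$;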

For more information, see the hypertable documentation.

Your scheduled jobs might stop running for various reasons. On self-hosted TimescaleDB, you can fix this by restarting background workers:

SELECT _timescaledb_functions.start_background_workers();
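
To confirm that your jobs are scheduled and running again, you can query the job information views, for example:

-- list background jobs and their most recent run statistics
SELECT * FROM timescaledb_information.jobs;
SELECT * FROM timescaledb_information.job_stats;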

On Timescale and Managed Service for TimescaleDB, restart background workers by doing one of the following:

  • Run SELECT timescaledb_pre_restore(), followed by SELECT timescaledb_post_restore().
  • Power the service off and on again. This might cause a downtime of a few minutes while the service restores from backup and replays the write-ahead log.

ERROR: temporary file size exceeds temp_file_limit

When you try to compress a chunk, especially if the chunk is very large, you could get this error. Compression operations write data to a new compressed chunk table, and large operations can spill intermediate results into temporary files on disk. The maximum total size of temporary files a session can use is set by the temp_file_limit parameter. You can work around this problem by adjusting the temp_file_limit and maintenance_work_mem parameters.
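
A sketch of raising these limits before retrying the compression. The values are examples only, and temp_file_limit can only be changed at the session level by a superuser; otherwise set it in postgresql.conf or with ALTER SYSTEM:

-- allow up to 20 GB of temporary files in this session (example value)
SET temp_file_limit TO '20GB';
-- give maintenance operations more memory before they spill to disk (example value)
SET maintenance_work_mem TO '2GB';
-- retry compressing the chunk
SELECT compress_chunk('<CHUNK_NAME>');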
