This section describes how to troubleshoot common problems with hypertables.
ERROR: temporary file size exceeds temp_file_limit
You might get this error when you try to compress a chunk, especially if the chunk is very large. Compression operations write files to a new compressed chunk table, which uses temporary files on disk. The maximum total size of temporary files available to a session is capped by the temp_file_limit parameter. You can work around this problem by adjusting the temp_file_limit and maintenance_work_mem parameters.
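For example, you can raise the limits for the current session before compressing. This is a minimal sketch; the values are illustrative and should be tuned for your system, and changing temp_file_limit may require elevated privileges:
-- Check the current settings
SHOW temp_file_limit;
SHOW maintenance_work_mem;
-- Raise the limits for this session only (illustrative values)
SET temp_file_limit TO '10GB';
SET maintenance_work_mem TO '2GB';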
ERROR: cannot add column with constraints or defaults to a hypertable that has compression enabled
You might get this error if you attempt to add a column with constraints or defaults to a hypertable that has compression enabled. To add the column, decompress the data in the hypertable, add the column, and then compress the data again.
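A minimal sketch of that sequence, assuming a hypertable named conditions and a new humidity column (both names are hypothetical):
-- Decompress all compressed chunks; the second argument skips
-- chunks that are already decompressed
SELECT decompress_chunk(c, true) FROM show_chunks('conditions') c;
-- Add the new column with its default
ALTER TABLE conditions ADD COLUMN humidity DOUBLE PRECISION DEFAULT 0;
-- Recompress the chunks
SELECT compress_chunk(c, true) FROM show_chunks('conditions') c;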
ERROR: tuple decompression limit exceeded by operation
When you insert, update, or delete tuples in compressed chunks, it might be necessary to decompress tuples first. This happens either when you update existing tuples or when constraints need to be verified at insert time. If a single command triggers a lot of decompression, you can run out of storage space. For this reason, there is a limit on the number of tuples that can be decompressed by a single command.
The limit can be increased or turned off (set to 0) like so:
-- Set the limit to a million tuples
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 1000000;
-- Disable the limit by setting it to 0
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
ERROR: must be owner of hypertable "HYPERTABLE_NAME"
You might get this error if you attempt to compress or decompress a chunk with a non-privileged user account. To compress or decompress a chunk, your user account must have permissions that allow it to perform CREATE INDEX on the chunk. You can check the roles and attributes of a user with this command at the psql command prompt:
\du <USERNAME>
To resolve this problem, grant your user account the appropriate privileges with this command:
GRANT PRIVILEGES
ON TABLE <TABLE_NAME>
TO <ROLE_TYPE>;
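For example, a hypothetical role named analytics could be granted full privileges on a hypertable named conditions like this:
GRANT ALL PRIVILEGES ON TABLE conditions TO analytics;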
For more information about the GRANT command, see the PostgreSQL documentation.
When you drop a chunk, it requires an exclusive lock. If a chunk is being accessed by another session, you cannot drop the chunk at the same time. If a drop chunk operation cannot get the lock on the chunk, it times out and fails. To resolve this problem, check what is locking the chunk. In some cases, this could be caused by a continuous aggregate or another process accessing the chunk. When the drop chunk operation can get an exclusive lock on the chunk, it completes as expected.
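To see which session holds a lock on the chunk, you can run a query like the following sketch, substituting the schema-qualified chunk name:
-- Show sessions that hold or wait for locks on a specific chunk
SELECT l.pid, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = '<CHUNK_SCHEMA>.<CHUNK_NAME>'::regclass;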
For more information about locks, see the PostgreSQL lock monitoring documentation.
ERROR: cannot create a unique index without the column "<COLUMN_NAME>" (used in partitioning)
You might get a unique index and partitioning column error in 2 situations:
- When creating a primary key or unique index on a hypertable
- When creating a hypertable from a table that already has a unique index or primary key
For more information on how to fix this problem, see the section on creating unique indexes on hypertables.
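In both cases, the unique index or primary key must include all of the hypertable's partitioning columns. A minimal sketch, assuming a hypertable named conditions partitioned on time:
-- The partitioning column "time" must be part of any unique index
CREATE UNIQUE INDEX conditions_device_time_idx
ON conditions (device_id, time);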
ERROR: invalid attribute number -6 for _hyper_2_839_chunk
CONTEXT: SQL function "hypertable_local_size" statement 1
PL/pgSQL function hypertable_detailed_size(regclass) line 26 at RETURN QUERY
SQL function "hypertable_size" statement 1
SQL state: XX000
You might see this error if your hypertable indexes have become very large. To resolve the problem, reindex the affected chunks with a command like this:
REINDEX TABLE _timescaledb_internal._hyper_2_1523284_chunk;
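To reindex every chunk of a hypertable in one pass, a sketch like the following works, assuming a hypertable named conditions:
-- Loop over all chunks of the hypertable and reindex each one
DO $$
DECLARE
  chunk regclass;
BEGIN
  FOR chunk IN SELECT show_chunks('conditions') LOOP
    EXECUTE format('REINDEX TABLE %s', chunk);
  END LOOP;
END;
$$;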
For more information, see the hypertable documentation.