This section contains some ideas for troubleshooting common problems experienced with hypercore.
ERROR: temporary file size exceeds temp_file_limit
When you try to convert a chunk to the columnstore, especially if the chunk is very large, you might see this error. Compression writes the data to a new compressed chunk table and can use a significant amount of temporary file space while doing so. The maximum amount of temporary file space available to a session is determined by the temp_file_limit parameter. You can work around this problem by adjusting the temp_file_limit and maintenance_work_mem parameters.
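A minimal sketch of that workaround, assuming you have permission to change these settings (temp_file_limit typically requires superuser privileges or an explicit SET grant) and that the values suit your available disk space and memory:

-- Example values only: tune them to your workload.
-- Allow up to 20 GB of temporary file space per session (-1 means no limit).
SET temp_file_limit = '20GB';
-- Give maintenance operations such as compression more working memory.
SET maintenance_work_mem = '1GB';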
ERROR: tuple decompression limit exceeded by operation
When inserting, updating, or deleting tuples from chunks in the columnstore, it might be necessary to convert tuples to the rowstore. This happens either when you are updating existing tuples or have constraints that need to be verified during insert time. If you happen to trigger a lot of rowstore conversion with a single command, you may end up running out of storage space. For this reason, a limit has been put in place on the number of tuples you can decompress into the rowstore for a single command.
The limit can be increased or turned off (set to 0) like so:
-- set limit to a million tuples
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 1000000;
-- disable limit by setting to 0
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
compress_chunk_time_interval configured and primary dimension not first column in compress_orderby. Consider setting "<column name>" as first compress_orderby column
When you configure compress_chunk_time_interval but do not set the primary dimension as the first column in compress_orderby, TimescaleDB decompresses chunks before merging them, which makes merging less efficient. To improve efficiency, set the primary dimension of the chunk as the first column in compress_orderby.
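For example, a minimal sketch assuming a hypothetical hypertable conditions whose primary time dimension is the time column, segmented by a device_id column:

-- Put the primary time dimension first in compress_orderby so chunks can be
-- merged without being decompressed first. Table and column names are examples.
ALTER TABLE conditions SET (
    timescaledb.compress,
    timescaledb.compress_orderby = 'time DESC',
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_chunk_time_interval = '24 hours'
);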
Low compression rates are often caused by a high-cardinality segment key: the column you selected for grouping rows during compression has too many unique values, so few rows share the same value and each compressed batch stays small. To achieve better compression, choose a segment key with lower cardinality.
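One way to compare candidate segment keys is to measure their cardinality and pick a column with fewer distinct values. The table and column names below are hypothetical:

-- Count distinct values for candidate segmentby columns.
SELECT count(DISTINCT device_id)  AS device_id_cardinality,
       count(DISTINCT session_id) AS session_id_cardinality
FROM conditions;

-- Choose the lower-cardinality column, for example device_id, as the segment key.
ALTER TABLE conditions SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);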
ERROR: must be owner of hypertable "HYPERTABLE_NAME"
You might get this error if you attempt to compress a chunk into the columnstore, or decompress it back into the rowstore, with a non-privileged user account. To compress or decompress a chunk, your user account must have permissions that allow it to perform CREATE INDEX on the chunk. You can check the permissions of the current user with this command at the psql command prompt:
\du+ <USERNAME>
To resolve this problem, grant your user account the appropriate privileges with this command:
GRANT ALL PRIVILEGES
    ON TABLE <TABLE_NAME>
    TO <ROLE_TYPE>;
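For example, assuming a hypothetical hypertable metrics and a role analytics_writer:

GRANT ALL PRIVILEGES
    ON TABLE metrics
    TO analytics_writer;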
For more information about the GRANT command, see the PostgreSQL documentation.
ERROR: invalid attribute number -6 for _hyper_2_839_chunk
CONTEXT: SQL function "hypertable_local_size" statement 1
PL/pgSQL function hypertable_detailed_size(regclass) line 26 at RETURN QUERY
SQL function "hypertable_size" statement 1
SQL state: XX000
You might see this error if your hypertable indexes have become very large. To resolve the problem, reindex your hypertables with this command:
REINDEX TABLE _timescaledb_internal._hyper_2_1523284_chunk;
For more information, see the hypertable documentation.
Your scheduled jobs might stop running for various reasons. On self-hosted TimescaleDB, you can fix this by restarting the background workers, as shown below.
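A minimal example, assuming the internal helper function available in recent TimescaleDB versions:

-- Restart the TimescaleDB background workers for the current database.
SELECT _timescaledb_internal.restart_background_workers();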
On Timescale and Managed Service for TimescaleDB, restart background workers by doing one of the following:
- Run SELECT timescaledb_pre_restore(), followed by SELECT timescaledb_post_restore(), as shown below.
- Power the service off and on again. This might cause a few minutes of downtime while the service restores from backup and replays the write-ahead log.
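For example, run both functions back to back in the same session:

-- Stops background workers and puts the database into restore mode...
SELECT timescaledb_pre_restore();
-- ...then leaves restore mode and restarts the background workers.
SELECT timescaledb_post_restore();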