This section contains some ideas for troubleshooting common problems experienced with continuous aggregates.
Continuous aggregates use a watermark to indicate which time buckets have already been materialized. When you query a continuous aggregate, your query returns materialized data from before the watermark. It returns real-time, non-materialized data from after the watermark.
In certain cases, the watermark might be in the future. If this happens, all buckets, including the most recent bucket, are materialized and below the watermark. No real-time data is returned.
This might happen if you refresh your continuous aggregate over the time window `<START_TIME>, NULL`, which materializes all recent data. It might also happen if you create a continuous aggregate using the `WITH DATA` option. This implicitly refreshes your continuous aggregate with a window of `<START_TIME>, NULL`.
To fix this, create a new continuous aggregate using the `WITH NO DATA` option. Then use a policy to refresh this continuous aggregate over an explicit time window.
Create a continuous aggregate using the `WITH NO DATA` option:

```sql
CREATE MATERIALIZED VIEW <continuous_aggregate_name>
WITH (timescaledb.continuous)
AS SELECT time_bucket('<interval>', <time_column>) AS bucket,
    <other_columns_to_select>,
    ...
FROM <hypertable>
GROUP BY bucket, <optional_other_columns>
WITH NO DATA;
```
Refresh the continuous aggregate using a policy with an explicit `end_offset`. For example:

```sql
SELECT add_continuous_aggregate_policy('<continuous_aggregate_name>',
    start_offset => INTERVAL '30 day',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');
```
Check your new continuous aggregate's watermark to make sure it is in the past, not the future.
Get the ID for the materialization hypertable that contains the actual continuous aggregate data:

```sql
SELECT id FROM _timescaledb_catalog.hypertable
WHERE table_name = (
    SELECT materialization_hypertable_name
    FROM timescaledb_information.continuous_aggregates
    WHERE view_name = '<continuous_aggregate_name>'
);
```
Use the returned ID to query for the watermark's timestamp:

```sql
SELECT COALESCE(
    _timescaledb_internal.to_timestamp(_timescaledb_internal.cagg_watermark(<ID>)),
    '-infinity'::timestamp with time zone
);
```
If you choose to delete your old continuous aggregate after creating a new one, beware of historical data loss. If your old continuous aggregate contained data that you dropped from your original hypertable, for example through a data retention policy, the dropped data is not included in your new continuous aggregate.
```
ERROR: cannot create continuous aggregate with incompatible bucket width
DETAIL: Time bucket width of "<BUCKET>" [1 year] should be multiple of the time bucket width of "<BUCKET>" [1 day].
```
To create a hierarchical continuous aggregate, you must use compatible time buckets. You can't create a continuous aggregate with a fixed-width time bucket on top of a continuous aggregate with a variable-width time bucket. For more information, see the restrictions section in hierarchical continuous aggregates.
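As a sketch of a compatible pairing, the hypothetical example below builds a daily aggregate on top of an hourly one. Both buckets are fixed-width, and `1 day` is a multiple of `1 hour`, so the widths are compatible. The hypertable `metrics` and its columns are assumptions for illustration:

```sql
-- Hypothetical hourly aggregate on a hypertable named "metrics"
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
    MIN(value) AS min_value,
    MAX(value) AS max_value
FROM metrics
GROUP BY bucket
WITH NO DATA;

-- A daily aggregate on top of the hourly one: '1 day' is a
-- multiple of '1 hour', so the bucket widths are compatible
CREATE MATERIALIZED VIEW metrics_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', bucket) AS day_bucket,
    MIN(min_value) AS min_value,
    MAX(max_value) AS max_value
FROM metrics_hourly
GROUP BY day_bucket
WITH NO DATA;
```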
A retention policy set on a hypertable does not apply to any continuous aggregates made from the hypertable. This allows you to set different retention periods for raw and summarized data. To apply a retention policy to a continuous aggregate, set the policy on the continuous aggregate itself.
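For example, a sketch using the standard `add_retention_policy` function, with placeholder names and intervals:

```sql
-- Keep raw data in the hypertable for 30 days
SELECT add_retention_policy('<hypertable>', INTERVAL '30 days');

-- Keep summarized data in the continuous aggregate for 1 year
SELECT add_retention_policy('<continuous_aggregate_name>', INTERVAL '1 year');
```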
Materialized views are generally used with ordered data. If you insert historical data, or data that is not related to the current time, you need to refresh your policies and reevaluate which past values affect the present materialization.
You can set up an after-insert rule or trigger on your hypertable, or use an upsert, to validate what needs to be refreshed as the data is merged.
Let's say you inserted ordered timeframes named A, B, D, and F, and you already have a continuous aggregate looking at this data. If you now insert E, you need to refresh E and F. However, if you insert C, you need to refresh C, D, E, and F.
- A, B, D, and F are already materialized in a view with all the data.
- To insert C, split the data into two periods: A and B are consistent, and their materialized data is too, so you only need to reuse it.
- Insert C, then refresh D, E, and F by running the refresh policies for the period after C.
This can use a lot of resources to process, especially if you have any important data in the past that also needs to be brought to the present.
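After backfilling out-of-order data, the affected window can be refreshed manually with `refresh_continuous_aggregate`. The view name and dates below are placeholders; note that the end of the window is given explicitly, since refreshing with a `NULL` end can push the watermark into the future, as described earlier:

```sql
-- Refresh only the region touched by the backfilled rows,
-- from the start of the earliest affected bucket (C) onward
CALL refresh_continuous_aggregate(
    '<continuous_aggregate_name>',
    '<start_of_bucket_C>',
    '<end_of_most_recent_affected_bucket>'
);
```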
Consider an example where you have 300 columns on a single hypertable and use only five of them in a continuous aggregate. In this case, it could be hard to refresh, and it would make more sense to isolate these columns in another hypertable. Alternatively, you might create one hypertable per metric and refresh them independently.
You might get a permissions error when migrating a continuous aggregate from the old format to the new format using `cagg_migrate`. The user performing the migration must have the following permissions:
- Select, insert, and update permissions on the tables
- Usage permissions on the sequence
To solve the problem, change to a user capable of granting permissions, and grant the following permissions to the user performing the migration:
```sql
GRANT SELECT, INSERT, UPDATE ON TABLE _timescaledb_catalog.continuous_agg_migrate_plan TO <USER>;
GRANT SELECT, INSERT, UPDATE ON TABLE _timescaledb_catalog.continuous_agg_migrate_plan_step TO <USER>;
GRANT USAGE ON SEQUENCE _timescaledb_catalog.continuous_agg_migrate_plan_step_step_id_seq TO <USER>;
```
```
ERROR: invalid continuous aggregate view
SQL state: 0A000
```
Continuous aggregates don't work on all queries. If you are using a function that continuous aggregates do not support, you see the error above.
TimescaleDB doesn't support window functions on continuous aggregates. In versions earlier than 2.7, it doesn't support any non-parallelizable SQL aggregates.
This table summarizes aggregate function support in continuous aggregates:
|Function, clause, or feature|TimescaleDB 2.6 and earlier|TimescaleDB 2.7, 2.8, and 2.9|TimescaleDB 2.10 and later|
|---|---|---|---|
|Parallelizable aggregate functions|✅|✅|✅|
|Non-parallelizable aggregate functions|❌|✅|✅|
Real-time aggregates automatically add the most recent data when you query your continuous aggregate. In other words, they include data more recent than your last materialized bucket.
If you add new historical data to an already-materialized bucket, it won't be reflected in a real-time aggregate. You should wait for the next scheduled refresh, or manually refresh by calling `refresh_continuous_aggregate`. You can think of real-time aggregates as being eventually consistent for historical data.
The following example shows how this works.
Create and fill the hypertable:
```sql
CREATE TABLE conditions(
    day DATE NOT NULL,
    city text NOT NULL,
    temperature INT NOT NULL
);

SELECT create_hypertable(
    'conditions', 'day',
    chunk_time_interval => INTERVAL '1 day'
);

INSERT INTO conditions (day, city, temperature) VALUES
    ('2021-06-14', 'Moscow', 26),
    ('2021-06-15', 'Moscow', 22),
    ('2021-06-16', 'Moscow', 24),
    ('2021-06-17', 'Moscow', 24),
    ('2021-06-18', 'Moscow', 27),
    ('2021-06-19', 'Moscow', 28),
    ('2021-06-20', 'Moscow', 30),
    ('2021-06-21', 'Moscow', 31),
    ('2021-06-22', 'Moscow', 34),
    ('2021-06-23', 'Moscow', 34),
    ('2021-06-24', 'Moscow', 34),
    ('2021-06-25', 'Moscow', 32),
    ('2021-06-26', 'Moscow', 32),
    ('2021-06-27', 'Moscow', 31);
```
Create a continuous aggregate, but do not materialize any data. Note that real-time aggregation is enabled by default:
```sql
CREATE MATERIALIZED VIEW conditions_summary
WITH (timescaledb.continuous) AS
SELECT city,
    time_bucket('7 days', day) AS bucket,
    MIN(temperature),
    MAX(temperature)
FROM conditions
GROUP BY city, bucket
WITH NO DATA;
```

The select query returns data because real-time aggregation is enabled. The query on the continuous aggregate fetches data directly from the hypertable:

```sql
SELECT * FROM conditions_summary ORDER BY bucket;

 city   |   bucket   | min | max
--------+------------+-----+-----
 Moscow | 2021-06-14 |  22 |  30
 Moscow | 2021-06-21 |  31 |  34
```
Materialize data into the continuous aggregate:
```sql
CALL refresh_continuous_aggregate('conditions_summary', '2021-06-14', '2021-06-21');
```

The select query returns the same data, as expected, but this time the data is fetched from the underlying materialized table:

```sql
SELECT * FROM conditions_summary ORDER BY bucket;

 city   |   bucket   | min | max
--------+------------+-----+-----
 Moscow | 2021-06-14 |  22 |  30
 Moscow | 2021-06-21 |  31 |  34
```
Update the data in the previously materialized bucket:
```sql
UPDATE conditions
SET temperature = 35
WHERE day = '2021-06-14' AND city = 'Moscow';
```
The updated data is not yet visible when you query the continuous aggregate, because these changes have not been materialized. (Similarly, any INSERTs or DELETEs would also not be visible.)
```sql
SELECT * FROM conditions_summary ORDER BY bucket;

 city   |   bucket   | min | max
--------+------------+-----+-----
 Moscow | 2021-06-14 |  22 |  30
 Moscow | 2021-06-21 |  31 |  34
```
Refresh the data again to update the previously materialized region:
```sql
CALL refresh_continuous_aggregate('conditions_summary', '2021-06-14', '2021-06-21');

SELECT * FROM conditions_summary ORDER BY bucket;

 city   |   bucket   | min | max
--------+------------+-----+-----
 Moscow | 2021-06-14 |  22 |  35
 Moscow | 2021-06-21 |  31 |  34
```
```
ERROR: cannot update/delete rows from chunk <CHUNK_NAME> as it is compressed
```
Compressed chunks of a continuous aggregate can't be refreshed. This follows from a general limitation where compressed chunks can't be updated or deleted.
If you receive historical data and must refresh a compressed region, first decompress the chunk. Then manually run `refresh_continuous_aggregate`.
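A sketch of the two steps, using the standard `decompress_chunk` and `refresh_continuous_aggregate` functions; the chunk name, view name, and time range are placeholders:

```sql
-- Decompress the chunk that covers the backfilled region
SELECT decompress_chunk('_timescaledb_internal.<chunk_name>');

-- Then refresh the continuous aggregate over that region
CALL refresh_continuous_aggregate(
    '<continuous_aggregate_name>',
    '<region_start>',
    '<region_end>'
);
```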
When you have a query that uses a last observation carried forward (`locf`) function, the query carries forward NULL values by default. If you want the function to ignore NULL values instead, you can set `treat_null_as_missing := true` as the second parameter in the query. For example:
```sql
dev=# SELECT * FROM (
    SELECT time_bucket_gapfill(4, time, -5, 13),
        locf(avg(v)::int, treat_null_as_missing := true)
    FROM (VALUES (0, 0), (8, NULL)) v(time, v)
    WHERE time BETWEEN 0 AND 10
    GROUP BY 1
) i ORDER BY 1 DESC;

 time_bucket_gapfill | locf
---------------------+------
                  12 |    0
                   8 |    0
                   4 |    0
                   0 |    0
                  -4 |
                  -8 |
(6 rows)
```
Your scheduled jobs might stop running for various reasons. On self-hosted TimescaleDB, you can fix this by restarting background workers:
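One possible command, assuming the internal helper function `_timescaledb_internal.restart_background_workers()` is available in your TimescaleDB version; restarting the PostgreSQL service itself also restarts the background workers:

```sql
-- Assumed internal helper; availability may vary by version
SELECT _timescaledb_internal.restart_background_workers();
```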
On Timescale and Managed Service for TimescaleDB, restart background workers by doing one of the following:
- Run `SELECT timescaledb_pre_restore()`, followed by `SELECT timescaledb_post_restore()`.
- Power the service off and on again. This might cause a downtime of a few minutes while the service restores from backup and replays the write-ahead log.