This section contains some ideas for troubleshooting common problems experienced with continuous aggregates.
Materialized views are generally built over ordered data. If you insert historical data, or any data that falls outside the current time window, you need to refresh your policies and re-evaluate which past values now affect present aggregates.
You can set up an after-insert rule or trigger on your hypertable so that, as out-of-order data is merged, you can determine which ranges need to be refreshed.
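One way to do this is an `AFTER INSERT` trigger that records which buckets out-of-order rows touch, so that a later job can refresh only those ranges. A minimal sketch, assuming a hypertable named `conditions` with a `time` column (both hypothetical names):

```sql
-- Track which hourly buckets have received backfilled data.
CREATE TABLE refresh_log (bucket timestamptz PRIMARY KEY);

CREATE OR REPLACE FUNCTION log_insert_bucket() RETURNS trigger AS $$
BEGIN
    INSERT INTO refresh_log
    VALUES (time_bucket(INTERVAL '1 hour', NEW.time))
    ON CONFLICT DO NOTHING;   -- keep one row per touched bucket
    RETURN NULL;              -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER track_backfill
AFTER INSERT ON conditions
FOR EACH ROW EXECUTE FUNCTION log_insert_bucket();
```

A periodic job can then walk `refresh_log`, refresh the continuous aggregate over the logged ranges, and clear the log.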
Let's say you inserted ordered timeframes named A, B, D, and F, and you already have a continuous aggregate over this data. If you now insert E, you need to refresh E and F. However, if you insert C, you need to refresh C, D, E, and F.
A and B remain consistent, and so does their materialized data; you only need to reuse it. D, E, and F must be re-materialized, so run refresh policies for everything after C.
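Once you know the earliest out-of-order point (C in this example), you can refresh everything from there forward with a manual call. A sketch, assuming a continuous aggregate named `daily_summary` (hypothetical) whose data for C starts at `'2023-01-03'` (also hypothetical):

```sql
-- Refresh from the earliest backfilled point to the present.
-- A NULL window end means "up to the latest bucket".
CALL refresh_continuous_aggregate('daily_summary', '2023-01-03', NULL);
```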
This can consume a lot of resources, especially if large amounts of past data also need to be brought up to the present.
Consider an example where you have 300 columns on a single hypertable and use, for example, five of them in a continuous aggregate. In this case, refreshes could be expensive, and it might make more sense to isolate those columns in a separate hypertable. Alternatively, you might create one hypertable per metric and refresh them independently.
Continuous aggregates don't work on all queries. If you are using a function that continuous aggregates do not support, you see an error like this:
```
ERROR: invalid continuous aggregate view
SQL state: 0A000
```
Continuous aggregates are supported for most aggregate functions that can be parallelized by PostgreSQL, including the standard `AVG`. You can also use more complex expressions on top of the aggregate functions, for example `max(temperature) - min(temperature)`. However, aggregates using `ORDER BY` and `DISTINCT` cannot be used with continuous aggregates, since PostgreSQL cannot parallelize them. TimescaleDB does not currently support the `FILTER` clause or window functions in continuous aggregates.
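For reference, a definition that stays within these limits might look like the following sketch, assuming a hypertable named `conditions` with `device_id` and `temperature` columns (hypothetical names):

```sql
CREATE MATERIALIZED VIEW device_summary
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '1 hour', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp,
       -- an expression on top of aggregate functions is allowed:
       max(temperature) - min(temperature) AS temp_range
FROM conditions
GROUP BY bucket, device_id;
```

Replacing `avg(temperature)` with, say, `avg(DISTINCT temperature)` would raise the error above.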
When you have a query that uses a last observation carried forward (`locf`) function, the query carries forward `NULL` values by default. If you want the function to ignore `NULL` values instead, you can set `treat_null_as_missing => TRUE` as the second parameter in the query. For example:
```sql
dev=# SELECT * FROM (
    SELECT time_bucket_gapfill(4, time, -5, 13),
           locf(avg(v)::int, treat_null_as_missing := true)
    FROM (VALUES (0, 0), (8, NULL)) v(time, v)
    WHERE time BETWEEN 0 AND 10
    GROUP BY 1
) i ORDER BY 1 DESC;

 time_bucket_gapfill | locf
---------------------+------
                  12 |    0
                   8 |    0
                   4 |    0
                   0 |    0
                  -4 |
                  -8 |
(6 rows)
```