Functions that depend on a local timezone setting are not supported inside a continuous aggregate, because the results cannot adjust to a local time when the timezone setting changes from user to user. To manage this, you can use explicit timezones in the view definition. Alternatively, you can create your own custom aggregation scheme for tables that use an integer time column.
The most common method of working with timezones is to declare an explicit timezone in the view query.
At the `psql` prompt, create the view and declare the timezone:

```sql
CREATE MATERIALIZED VIEW device_summary
WITH (timescaledb.continuous)
AS
SELECT
  time_bucket('1 hour', observation_time) AS bucket,
  min(observation_time AT TIME ZONE 'EST') AS min_time,
  device_id,
  avg(metric) AS metric_avg,
  max(metric) - min(metric) AS metric_spread
FROM
  device_readings
GROUP BY bucket, device_id;
```

Alternatively, you can cast to a timestamp after the view using `SELECT`:

```sql
SELECT min_time::timestamp FROM device_summary;
```
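You can also shift timezones when you query the view rather than when you define it. As a sketch, assuming `observation_time` is a `timestamptz` column so that `bucket` is also a `timestamptz`, and using an illustrative zone name:

```sql
-- Render each bucket in a specific timezone at query time
SELECT bucket AT TIME ZONE 'America/New_York' AS bucket_local,
       metric_avg,
       metric_spread
FROM device_summary
ORDER BY bucket;
```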
Dates and times are usually expressed as year-month-day and hours:minutes:seconds. Most Timescale databases use a date/time-type column to store the date and time. However, in some cases, you might need to convert these common date and time formats to a format that uses an integer instead. The most common integer time format is Unix epoch time, which is the number of seconds since the Unix epoch of 1970-01-01, but other integer-based time formats are possible.
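For reference, PostgreSQL can convert between the two representations directly with built-in functions. A quick illustration:

```sql
-- timestamptz to Unix epoch (seconds since 1970-01-01 00:00:00 UTC)
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '2024-01-01 00:00:00+00')::bigint;  -- 1704067200

-- Unix epoch seconds back to timestamptz
SELECT to_timestamp(1704067200);  -- 2024-01-01 00:00:00+00, shown in the session timezone
```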
These examples use a hypertable called `devices` that contains CPU and disk usage information. The devices measure time in minutes since the Unix epoch.
To create a hypertable that uses an integer-based column as time, you need to provide the chunk time interval. In this case, each chunk is 10 minutes.
At the `psql` prompt, create a table and define the integer-based time column:

```sql
CREATE TABLE devices(
  time BIGINT,        -- Time in minutes since epoch
  cpu_usage INTEGER,  -- Total CPU usage
  disk_usage INTEGER, -- Total disk usage
  PRIMARY KEY (time)
);
```

Define the chunk time interval:

```sql
SELECT create_hypertable('devices', by_range('time', 10));
```
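If your TimescaleDB version predates the `by_range` dimension builder (added in TimescaleDB 2.13), the equivalent call uses the older argument style:

```sql
SELECT create_hypertable('devices', 'time', chunk_time_interval => 10);
```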
To define a continuous aggregate on a hypertable that uses integer-based time, you need to have a function to get the current time in the correct format, and set it for the hypertable. You can do this with the `set_integer_now_func` function. It can be defined as a regular PostgreSQL function, but needs to be `STABLE`, take no arguments, and return an integer value of the same type as the time column in the table. When you have set up the time-handling, you can create the continuous aggregate.
At the `psql` prompt, set up a function that returns the current time in the same units as the `time` column (minutes since the Unix epoch), and set it for the hypertable:

```sql
CREATE FUNCTION current_epoch() RETURNS BIGINT
LANGUAGE SQL STABLE AS $$
  -- Divide by 60 so the result is in minutes, matching the units
  -- of the time column
  SELECT (EXTRACT(EPOCH FROM CURRENT_TIMESTAMP) / 60)::bigint;
$$;
SELECT set_integer_now_func('devices', 'current_epoch');
```

Create the continuous aggregate for the `devices` table:

```sql
CREATE MATERIALIZED VIEW devices_summary
WITH (timescaledb.continuous) AS
SELECT time_bucket('5', time) AS bucket,
       avg(cpu_usage) AS avg_cpu,
       avg(disk_usage) AS avg_disk
FROM devices
GROUP BY bucket;
```

Insert some rows into the table:
```sql
CREATE EXTENSION tablefunc;
INSERT INTO devices(time, cpu_usage, disk_usage)
  SELECT time,
         normal_rand(1,70,10) AS cpu_usage,
         normal_rand(1,2,1) * (row_number() over()) AS disk_usage
  FROM generate_series(1,10000) AS time;
```

This command uses the `tablefunc` extension to generate a normal distribution, and uses the `row_number` function to turn it into a cumulative sequence.

Check that the view contains the correct data:
```sql
postgres=# SELECT * FROM devices_summary ORDER BY bucket LIMIT 10;
 bucket |       avg_cpu       |       avg_disk
--------+---------------------+----------------------
      0 | 63.0000000000000000 |   6.0000000000000000
      5 | 69.8000000000000000 |   9.6000000000000000
     10 | 70.8000000000000000 |  24.0000000000000000
     15 | 75.8000000000000000 |  37.6000000000000000
     20 | 71.6000000000000000 |  26.8000000000000000
     25 | 67.6000000000000000 |  56.0000000000000000
     30 | 68.8000000000000000 |  90.2000000000000000
     35 | 71.6000000000000000 |  88.8000000000000000
     40 | 66.4000000000000000 |  81.2000000000000000
     45 | 68.2000000000000000 | 106.0000000000000000
(10 rows)
```
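The materialized data is only updated when the continuous aggregate is refreshed. You can refresh it manually, or add a refresh policy; for integer time columns, the offsets and refresh window are plain integers in the same units as the time column. A minimal sketch, with illustrative window and offset values:

```sql
-- One-off refresh of the window [0, 10000)
CALL refresh_continuous_aggregate('devices_summary', 0, 10000);

-- Or schedule automatic refreshes; the offsets here are illustrative
SELECT add_continuous_aggregate_policy('devices_summary',
  start_offset => 1000,
  end_offset   => 10,
  schedule_interval => INTERVAL '10 minutes');
```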