TimescaleDB provides many SQL functions and views to help you interact with and manage your data. See a full list below or search by keyword to find reference documentation for a specific API.

### Hypertables and chunks

- `add_dimension`: Add a space-partitioning dimension to a hypertable
- `add_reorder_policy`: Add a policy to reorder rows in hypertable chunks
- `attach_tablespace`: Attach a tablespace to a hypertable
- `chunks_detailed_size`: Get detailed information about disk space used by chunks
- `create_hypertable`: Create a hypertable
- `CREATE INDEX` (transaction per chunk): Create a hypertable index using a separate transaction for each chunk
- `detach_tablespace`: Detach a tablespace from a hypertable
- `detach_tablespaces`: Detach all tablespaces from a hypertable
- `hypertable_detailed_size`: Get detailed information about disk space used by a hypertable
- `hypertable_index_size`: Get the disk space used by a hypertable index
- `hypertable_size`: Get the total disk space used by a hypertable
- `move_chunk`: Move a chunk and its indexes to a different tablespace
- `remove_reorder_policy`: Remove a reorder policy from a hypertable
- `reorder_chunk`: Reorder rows in a chunk
- `set_chunk_time_interval`: Change the chunk time interval of a hypertable
- `set_integer_now_func`: Define the relationship between integer time values and actual time
- `show_chunks`: Show the chunks belonging to a hypertable
- `show_tablespaces`: Show the tablespaces attached to a hypertable
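The core workflow these functions support can be sketched as follows, assuming an existing `conditions` table with a `time` column (table and column names are illustrative):

```sql
-- Convert a regular table into a hypertable partitioned by time
SELECT create_hypertable('conditions', 'time');

-- Adjust how much time each chunk covers
SELECT set_chunk_time_interval('conditions', INTERVAL '1 day');

-- List the chunks that now back the table
SELECT show_chunks('conditions');
```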

### Continuous aggregates

- `add_continuous_aggregate_policy`: Add a policy to schedule automatic refresh of a continuous aggregate
- `add_policies`: Add refresh, compression, and data retention policies on a continuous aggregate
- `ALTER MATERIALIZED VIEW` (continuous aggregate): Change an existing continuous aggregate
- `alter_policies`: Alter refresh, compression, or data retention policies on a continuous aggregate
- `cagg_migrate`: Migrate a continuous aggregate from the old format to the new format introduced in TimescaleDB 2.7
- `CREATE MATERIALIZED VIEW` (continuous aggregate): Create a continuous aggregate on a hypertable or another continuous aggregate
- `DROP MATERIALIZED VIEW` (continuous aggregate): Drop a continuous aggregate view
- `refresh_continuous_aggregate`: Manually refresh a continuous aggregate
- `remove_all_policies`: Remove all policies from a continuous aggregate
- `remove_policies`: Remove refresh, compression, or data retention policies from a continuous aggregate
- `remove_continuous_aggregate_policy`: Remove a refresh policy from a continuous aggregate
- `show_policies`: Show all policies that are currently set on a continuous aggregate
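A minimal sketch of creating and maintaining a continuous aggregate, assuming a `conditions` hypertable with `time` and `temperature` columns (names are illustrative):

```sql
-- Define a continuous aggregate over a hypertable
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '1 day', time) AS day,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY day;

-- Keep it refreshed automatically
SELECT add_continuous_aggregate_policy('conditions_daily',
  start_offset      => INTERVAL '3 days',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');
```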

### Compression

- `add_compression_policy`: Add a policy to schedule automatic compression of chunks
- `ALTER TABLE` (compression): Change compression settings on a compressed hypertable
- `chunk_compression_stats`: Get compression-related statistics for chunks
- `compress_chunk`: Manually compress a given chunk
- `decompress_chunk`: Decompress a compressed chunk
- `hypertable_compression_stats`: Get hypertable statistics related to compression
- `recompress_chunk`: Recompress a chunk that had new data inserted after compression
- `remove_compression_policy`: Remove a compression policy from a hypertable
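A typical compression setup, sketched against a hypothetical `conditions` hypertable with a `device_id` column:

```sql
-- Enable compression on the hypertable, segmenting by device
ALTER TABLE conditions SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);

-- Compress chunks older than seven days automatically
SELECT add_compression_policy('conditions', INTERVAL '7 days');
```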

### Data retention

- `add_retention_policy`: Add a policy to drop older chunks
- `remove_retention_policy`: Remove a retention policy from a hypertable
- `drop_chunks`: Delete chunks by time range
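A retention sketch, again assuming an illustrative `conditions` hypertable:

```sql
-- Automatically drop chunks once all their data is older than 90 days
SELECT add_retention_policy('conditions', INTERVAL '90 days');

-- Or drop them on demand
SELECT drop_chunks('conditions', older_than => INTERVAL '90 days');
```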

### Hyperfunctions: financial analysis

- `candlestick`: Transform pre-aggregated candlestick data into the correct form to use with `candlestick_agg` functions
- `candlestick_agg`: Aggregate tick data into an intermediate form for further calculation
- `close`: Get the closing price from a candlestick aggregate
- `close_time`: Get the timestamp corresponding to the closing time from a candlestick aggregate
- `high`: Get the high price from a candlestick aggregate
- `high_time`: Get the timestamp corresponding to the high time from a candlestick aggregate
- `low`: Get the low price from a candlestick aggregate
- `low_time`: Get the timestamp corresponding to the low time from a candlestick aggregate
- `open_time`: Get the timestamp corresponding to the open time from a candlestick aggregate
- `rollup`: Roll up multiple candlestick aggregates
- `open`: Get the opening price from a candlestick aggregate
- `volume`: Get the total volume from a candlestick aggregate
- `vwap`: Get the Volume Weighted Average Price from a candlestick aggregate
- `close`: Get the closing price from an OHLC aggregate
- `close_time`: Get the timestamp corresponding to the closing time from an OHLC aggregate
- `high`: Get the high price from an OHLC aggregate
- `high_time`: Get the timestamp corresponding to the high time from an OHLC aggregate
- `ohlc`: Aggregate financial asset data into an intermediate form for further calculation
- `low_time`: Get the timestamp corresponding to the low time from an OHLC aggregate
- `open`: Get the opening price from an OHLC aggregate
- `rollup`: Roll up multiple OHLC aggregates
- `open_time`: Get the timestamp corresponding to the opening time from an OHLC aggregate
- `low`: Get the low price from an OHLC aggregate

### Hyperfunctions: state tracking

- `dead_ranges`: Get the down intervals from a `heartbeat_agg`
- `downtime`: Get the total time dead during a heartbeat aggregate
- `heartbeat_agg`: Create a liveness aggregate from a set of heartbeats
- `interpolate`: Adjust a heartbeat aggregate with predecessor information
- `interpolated_downtime`: Get the total time dead from a heartbeat aggregate and predecessor
- `interpolated_uptime`: Get the total time live from a heartbeat aggregate and predecessor
- `live_at`: Test if the aggregate has a heartbeat covering a given time
- `live_ranges`: Get the live intervals from a `heartbeat_agg`
- `rollup`: Combine multiple heartbeat aggregates
- `uptime`: Get the total time live during a heartbeat aggregate
- `interpolated_duration_in`: Calculate the total time spent in a given state from a state aggregate, interpolating values at time bucket boundaries
- `duration_in`: Calculate the total time spent in a given state from a state aggregate
- `into_values`: Expand a state aggregate into a set of rows displaying the duration of each state
- `rollup`: Combine multiple state aggregates
- `state_agg`: Aggregate state data into a state aggregate for further analysis
- `duration_in`: Calculate the total time spent in a given state from a timeline aggregate
- `interpolated_duration_in`: Calculate the total time spent in a given state from a timeline aggregate, interpolating values at time bucket boundaries
- `interpolated_state_periods`: Get the time periods corresponding to a given state from a timeline aggregate, interpolating values at time bucket boundaries
- `interpolated_state_timeline`: Get a timeline of all states from a timeline aggregate, interpolating values at time bucket boundaries
- `into_values`: Expand the timeline aggregate into a set of rows, displaying the duration of each state
- `rollup`: Combine multiple timeline aggregates
- `state_periods`: Get the time periods corresponding to a given state from a timeline aggregate
- `state_timeline`: Get a timeline of all states from a timeline aggregate
- `timeline_agg`: Aggregate state data into a timeline aggregate for further analysis

### Hyperfunctions: approximate count distinct

- `approx_count_distinct`: Aggregate data into a hyperloglog for approximate counting without specifying the number of buckets
- `distinct_count`: Estimate the number of distinct values from a hyperloglog
- `hyperloglog`: Aggregate data into a hyperloglog for approximate counting
- `rollup`: Roll up multiple hyperloglogs
- `stderror`: Estimate the relative standard error of a hyperloglog

### Hyperfunctions: minimum and maximum

- `into_array`: Return an array of the highest values from a MaxN aggregate
- `into_values`: Return the highest values from a MaxN aggregate
- `max_n`: Find the largest values in a set of data
- `rollup`: Combine multiple MaxN aggregates
- `into_values`: Return the highest values and associated data from a MaxNBy aggregate
- `max_n_by`: Track the largest values and associated data in a set of values
- `rollup`: Combine multiple MaxNBy aggregates
- `into_array`: Return an array of the lowest values from a MinN aggregate
- `into_values`: Return the lowest values from a MinN aggregate
- `min_n`: Find the smallest values in a set of data
- `rollup`: Combine multiple MinN aggregates
- `into_values`: Return the lowest values and associated data from a MinNBy aggregate
- `min_n_by`: Track the smallest values and associated data in a set of values
- `rollup`: Combine multiple MinNBy aggregates

### Hyperfunctions: statistical and regression analysis

- `average`: Calculate the average from a one-dimensional statistical aggregate
- `kurtosis`: Calculate the kurtosis from a one-dimensional statistical aggregate
- `num_vals`: Calculate the number of values in a one-dimensional statistical aggregate
- `rolling`: Combine multiple one-dimensional statistical aggregates to calculate rolling window aggregates
- `skewness`: Calculate the skewness from a one-dimensional statistical aggregate
- `rollup`: Combine multiple one-dimensional statistical aggregates
- `stats_agg` (one variable): Aggregate data into an intermediate statistical aggregate form for further calculation
- `stddev`: Calculate the standard deviation from a one-dimensional statistical aggregate
- `sum`: Calculate the sum from a one-dimensional statistical aggregate
- `variance`: Calculate the variance from a one-dimensional statistical aggregate
- `average_y`, `average_x`: Calculate the average from a two-dimensional statistical aggregate for the dimension specified
- `corr`: Calculate the correlation coefficient from a two-dimensional statistical aggregate
- `determination_coeff`: Calculate the determination coefficient from a two-dimensional statistical aggregate
- `covariance`: Calculate the covariance from a two-dimensional statistical aggregate
- `intercept`: Calculate the intercept from a two-dimensional statistical aggregate
- `kurtosis_y`, `kurtosis_x`: Calculate the kurtosis from a two-dimensional statistical aggregate for the dimension specified
- `rolling`: Combine multiple two-dimensional statistical aggregates to calculate rolling window aggregates
- `num_vals`: Calculate the number of values in a two-dimensional statistical aggregate
- `rollup`: Combine multiple two-dimensional statistical aggregates
- `skewness_y`, `skewness_x`: Calculate the skewness from a two-dimensional statistical aggregate for the dimension specified
- `slope`: Calculate the slope from a two-dimensional statistical aggregate
- `stats_agg` (two variables): Aggregate data into an intermediate statistical aggregate form for further calculation
- `stddev_y`, `stddev_x`: Calculate the standard deviation from a two-dimensional statistical aggregate for the dimension specified
- `sum_y`, `sum_x`: Calculate the sum from a two-dimensional statistical aggregate for the dimension specified
- `variance_y`, `variance_x`: Calculate the variance from a two-dimensional statistical aggregate for the dimension specified
- `x_intercept`: Calculate the x-intercept from a two-dimensional statistical aggregate

### Hyperfunctions: percentile approximation

- `approx_percentile`: Estimate the value at a given percentile from a `tdigest`
- `approx_percentile_rank`: Estimate the percentile of a given value from a `tdigest`
- `mean`: Calculate the exact mean from values in a `tdigest`
- `num_vals`: Get the number of values contained in a `tdigest`
- `rollup`: Roll up multiple `tdigest`s
- `tdigest`: Aggregate data in a `tdigest` for further calculation of percentile estimates
- `approx_percentile`: Estimate the value at a given percentile from a `uddsketch`
- `approx_percentile_array`: Estimate the values for an array of given percentiles from a `uddsketch`
- `approx_percentile_rank`: Estimate the percentile of a given value from a `uddsketch`
- `error`: Get the maximum relative error for a `uddsketch`
- `mean`: Calculate the exact mean from values in a `uddsketch`
- `num_vals`: Get the number of values contained in a `uddsketch`
- `percentile_agg`: Aggregate data in a `uddsketch`, using some reasonable default values, for further calculation of percentile estimates
- `rollup`: Roll up multiple `uddsketch`es
- `uddsketch`: Aggregate data in a `uddsketch` for further calculation of percentile estimates

### Hyperfunctions: time weighted calculations

- `average`: Calculate the time-weighted average of values in a `TimeWeightSummary`
- `first_time`: Get the first timestamp from a `TimeWeightSummary` aggregate
- `first_val`: Get the first value from a `TimeWeightSummary` aggregate
- `integral`: Calculate the integral from a `TimeWeightSummary`
- `interpolated_average`: Calculate the time-weighted average over an interval, while interpolating the interval bounds
- `interpolated_integral`: Calculate the integral over an interval, while interpolating the interval bounds
- `last_time`: Get the last timestamp from a `TimeWeightSummary` aggregate
- `last_val`: Get the last value from a `TimeWeightSummary` aggregate
- `time_weight`: Aggregate data into an intermediate time-weighted aggregate form for further calculation
- `rollup`: Combine multiple `TimeWeightSummaries`

### Actions and automation

- `add_job`: Add a job to run a user-defined action automatically
- `alter_job`: Alter a job that is scheduled to run automatically
- `delete_job`: Delete a job from the automatic scheduler
- `run_job`: Manually run a job
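A sketch of registering a user-defined action with the scheduler; the procedure name and body here are purely illustrative:

```sql
-- A user-defined action is a procedure taking (job_id, config)
CREATE OR REPLACE PROCEDURE log_heartbeat(job_id INT, config JSONB)
LANGUAGE plpgsql AS $$
BEGIN
  RAISE NOTICE 'job % running with config %', job_id, config;
END
$$;

-- Schedule it to run every hour
SELECT add_job('log_heartbeat', INTERVAL '1 hour');
```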

### Administration

- `dump_meta_data.sql`: Output metadata for support requests and bug reports
- `get_telemetry_report`: Get the telemetry string that is sent to Timescale servers
- `timescaledb_post_restore`: Resume normal operations after restoring a database
- `timescaledb_pre_restore`: Prepare a database for data restoration

### Distributed hypertables

- `add_data_node`: Add a new data node to a multi-node cluster
- `alter_data_node`: Change the configuration of a data node
- `attach_data_node`: Attach a data node to a distributed hypertable
- `cleanup_copy_chunk_operation`: Clean up after a failed chunk move or chunk copy operation
- `copy_chunk`: Copy a chunk between data nodes in a distributed hypertable
- `create_distributed_hypertable`: Create a distributed hypertable in a multi-node cluster
- `create_distributed_restore_point`: Create a consistent restore point for all nodes in a multi-node cluster
- `delete_data_node`: Remove a data node from a database and detach it from all hypertables
- `detach_data_node`: Detach a data node from one or all hypertables
- `distributed_exec`: Execute a procedure across all the data nodes of a multi-node cluster
- `move_chunk`: Move a chunk to a different data node in a multi-node cluster
- `set_replication_factor`: Set the replication factor for a distributed hypertable
- `set_number_partitions`: Set the number of space partitions for a hypertable

### Hyperfunctions: general

- `approximate_row_count`: Estimate the number of rows in a table
- `days_in_month`: Calculate the number of days in a month, given a timestamptz
- `first`: Get the first value in one column when rows are ordered by another column
- `histogram`: Partition the dataset into buckets and get the number of counts in each bucket
- `last`: Get the last value in one column when rows are ordered by another column
- `month_normalize`: Normalize a monthly metric based on the number of days in the month
- `time_bucket`: Bucket rows by time interval to calculate aggregates
- `timescaledb_experimental.time_bucket_ng`: Bucket rows by time interval with support for time zones, months, and years

### Informational views

- `timescaledb_information.chunks`: Get metadata about hypertable chunks
- `timescaledb_information.compression_settings`: Get information about compression settings for hypertables
- `timescaledb_information.continuous_aggregates`: Get metadata and settings information for continuous aggregates
- `timescaledb_information.data_nodes`: Get information on data nodes in a multi-node cluster
- `timescaledb_information.dimensions`: Get information on the dimensions of hypertables
- `timescaledb_information.hypertables`: Get metadata about hypertables
- `timescaledb_information.job_stats`: Get information and statistics about automatically run jobs
- `timescaledb_information.jobs`: Get information about all jobs registered with the automatic scheduler
- `timescaledb_experimental.policies`: Get information about all policies set on continuous aggregates

### Other

- `timescaledb_information.job_errors`: Get information about background job errors

### Hyperfunctions: metric aggregation

- `corr`: Calculate the correlation coefficient from values in a `CounterSummary`
- `counter_agg`: Aggregate counter data into a `CounterSummary` for further analysis
- `counter_zero_time`: Predict the time when a counter was at zero
- `delta`: Calculate the change in a counter from values in a `CounterSummary`
- `extrapolated_delta`: Calculate the extrapolated change in a counter from values in a `CounterSummary`
- `extrapolated_rate`: Calculate the extrapolated rate of change from values in a `CounterSummary`
- `first_time`, `last_time`: Get the first and last timestamps seen by `CounterSummary` aggregates
- `first_val`, `last_val`: Get the first and last values seen by `CounterSummary` aggregates
- `gauge_agg`: Aggregate gauge data into a `GaugeSummary` for further analysis
- `idelta_left`, `idelta_right`: Calculate the instantaneous change from values in a `CounterSummary`
- `intercept`: Calculate the intercept from values in a `CounterSummary`
- `interpolated_delta`: Calculate the change in a counter, interpolated over some time period
- `interpolated_rate`: Calculate the rate of change in a counter, interpolated over some time period
- `irate_left`, `irate_right`: Calculate the instantaneous rate of change from values in a `CounterSummary`
- `num_changes`: Calculate the number of times a value changed within the time period of a `CounterSummary`
- `num_elements`: Calculate the number of points with distinct times from values in a `CounterSummary`
- `num_resets`: Calculate the total number of times a counter is reset
- `rate`: Calculate the rate of change from values in a `CounterSummary`
- `rollup`: Roll up multiple `CounterSummary` aggregates
- `slope`: Calculate the slope from values in a `CounterSummary`
- `time_delta`: Calculate the difference between the start and end times from data in a `CounterSummary`
- `with_bounds`: Add bounds to a `CounterSummary`

### Hyperfunctions: downsampling

- `asap_smooth`: Downsample a time series using the ASAP smoothing algorithm
- `gp_lttb`: Downsample a time series using a variant of the LTTB algorithm that preserves gaps in the original data
- `lttb`: Downsample a time series using the Largest Triangle Three Buckets method

### Hyperfunctions: frequency analysis

- `approx_count`: Estimate an item's frequency from a `count_min_sketch`
- `count_min_sketch`: Aggregate data in a `count_min_sketch` for calculation of frequency estimates
- `freq_agg`: Aggregate frequency data into a frequency aggregate for further analysis
- `into_values`: Calculate all frequency estimates from a frequency aggregate or top N aggregate
- `min_frequency`, `max_frequency`: Calculate the minimum or maximum estimated frequency of a value from a frequency aggregate
- `topn`: Calculate the top N most common values from data in a frequency or top N aggregate
- `topn_agg`: Aggregate data in a top N aggregate for further calculation of the most frequent values

### Hyperfunctions: gapfilling and interpolation

- `interpolate`: Linearly interpolate missing values when gapfilling
- `locf`: Carry the last-seen value forward when gapfilling
- `time_bucket_gapfill`: Bucket rows by time interval while filling gaps in data
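These three functions are typically used together; a sketch against an illustrative `conditions` table (note that `time_bucket_gapfill` requires a bounded time range in the `WHERE` clause):

```sql
SELECT
  time_bucket_gapfill(INTERVAL '1 hour', time) AS bucket,
  avg(temperature)              AS raw_avg,   -- NULL in gap buckets
  locf(avg(temperature))        AS last_seen, -- gaps filled by carry-forward
  interpolate(avg(temperature)) AS linear     -- gaps filled by interpolation
FROM conditions
WHERE time >= now() - INTERVAL '1 day' AND time < now()
GROUP BY bucket
ORDER BY bucket;
```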

### Hyperfunctions: saturating math

- `saturating_add`: Add two numbers, saturating at the numeric bounds instead of overflowing
- `saturating_add_pos`: Add two numbers, saturating at 0 for the minimum bound
- `saturating_mul`: Multiply two numbers, saturating at the numeric bounds instead of overflowing
- `saturating_sub`: Subtract one number from another, saturating at the numeric bounds instead of overflowing
- `saturating_sub_pos`: Subtract one number from another, saturating at 0 for the minimum bound
