In modern applications, data grows exponentially. As data gets older, it often becomes less useful in day-to-day operations. However, you still need it for analysis. Timescale elegantly solves this problem with automated data retention policies.

Data retention policies delete old raw data for you on a schedule that you define. By combining retention policies with continuous aggregates, you can downsample your data and keep useful summaries of it instead. This lets you analyze historical trends while also saving on storage.
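As a minimal sketch of this pattern, the following assumes a hypothetical hypertable named `conditions` with `time`, `device_id`, and `temperature` columns. A continuous aggregate keeps daily summaries, while a retention policy removes raw data after 30 days:

```sql
-- Keep a downsampled daily summary of the raw data.
-- (Table and column names here are illustrative assumptions.)
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', time) AS day,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY day, device_id;

-- Automatically drop raw chunks older than 30 days.
SELECT add_retention_policy('conditions', INTERVAL '30 days');
```

The continuous aggregate retains its own materialized data independently, so the daily summaries survive even after the underlying raw chunks are dropped.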

Timescale Cloud charges are based on the amount of storage you use. You don't pay for fixed storage size, and you don't need to worry about scaling disk size as your data grows - we handle it all for you. To reduce your data costs further, combine Hypercore, a data retention policy, and tiered storage.

Timescale data retention works on chunks, not on rows. Deleting data row by row, for example with the PostgreSQL `DELETE` command, can be slow. Dropping data by the chunk is faster, because it deletes an entire file from disk with no need for garbage collection or defragmentation.

Whether you use a policy or manually drop chunks, Timescale drops data by the chunk. It only drops chunks where all the data is within the specified time range.

For example, consider a hypertable with three chunks containing data:

  1. More than 36 hours old
  2. Between 12 and 36 hours old
  3. From the last 12 hours

If you manually drop chunks older than 24 hours, only the oldest chunk is deleted. The middle chunk is retained, because it contains some data newer than 24 hours; no individual rows are deleted from that chunk.
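The scenario above can be run with Timescale's `drop_chunks` function; the hypertable name `conditions` is an illustrative assumption. `show_chunks` accepts the same arguments, so you can preview which chunks qualify before deleting anything:

```sql
-- Preview the chunks that fall entirely outside the last 24 hours.
SELECT show_chunks('conditions', older_than => INTERVAL '24 hours');

-- Drop them. Only chunks whose data is ALL older than 24 hours are removed.
SELECT drop_chunks('conditions', older_than => INTERVAL '24 hours');
```

Because the middle chunk spans the 24-hour boundary, neither call includes it.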

