This tutorial uses a dataset that contains second-by-second trade data for the most-traded crypto-assets. You optimize this time-series data in a hypertable called crypto_ticks. You also create a separate table of asset symbols in a regular PostgreSQL table named crypto_assets.

The dataset is updated on a nightly basis and contains data from the last four weeks, typically around 8 million rows. Trades are recorded in real time from 180+ cryptocurrency exchanges.
To follow the steps on this page:

- Create a target Timescale Cloud service with time-series and analytics enabled. You need your connection details.

This procedure also works for self-hosted TimescaleDB.
Time-series data represents the way a system, process, or behavior changes over time. Hypertables enable TimescaleDB to work efficiently with time-series data. Hypertables are PostgreSQL tables that automatically partition your time-series data by time. Each hypertable is made up of child tables called chunks. Each chunk is assigned a range of time, and only contains data from that range. When you run a query, TimescaleDB identifies the correct chunk and runs the query on it, instead of going through the entire table.
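To see this partitioning, you can list a hypertable's chunks and the time range each one covers. A minimal sketch, assuming the crypto_ticks hypertable created later in this tutorial already holds data:

```sql
-- List the chunks backing the crypto_ticks hypertable,
-- with the slice of time each chunk is responsible for.
SELECT chunk_name, range_start, range_end
FROM timescaledb_information.chunks
WHERE hypertable_name = 'crypto_ticks'
ORDER BY range_start;
```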
Hypercore is the Timescale hybrid row-columnar storage engine used by hypertables. Traditional databases force a trade-off between fast inserts (row-based storage) and efficient analytics (columnar storage). Hypercore eliminates this trade-off, allowing real-time analytics without sacrificing transactional capabilities.
Hypercore dynamically stores data in the most efficient format for its lifecycle:
- Row-based storage for recent data: the most recent chunk (and possibly more) is always stored in the rowstore, ensuring fast inserts, updates, and low-latency single record queries. Additionally, row-based storage is used as a write-through for inserts and updates to columnar storage.
- Columnar storage for analytical performance: chunks are automatically compressed into the columnstore, optimizing storage efficiency and accelerating analytical queries.
Unlike traditional columnar databases, hypercore allows data to be inserted or modified at any stage, making it a flexible solution for both high-ingest transactional workloads and real-time analytics—within a single database.
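One way to automate this lifecycle is a policy that moves chunks into the columnstore once they reach a certain age. A minimal sketch, assuming the crypto_ticks hypertable from this tutorial with its columnstore settings already configured; recent TimescaleDB releases also expose this job under columnstore-named calls such as add_columnstore_policy:

```sql
-- Move chunks whose data is older than one day into the
-- columnstore; newer chunks stay in the rowstore for fast writes.
SELECT add_compression_policy('crypto_ticks', INTERVAL '1 day');
```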
Because TimescaleDB is 100% PostgreSQL, you can use all the standard PostgreSQL tables, indexes, stored procedures, and other objects alongside your hypertables. This makes creating and working with hypertables similar to standard PostgreSQL.
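For example, a regular PostgreSQL index behaves on a hypertable exactly as it does on any other table. A minimal sketch, assuming the crypto_ticks hypertable created below:

```sql
-- A standard composite index; TimescaleDB creates it on each
-- chunk of the hypertable automatically.
CREATE INDEX ON crypto_ticks (symbol, "time" DESC);
```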
Connect to your Timescale Cloud service
In Timescale Console, open an SQL editor. You can also connect to your service using psql.
Create a hypertable to store the real-time cryptocurrency data
Create a hypertable for your time-series data using CREATE TABLE. For efficient queries on data in the columnstore, remember to `segmentby` the column you will use most often to filter your data:

```sql
CREATE TABLE crypto_ticks (
    "time" TIMESTAMPTZ,
    symbol TEXT,
    price DOUBLE PRECISION,
    day_volume NUMERIC
) WITH (
    tsdb.hypertable,
    tsdb.partition_column = 'time',
    tsdb.segmentby = 'symbol',
    tsdb.orderby = 'time DESC'
);
```

If you are self-hosting TimescaleDB v2.19.3 and below, create a PostgreSQL relational table, then convert it using create_hypertable. You then enable hypercore with a call to ALTER TABLE.
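A minimal sketch of that older path, assuming the same schema; the compress_segmentby and compress_orderby settings are the pre-hypercore equivalents of segmentby and orderby above:

```sql
-- Self-hosted TimescaleDB v2.19.3 and below: create a plain table first.
CREATE TABLE crypto_ticks (
    "time" TIMESTAMPTZ,
    symbol TEXT,
    price DOUBLE PRECISION,
    day_volume NUMERIC
);

-- Convert it into a hypertable partitioned on "time".
SELECT create_hypertable('crypto_ticks', by_range('time'));

-- Enable hypercore (columnstore compression) on the hypertable.
ALTER TABLE crypto_ticks SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'symbol',
    timescaledb.compress_orderby = 'time DESC'
);
```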
When you have relational data that enhances your time-series data, store that data in standard PostgreSQL relational tables.
Add a table to store the asset symbol and name in a relational table:

```sql
CREATE TABLE crypto_assets (
    symbol TEXT UNIQUE,
    "name" TEXT
);
```
You now have two tables within your Timescale Cloud service: a hypertable named crypto_ticks, and a normal PostgreSQL table named crypto_assets.
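Queries can join the two freely. A hedged sketch of the kind of lookup this layout enables once both tables hold data:

```sql
-- Latest tick per asset, with the human-readable name pulled
-- in from the relational crypto_assets table.
SELECT DISTINCT ON (t.symbol)
    t.symbol, a."name", t.price, t."time"
FROM crypto_ticks t
JOIN crypto_assets a ON a.symbol = t.symbol
ORDER BY t.symbol, t."time" DESC;
```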
This tutorial uses real-time cryptocurrency data, also known as tick data, from
Twelve Data. To ingest data into the tables that you created, you need to
download the dataset, then upload the data to your Timescale Cloud service.
Unzip the dataset to a `<local folder>`. This test dataset contains second-by-second trade data for the most-traded crypto-assets and a regular table of asset symbols and company names.
To import up to 100GB of data directly from your current PostgreSQL-based database, migrate with downtime using native PostgreSQL tooling. To seamlessly import 100GB-10TB+ of data, use the live migration tooling supplied by Timescale. To add data from non-PostgreSQL data sources, see Import and ingest data.
In Terminal, navigate to `<local folder>` and connect to your service:

```bash
psql -d "postgres://<username>:<password>@<host>:<port>/<database-name>"
```

The connection information for a service is available in the file you downloaded when you created it.
At the psql prompt, use the COPY command to transfer data into your Timescale instance. If the .csv files aren't in your current directory, specify the file paths in these commands:

```sql
\COPY crypto_ticks FROM 'tutorial_sample_tick.csv' CSV HEADER;
\COPY crypto_assets FROM 'tutorial_sample_assets.csv' CSV HEADER;
```

Because there are millions of rows of data, the COPY process could take a few minutes depending on your internet connection and local client resources.
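A quick count is enough to sanity-check the load; a minimal sketch (exact numbers depend on the day the dataset was generated):

```sql
-- Confirm the import: expect several million tick rows
-- and a small set of asset rows.
SELECT count(*) FROM crypto_ticks;
SELECT count(*) FROM crypto_assets;
```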
To visualize the results of your queries, enable Grafana to read the data in your service:
Log in to Grafana
In your browser, log in to either:

- Self-hosted Grafana: at http://localhost:3000/. The default credentials are `admin`, `admin`.
- Grafana Cloud: use the URL and credentials you set when you created your account.
Add your service as a data source
1. Open `Connections` > `Data sources`, then click `Add new data source`.
2. Select `PostgreSQL` from the list.
3. Configure the connection using your connection details:
   - `Host URL`, `Database name`, `Username`, and `Password`: `Host URL` is in the format `<host>:<port>`.
   - `TLS/SSL Mode`: select `require`.
   - `PostgreSQL options`: enable `TimescaleDB`.

   Leave the default setting for all other fields.
4. Click `Save & test`. Grafana checks that your details are set correctly.
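With the data source connected, a panel query can aggregate tick data directly. A hedged sketch using TimescaleDB's time_bucket and Grafana's $__timeFilter macro; the BTC/USD symbol is an assumption about the dataset's contents:

```sql
-- Average BTC/USD price per minute over the dashboard's
-- selected time range.
SELECT
    time_bucket('1 minute', "time") AS bucket,
    avg(price) AS avg_price
FROM crypto_ticks
WHERE symbol = 'BTC/USD'
  AND $__timeFilter("time")
GROUP BY bucket
ORDER BY bucket;
```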