Apache Parquet is a free and open-source column-oriented data storage format in the Apache Hadoop ecosystem. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk.
This page shows you how to import data into your Timescale Cloud service from a Parquet file.
Best practice is to use an Ubuntu EC2 instance hosted in the same region as your Timescale Cloud service as a migration machine: that is, the machine you run the commands on to move your data from your source database to your target Timescale Cloud service.
Before you migrate your data:
- Create a target Timescale Cloud service.

  Each Timescale Cloud service has a single database that supports the most popular extensions. Timescale Cloud services do not support tablespaces, and there is no superuser associated with a service. Best practice is to create a Timescale Cloud service with at least 8 CPUs for a smoother experience; a higher-spec instance can significantly reduce the overall migration window.

- To ensure that maintenance does not run during the process, adjust the maintenance window.
- Install DuckDB on the source machine where the Parquet file is located.
- Ensure that the time column in the Parquet file uses the `TIMESTAMP` data type.
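If you are not sure which types your Parquet file uses, you can inspect its schema with DuckDB before you start. A minimal check, assuming `<FILENAME>.parquet` stands in for your file:

```sql
-- Show the column names and types DuckDB infers from the Parquet file.
DESCRIBE SELECT * FROM '<FILENAME>.parquet';
```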
For faster data transfer, best practice is that your target service and the system running the data import are in the same region.
To import data from a Parquet file:
Set up your service connection string

This variable holds the connection information for the target Timescale Cloud service.
In Terminal on the source machine, set the following:
```bash
export TARGET=postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require
```

You find the connection information for your Timescale Cloud service in the configuration file you downloaded when you created the service.
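To confirm the connection string works before you continue, you can run a quick sanity check against the service:

```bash
# Should print the PostgreSQL version of the target service.
psql $TARGET -c "SELECT version();"
```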
Create a hypertable to hold your data
Create a new empty table with a schema that is compatible with the data in your Parquet file.
For example, if your Parquet file contains the columns `ts`, `location`, and `temperature` with types `TIMESTAMP`, `STRING`, and `DOUBLE`:

```bash
psql $TARGET -c "CREATE TABLE <TABLE_NAME> ( \
  ts          TIMESTAMPTZ NOT NULL, \
  location    TEXT NOT NULL, \
  temperature DOUBLE PRECISION NULL \
);"
```

If you prefer using a secure UI to the command line, use Data mode in Timescale Console.
Convert the empty table to a hypertable:
In the following command, replace `<TABLE_NAME>` with the name of the table you just created, and `<COLUMN_NAME>` with the partitioning column in `<TABLE_NAME>`:

```bash
psql $TARGET -c "SELECT create_hypertable('<TABLE_NAME>', by_range('<COLUMN_NAME>'))"
```
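`by_range` also accepts an optional partition interval if the default chunking does not suit your data. A sketch, assuming one-day chunks fit your workload:

```bash
# Create the hypertable with an explicit one-day chunk interval.
psql $TARGET -c "SELECT create_hypertable('<TABLE_NAME>', by_range('<COLUMN_NAME>', INTERVAL '1 day'))"
```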
Set up a DuckDB connection to your service
In Terminal on the source machine with your Parquet files, start a new DuckDB interactive session:
```bash
duckdb
```

Connect to your service in your DuckDB session:

```sql
ATTACH '<Paste the value of $TARGET here>' AS db (type postgres);
```

`$TARGET` is the connection string you used to connect to your service using psql.
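You can verify that the attachment succeeded by listing the attached databases; `db` should appear in the output:

```sql
-- List attached databases; the target service appears as `db`.
SHOW DATABASES;
```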
Import data from Parquet to your service
In DuckDB, upload the table data to your service:
```sql
COPY db.<TABLE_NAME> FROM '<FILENAME>.parquet' (FORMAT parquet);
```

Where:

- `<TABLE_NAME>`: the hypertable you created to import data to
- `<FILENAME>`: the Parquet file to import data from
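If your data is split across multiple Parquet files, one approach is to read them with a glob pattern instead of copying each file individually. A sketch, assuming the files share the same schema and live in a hypothetical `<DIRECTORY>`:

```sql
-- Load every Parquet file in the directory in a single statement.
INSERT INTO db.<TABLE_NAME>
SELECT * FROM read_parquet('<DIRECTORY>/*.parquet');
```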
Exit the DuckDB session:
```sql
EXIT;
```
Verify the data was imported correctly into your service
In your `psql` session, or using Data mode in Timescale Console, view the data in `<TABLE_NAME>`:

```sql
SELECT * FROM <TABLE_NAME>;
```
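For large tables, selecting every row can be slow; comparing row counts between the source file and the hypertable is a lighter sanity check. A sketch, run from the source machine:

```bash
# Count the rows in the source Parquet file...
duckdb -c "SELECT count(*) FROM '<FILENAME>.parquet';"

# ...and compare with the row count in the hypertable.
psql $TARGET -c "SELECT count(*) FROM <TABLE_NAME>;"
```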
And that is it! You have imported your data from a Parquet file to your Timescale Cloud service.