There are several different ways of ingesting your data into Managed Service for TimescaleDB. This section contains instructions to:

- Import data from a .csv file
- Insert data using a client driver
- Ingest data from a message queue
Use psql to connect to your service. You can retrieve the service URL, port, and login credentials from the service overview in the Timescale Cloud dashboard:
psql -h <HOSTNAME> -p <PORT> -U <USERNAME> -W -d <DATABASE_NAME>
CREATE DATABASE new_db;
\c new_db
CREATE TABLE conditions (
    time TIMESTAMPTZ NOT NULL,
    location TEXT NOT NULL,
    temperature DOUBLE PRECISION NULL
);
CREATE EXTENSION timescaledb;
\dx
SELECT create_hypertable('conditions', 'time');
When you have successfully set up your new database, you can ingest data using one of these methods.
If you have a dataset stored in a .csv file, you can import it into an empty TimescaleDB hypertable. You need to begin by creating the new table before you import the data.
Insert the data using the timescaledb-parallel-copy tool. You should already have the tool installed, but you can install it manually from our GitHub repository if you need to. In this example, we are inserting the data using four workers:
timescaledb-parallel-copy --connection '<service_url>' --table conditions --file ~/Downloads/example.csv --workers 4 --copy-options "CSV" --skip-header
If you do not want to use the timescaledb-parallel-copy tool, or if you have a very small dataset, you can use the PostgreSQL \copy command instead:
psql '<service_url>/new_db?sslmode=require' -c "\copy conditions FROM <example.csv> WITH (FORMAT CSV, HEADER)"
You can use a client driver, such as JDBC, Python, or Node.js, to insert data directly into your new database.
See the PostgreSQL instructions for using the ODBC driver.
See the Code Quick Starts for using various languages, including Python and Node.js.
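As a minimal sketch of the client-driver approach in Python, the example below uses the psycopg2 driver to insert rows into the conditions table created earlier. The connection string placeholders and the sample readings are illustrative, not values from this page; substitute your own service credentials before running main().

```python
from datetime import datetime, timezone

# Parameterized INSERT matching the conditions hypertable schema
INSERT_SQL = (
    "INSERT INTO conditions (time, location, temperature) "
    "VALUES (%s, %s, %s)"
)

def sample_rows():
    """A few illustrative readings matching the conditions schema."""
    return [
        (datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc), "office", 21.5),
        (datetime(2024, 1, 1, 12, 1, tzinfo=timezone.utc), "garage", 18.2),
    ]

def insert_rows(conn, rows):
    """Insert rows one parameterized statement at a time, then commit."""
    with conn.cursor() as cur:
        cur.executemany(INSERT_SQL, rows)
    conn.commit()

def main():
    # Assumes psycopg2 is installed; placeholders come from your
    # service overview in the dashboard.
    import psycopg2
    conn = psycopg2.connect(
        "postgres://<USERNAME>:<PASSWORD>@<HOSTNAME>:<PORT>/new_db?sslmode=require"
    )
    insert_rows(conn, sample_rows())
    conn.close()
```

Parameterized statements (the %s placeholders) let the driver handle quoting and type conversion, which avoids SQL injection and timestamp formatting issues.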
If you have data stored in a message queue, you can import it into your TimescaleDB database. This section provides instructions on using the Kafka Connect PostgreSQL connector.
You can deploy this connector to a Kafka Connect runtime service. It monitors one or more schemas in a TimescaleDB server and writes all change events to Kafka topics, which can then be independently consumed by one or more clients. Kafka Connect can be distributed to provide fault tolerance, which ensures the connectors are running and continually keeping up with changes in the database.
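For illustration, a Kafka Connect source connector is typically registered with a JSON configuration posted to the Connect REST API. The sketch below assumes the Debezium PostgreSQL connector; the connector name, topic prefix, schema list, and credential placeholders are examples, not values from this page:

```json
{
  "name": "timescaledb-source",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "<HOSTNAME>",
    "database.port": "<PORT>",
    "database.user": "<USERNAME>",
    "database.password": "<PASSWORD>",
    "database.dbname": "new_db",
    "topic.prefix": "tsdb",
    "schema.include.list": "public"
  }
}
```

With a configuration like this, change events from tables in the listed schemas are written to Kafka topics named after the topic prefix, schema, and table.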
You can also use the PostgreSQL connector as a library without Kafka or Kafka Connect. This allows applications and services to connect directly to TimescaleDB and obtain the ordered change events. In this environment, the application must record the progress of the connector so that when it is restarted, the connector can continue where it left off. This approach can be useful for less critical use cases. However, for production use cases, we recommend that you use the connector with Kafka and Kafka Connect.
See these instructions for using the Kafka connector.