Timescale Code Quick Start Guides are designed to help you integrate Timescale into your own programs. They use your favorite programming language to explain how to connect to a Timescale database, create and manage hypertables, and ingest and query data.
This quick start guide walks you through:
- Connecting to TimescaleDB
- Creating a relational table
- Creating a hypertable
- Inserting data
- Executing a query
Before you start, make sure you have:
- Installed TimescaleDB.
- Installed Go.
- Installed the PGX driver for Go.
In this section, you create a connection to TimescaleDB using the PGX driver. PGX is a toolkit designed to help Go developers work directly with PostgreSQL. You can use it to help your Go application interact directly with TimescaleDB.
Locate your TimescaleDB credentials and use them to compose a connection string for PGX.
You'll need:
- password
- username
- host URL
- port number
- database name
Compose your connection string variable as a libpq connection string, using this format:
connStr := "postgres://username:password@host:port/dbname"If you're using a hosted version of TimescaleDB, or if you need an SSL connection, use this format instead:
connStr := "postgres://username:password@host:port/dbname?sslmode=require"OptionalYou can check that you're connected to your database with this hello world program:
```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v5"
)

//connect to database using a single connection
func main() {
	/***********************************************/
	/* Single Connection to TimescaleDB/ PostgreSQL */
	/***********************************************/
	ctx := context.Background()
	connStr := "yourConnectionStringHere"
	conn, err := pgx.Connect(ctx, connStr)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close(ctx)

	//run a simple query to check our connection
	var greeting string
	err = conn.QueryRow(ctx, "select 'Hello, Timescale!'").Scan(&greeting)
	if err != nil {
		fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Println(greeting)
}
```

If you'd like to specify your connection string as an environment variable, you can use this syntax to access it in place of the `connStr` variable: `os.Getenv("DATABASE_CONNECTION_STRING")`.
Alternatively, you can connect to TimescaleDB using a connection pool. Connection pooling is useful to conserve computing resources, and can also result in faster database queries.
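The following is a minimal sketch of a pooled connection, assuming the same placeholder connection string as above. It uses the `pgxpool` package from pgx v5 and runs the same hello world query through the pool:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()
	connStr := "yourConnectionStringHere"

	// pgxpool.New creates a pool of reusable connections
	dbpool, err := pgxpool.New(ctx, connStr)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer dbpool.Close()

	// Run a simple query to check the pooled connection
	var greeting string
	err = dbpool.QueryRow(ctx, "select 'Hello, Timescale!'").Scan(&greeting)
	if err != nil {
		fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Println(greeting)
}
```

Calling `pgxpool.New` gives you a pool of connections that concurrent goroutines can share, whereas `pgx.Connect` opens a single connection.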
In this section, you create a table called `sensors`, which holds the ID, type, and location of your fictional sensors. Additionally, you create a hypertable called `sensor_data`, which holds the measurements of those sensors. The measurements contain the time, sensor_id, temperature reading, and CPU percentage of the sensors.
Compose a string that contains the SQL statement to create a relational table. This example creates a table called `sensors`, with columns for ID, type, and location:

```go
queryCreateTable := `CREATE TABLE sensors (id SERIAL PRIMARY KEY, type VARCHAR(50), location VARCHAR(50));`
```

Execute the `CREATE TABLE` statement with the `Exec()` function on the `dbpool` object, using the arguments of the current context and the statement string you created:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()
	connStr := "yourConnectionStringHere"
	dbpool, err := pgxpool.New(ctx, connStr)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer dbpool.Close()

	/********************************************/
	/* Create relational table                  */
	/********************************************/

	//Create relational table called sensors
	queryCreateTable := `CREATE TABLE sensors (id SERIAL PRIMARY KEY, type VARCHAR(50), location VARCHAR(50));`
	_, err = dbpool.Exec(ctx, queryCreateTable)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to create SENSORS table: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("Successfully created relational table SENSORS")
}
```
When you have created the relational table, you can create a hypertable. Creating tables and indexes, altering tables, inserting data, selecting data, and most other tasks are executed on the hypertable.
Create a variable for the `CREATE TABLE` SQL statement for your hypertable. Notice how the hypertable has the compulsory time column:

```go
queryCreateTable := `CREATE TABLE sensor_data (
	time TIMESTAMPTZ NOT NULL,
	sensor_id INTEGER,
	temperature DOUBLE PRECISION,
	cpu DOUBLE PRECISION,
	FOREIGN KEY (sensor_id) REFERENCES sensors (id)
);`
```

Formulate the `SELECT` statement to convert the table into a hypertable. You must specify the name of the table to convert to a hypertable, and its time column as the second argument. For more information, see the `create_hypertable` docs:

```go
queryCreateHypertable := `SELECT create_hypertable('sensor_data', by_range('time'));`
```

Note: The `by_range` dimension builder is an addition to TimescaleDB 2.13.

Execute the `CREATE TABLE` statement and the `SELECT` statement that converts the table into a hypertable. You can do this by calling the `Exec()` function on the `dbpool` object, using the arguments of the current context and the `queryCreateTable` and `queryCreateHypertable` statement strings:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()
	connStr := "yourConnectionStringHere"
	dbpool, err := pgxpool.New(ctx, connStr)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer dbpool.Close()

	/********************************************/
	/* Create Hypertable                        */
	/********************************************/

	// Create hypertable of time-series data called sensor_data
	queryCreateTable := `CREATE TABLE sensor_data (
		time TIMESTAMPTZ NOT NULL,
		sensor_id INTEGER,
		temperature DOUBLE PRECISION,
		cpu DOUBLE PRECISION,
		FOREIGN KEY (sensor_id) REFERENCES sensors (id)
	);`

	queryCreateHypertable := `SELECT create_hypertable('sensor_data', by_range('time'));`

	//execute statement
	_, err = dbpool.Exec(ctx, queryCreateTable+queryCreateHypertable)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to create the `sensor_data` hypertable: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("Successfully created hypertable `sensor_data`")
}
```
You can insert rows into your database in a couple of different ways. The first example inserts a single row at a time, taking the data from the two slices, `sensorTypes` and `sensorLocations`, and writing it into the relational table named `sensors`. The second example inserts multiple rows of simulated readings into the `sensor_data` hypertable. The third example uses batch inserts to speed up the process.
Open a connection pool to the database, then use a prepared statement to formulate an `INSERT` SQL statement and execute it:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()
	connStr := "yourConnectionStringHere"
	dbpool, err := pgxpool.New(ctx, connStr)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer dbpool.Close()

	/********************************************/
	/* INSERT into relational table             */
	/********************************************/

	//Insert data into relational table

	// Slices of sample data to insert
	// observation i has type sensorTypes[i] and location sensorLocations[i]
	sensorTypes := []string{"a", "a", "b", "b"}
	sensorLocations := []string{"floor", "ceiling", "floor", "ceiling"}

	for i := range sensorTypes {
		//INSERT statement in SQL
		queryInsertMetadata := `INSERT INTO sensors (type, location) VALUES ($1, $2);`

		//Execute INSERT command
		_, err := dbpool.Exec(ctx, queryInsertMetadata, sensorTypes[i], sensorLocations[i])
		if err != nil {
			fmt.Fprintf(os.Stderr, "Unable to insert data into database: %v\n", err)
			os.Exit(1)
		}
		fmt.Printf("Inserted sensor (%s, %s) into database \n", sensorTypes[i], sensorLocations[i])
	}
	fmt.Println("Successfully inserted all sensors into database")
}
```
Instead of inserting a single row of data at a time, you can insert multiple rows of data, as shown in the sketch below.
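This is a minimal sketch of one way to do it, assuming the same connection-string placeholder as before. It generates simulated readings for the `sensor_data` hypertable with a `generate_series` query, scans them into a slice, and then executes one `INSERT` per reading. The query, the `reading` struct, and the variable names are illustrative assumptions rather than fixed names:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()
	connStr := "yourConnectionStringHere"
	dbpool, err := pgxpool.New(ctx, connStr)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer dbpool.Close()

	// Generate simulated readings: one sample every 5 minutes for the last 24 hours.
	// The sensor_id range 1-4 assumes the four sensors inserted in the previous example.
	queryDataGeneration := `
		SELECT generate_series(now() - interval '24 hour', now(), interval '5 minute') AS time,
			floor(random() * 4 + 1)::int AS sensor_id,
			random() * 100 AS temperature,
			random() * 100 AS cpu;`

	rows, err := dbpool.Query(ctx, queryDataGeneration)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to generate sample data: %v\n", err)
		os.Exit(1)
	}

	// Scan the generated readings into a slice
	type reading struct {
		Time        time.Time
		SensorID    int
		Temperature float64
		CPU         float64
	}
	var readings []reading
	for rows.Next() {
		var r reading
		if err := rows.Scan(&r.Time, &r.SensorID, &r.Temperature, &r.CPU); err != nil {
			fmt.Fprintf(os.Stderr, "Unable to scan sample data: %v\n", err)
			os.Exit(1)
		}
		readings = append(readings, r)
	}
	rows.Close()
	if rows.Err() != nil {
		fmt.Fprintf(os.Stderr, "rows Error: %v\n", rows.Err())
		os.Exit(1)
	}

	// Insert the readings one at a time: one INSERT statement per sample
	queryInsertTimeseriesData := `INSERT INTO sensor_data (time, sensor_id, temperature, cpu) VALUES ($1, $2, $3, $4);`
	for _, r := range readings {
		_, err := dbpool.Exec(ctx, queryInsertTimeseriesData, r.Time, r.SensorID, r.Temperature, r.CPU)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Unable to insert data into database: %v\n", err)
			os.Exit(1)
		}
	}
	fmt.Println("Successfully inserted simulated readings into sensor_data")
}
```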
Inserting multiple rows of data using this method executes as many insert
statements as there are samples to be inserted. This can make ingestion of data
slow. To speed up ingestion, you can batch insert data instead.
Here's a sample pattern for how to do so, using the sample data you generated in the previous procedure. It uses the pgx `Batch` object.
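The following sketch is a fragment rather than a complete program; it assumes `ctx`, `dbpool`, the `readings` slice, and the `queryInsertTimeseriesData` statement from the previous sketch are in scope, and it needs the additional import `github.com/jackc/pgx/v5` for the `Batch` type:

```go
// Queue one INSERT per reading instead of executing each one separately.
// Assumes ctx, dbpool, readings, and queryInsertTimeseriesData are in scope.
batch := &pgx.Batch{}
for _, r := range readings {
	batch.Queue(queryInsertTimeseriesData, r.Time, r.SensorID, r.Temperature, r.CPU)
}

// Send every queued statement to the database in a single round trip
br := dbpool.SendBatch(ctx, batch)
for range readings {
	if _, err := br.Exec(); err != nil {
		fmt.Fprintf(os.Stderr, "Unable to execute batched insert: %v\n", err)
		os.Exit(1)
	}
}
if err := br.Close(); err != nil {
	fmt.Fprintf(os.Stderr, "Unable to close batch results: %v\n", err)
	os.Exit(1)
}
fmt.Println("Successfully batch inserted simulated readings into sensor_data")
```

Queueing the statements on a `pgx.Batch` and sending them with `SendBatch` executes them in a single network round trip, which avoids the per-statement latency that makes row-by-row ingestion slow.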
This section covers how to execute queries against your database.
Define the SQL query you'd like to run on the database. This example uses a SQL query that combines time-series and relational data. It returns the average CPU values for every 5 minute interval, for sensors with location `ceiling` and type `a`:

```go
// Formulate query in SQL
// Note the use of prepared statement placeholders $1 and $2
queryTimebucketFiveMin := `
	SELECT time_bucket('5 minutes', time) AS five_min, avg(cpu)
	FROM sensor_data
	JOIN sensors ON sensors.id = sensor_data.sensor_id
	WHERE sensors.location = $1 AND sensors.type = $2
	GROUP BY five_min
	ORDER BY five_min DESC;`
```

Use the `.Query()` function to execute the query string. Make sure you specify the relevant placeholders:

```go
//Execute query on TimescaleDB
rows, err := dbpool.Query(ctx, queryTimebucketFiveMin, "ceiling", "a")
if err != nil {
	fmt.Fprintf(os.Stderr, "Unable to execute query %v\n", err)
	os.Exit(1)
}
defer rows.Close()
fmt.Println("Successfully executed query")
```

Access the rows returned by `.Query()`. Create a struct with fields representing the columns that you expect to be returned, then use the `rows.Next()` function to iterate through the rows returned and fill the `results` slice with structs. This uses the `rows.Scan()` function, passing in pointers to the fields that you want to scan for results. This example prints out the results returned from the query, but you might want to use those results for some other purpose. Once you've scanned through all the rows returned, you can use the `results` slice however you like.

```go
//Do something with the results of query
// Struct for results
type result2 struct {
	Bucket time.Time
	Avg    float64
}

// Print rows returned and fill up results slice for later use
var results []result2
for rows.Next() {
	var r result2
	err = rows.Scan(&r.Bucket, &r.Avg)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to scan %v\n", err)
		os.Exit(1)
	}
	results = append(results, r)
	fmt.Printf("Time bucket: %s | Avg: %f\n", r.Bucket, r.Avg)
}
// Any errors encountered by rows.Next or rows.Scan are returned here
if rows.Err() != nil {
	fmt.Fprintf(os.Stderr, "rows Error: %v\n", rows.Err())
	os.Exit(1)
}
// use results here…
```

Optional: This example program runs a query, and accesses the results of that query:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()
	connStr := "yourConnectionStringHere"
	dbpool, err := pgxpool.New(ctx, connStr)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to connect to database: %v\n", err)
		os.Exit(1)
	}
	defer dbpool.Close()

	/********************************************/
	/* Execute a query                          */
	/********************************************/

	// Formulate query in SQL
	// Note the use of prepared statement placeholders $1 and $2
	queryTimebucketFiveMin := `
		SELECT time_bucket('5 minutes', time) AS five_min, avg(cpu)
		FROM sensor_data
		JOIN sensors ON sensors.id = sensor_data.sensor_id
		WHERE sensors.location = $1 AND sensors.type = $2
		GROUP BY five_min
		ORDER BY five_min DESC;`

	//Execute query on TimescaleDB
	rows, err := dbpool.Query(ctx, queryTimebucketFiveMin, "ceiling", "a")
	if err != nil {
		fmt.Fprintf(os.Stderr, "Unable to execute query %v\n", err)
		os.Exit(1)
	}
	defer rows.Close()
	fmt.Println("Successfully executed query")

	//Do something with the results of query
	// Struct for results
	type result2 struct {
		Bucket time.Time
		Avg    float64
	}

	// Print rows returned and fill up results slice for later use
	var results []result2
	for rows.Next() {
		var r result2
		err = rows.Scan(&r.Bucket, &r.Avg)
		if err != nil {
			fmt.Fprintf(os.Stderr, "Unable to scan %v\n", err)
			os.Exit(1)
		}
		results = append(results, r)
		fmt.Printf("Time bucket: %s | Avg: %f\n", r.Bucket, r.Avg)
	}
	// Any errors encountered by rows.Next or rows.Scan are returned here
	if rows.Err() != nil {
		fmt.Fprintf(os.Stderr, "rows Error: %v\n", rows.Err())
		os.Exit(1)
	}
}
```
Now that you're able to connect, read, and write to a TimescaleDB instance from your Go application, be sure to check out these advanced TimescaleDB tutorials:
- Refer to the pgx documentation for more information about pgx.
- Get up and running with TimescaleDB with the Getting Started tutorial.
- Want fast inserts on CSV data? Check out TimescaleDB parallel copy, a tool for fast inserts, written in Go.