Startup hacks and engineering miracles from your exhausted friends at Faraday

How to read CREATE TABLE SQL with pg_query

Seamus Abshere

pg_query is a really cool Ruby library that uses Postgres's own query parser to give you programmatic access to parsed SQL queries.

One thing you can do with it is read a CREATE TABLE statement to map column names to column types:

sql = <<-SQL  
  CREATE TABLE example (
    my_text text,
    my_int int,
    my_float float
  )
SQL

require 'pg_query'

# NOTE: this walks the JSON parse tree of pg_query 1.x;
# pg_query 2.0+ returns a protobuf-based tree with a different API.
types = PgQuery.parse(sql).tree[0].dig('RawStmt', 'stmt', 'CreateStmt', 'tableElts').map do |c|
  c.fetch('ColumnDef')
end.each_with_object({}) do |c, memo|
  # each type is schema-qualified as pg_catalog.<type>; skip the schema part
  memo[c.fetch('colname')] = c.dig('typeName', 'TypeName', 'names').
    detect { |td| td.dig('String', 'str') != 'pg_catalog' }.
    dig('String', 'str')
end

The output will be:

{
  "my_text"  => "text",
  "my_int"   => "int4",
  "my_float" => "float8"
}
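
Note that Postgres hands back its internal type names: int comes back as int4 and float as float8. If you want the familiar SQL aliases, a small lookup table on top does the trick. A minimal sketch (the mapping below is illustrative, not exhaustive):

# Illustrative map from Postgres internal type names to their SQL aliases
TYPE_ALIASES = {
  'int2'    => 'smallint',
  'int4'    => 'int',
  'int8'    => 'bigint',
  'float4'  => 'real',
  'float8'  => 'float',
  'bool'    => 'boolean',
  'varchar' => 'character varying'
}.freeze

types.transform_values { |t| TYPE_ALIASES.fetch(t, t) }
# => {"my_text"=>"text", "my_int"=>"int", "my_float"=>"float"}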

Thanks to Lukas Fittl for this gem!

How to generate UUID ranges for SQL queries

Seamus Abshere

So you've got a table with a UUID primary key. How do you search for only 1/4, or 1/8, or 1/128 of it?

BETWEEN '00000000-0000-0000-0000-000000000000' AND '3fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '40000000-0000-0000-0000-000000000000' AND '7fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '80000000-0000-0000-0000-000000000000' AND 'bfffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'c0000000-0000-0000-0000-000000000000' AND 'ffffffff-ffff-ffff-ffff-ffffffffffff'  

And how do you generate those ranges? Here it is in Ruby, with n=4:

UUID_MAX = 2**128  # total number of possible UUID values

# Format a 128-bit integer as a standard hyphenated UUID string.
def to_uuid(int)
  memo = int.to_s(16).rjust(32, '0')
  [8, 13, 18, 23].each { |p| memo = memo.insert(p, '-') }
  memo
end

def sql_uuid_ranges(n)
  chunk = UUID_MAX / n
  n.times.map do |i|
    # BETWEEN is inclusive on both ends, so each range stops
    # one value short of the next range's start.
    "BETWEEN '#{to_uuid(chunk * i)}' AND '#{to_uuid(chunk * (i + 1) - 1)}'"
  end
end

puts sql_uuid_ranges(4)

Note the - 1 on each upper bound: BETWEEN is inclusive on both sides, so every range has to stop one value short of where the next range starts. Not that an overlap at a single boundary UUID would cause a collision any time before the heat death of the universe, but still.

Here's the same thing, with n=64:

BETWEEN '00000000-0000-0000-0000-000000000000' AND '03ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '04000000-0000-0000-0000-000000000000' AND '07ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '08000000-0000-0000-0000-000000000000' AND '0bffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '0c000000-0000-0000-0000-000000000000' AND '0fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '10000000-0000-0000-0000-000000000000' AND '13ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '14000000-0000-0000-0000-000000000000' AND '17ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '18000000-0000-0000-0000-000000000000' AND '1bffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '1c000000-0000-0000-0000-000000000000' AND '1fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '20000000-0000-0000-0000-000000000000' AND '23ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '24000000-0000-0000-0000-000000000000' AND '27ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '28000000-0000-0000-0000-000000000000' AND '2bffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '2c000000-0000-0000-0000-000000000000' AND '2fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '30000000-0000-0000-0000-000000000000' AND '33ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '34000000-0000-0000-0000-000000000000' AND '37ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '38000000-0000-0000-0000-000000000000' AND '3bffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '3c000000-0000-0000-0000-000000000000' AND '3fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '40000000-0000-0000-0000-000000000000' AND '43ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '44000000-0000-0000-0000-000000000000' AND '47ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '48000000-0000-0000-0000-000000000000' AND '4bffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '4c000000-0000-0000-0000-000000000000' AND '4fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '50000000-0000-0000-0000-000000000000' AND '53ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '54000000-0000-0000-0000-000000000000' AND '57ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '58000000-0000-0000-0000-000000000000' AND '5bffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '5c000000-0000-0000-0000-000000000000' AND '5fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '60000000-0000-0000-0000-000000000000' AND '63ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '64000000-0000-0000-0000-000000000000' AND '67ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '68000000-0000-0000-0000-000000000000' AND '6bffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '6c000000-0000-0000-0000-000000000000' AND '6fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '70000000-0000-0000-0000-000000000000' AND '73ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '74000000-0000-0000-0000-000000000000' AND '77ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '78000000-0000-0000-0000-000000000000' AND '7bffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '7c000000-0000-0000-0000-000000000000' AND '7fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '80000000-0000-0000-0000-000000000000' AND '83ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '84000000-0000-0000-0000-000000000000' AND '87ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '88000000-0000-0000-0000-000000000000' AND '8bffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '8c000000-0000-0000-0000-000000000000' AND '8fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '90000000-0000-0000-0000-000000000000' AND '93ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '94000000-0000-0000-0000-000000000000' AND '97ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '98000000-0000-0000-0000-000000000000' AND '9bffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN '9c000000-0000-0000-0000-000000000000' AND '9fffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'a0000000-0000-0000-0000-000000000000' AND 'a3ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'a4000000-0000-0000-0000-000000000000' AND 'a7ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'a8000000-0000-0000-0000-000000000000' AND 'abffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'ac000000-0000-0000-0000-000000000000' AND 'afffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'b0000000-0000-0000-0000-000000000000' AND 'b3ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'b4000000-0000-0000-0000-000000000000' AND 'b7ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'b8000000-0000-0000-0000-000000000000' AND 'bbffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'bc000000-0000-0000-0000-000000000000' AND 'bfffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'c0000000-0000-0000-0000-000000000000' AND 'c3ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'c4000000-0000-0000-0000-000000000000' AND 'c7ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'c8000000-0000-0000-0000-000000000000' AND 'cbffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'cc000000-0000-0000-0000-000000000000' AND 'cfffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'd0000000-0000-0000-0000-000000000000' AND 'd3ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'd4000000-0000-0000-0000-000000000000' AND 'd7ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'd8000000-0000-0000-0000-000000000000' AND 'dbffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'dc000000-0000-0000-0000-000000000000' AND 'dfffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'e0000000-0000-0000-0000-000000000000' AND 'e3ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'e4000000-0000-0000-0000-000000000000' AND 'e7ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'e8000000-0000-0000-0000-000000000000' AND 'ebffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'ec000000-0000-0000-0000-000000000000' AND 'efffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'f0000000-0000-0000-0000-000000000000' AND 'f3ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'f4000000-0000-0000-0000-000000000000' AND 'f7ffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'f8000000-0000-0000-0000-000000000000' AND 'fbffffff-ffff-ffff-ffff-ffffffffffff'  
BETWEEN 'fc000000-0000-0000-0000-000000000000' AND 'ffffffff-ffff-ffff-ffff-ffffffffffff'  

You get the idea.
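
What's this good for? Mostly fanning work out in parallel. Here's a sketch with Sequel, one thread per range, counting the rows in each quarter of the keyspace (the widgets table and DATABASE_URL are stand-ins for your own):

require 'sequel'

DB = Sequel.connect(ENV.fetch('DATABASE_URL'))

# Sequel's connection pool is thread-safe, so each thread
# can scan its own disjoint slice of the uuid keyspace.
counts = sql_uuid_ranges(4).map do |range|
  Thread.new { DB["SELECT count(*) FROM widgets WHERE id #{range}"].get }
end.map(&:value)
# => one count per quarter of the keyspace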

ActiveRecord ↔︎ Sequel raw SQL cheat sheet

Seamus Abshere

UPDATED with fixes from @jeremyevans

Going back and forth between ActiveRecord and Sequel can be very confusing... especially because the Sequel documentation assumes that you use "datasets" instead of raw SQL.

What                                ActiveRecord                                             Sequel
One value                           ActiveRecord::Base.connection.select_value(sql)         DB[sql].get
One row                             ActiveRecord::Base.connection.select_one(sql)           DB[sql].first
One column across many rows         ActiveRecord::Base.connection.select_values(sql)        DB[sql].select_map
Many rows as hashes                 ActiveRecord::Base.connection.select_all(sql)           DB[sql].all
Quote value                         ActiveRecord::Base.connection.quote(value)              DB.literal(value)
Quote identifier (table or column)  ActiveRecord::Base.connection.quote_column_name(ident)  DB.quote_identifier(ident)
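
For a concrete side-by-side (a sketch; it assumes both libraries are connected to the same database and a widgets table exists):

sql = "SELECT name, power FROM widgets ORDER BY id"

# ActiveRecord
ActiveRecord::Base.connection.select_one(sql)      # => {"name"=>"foo", "power"=>9000}
ActiveRecord::Base.connection.select_all(sql).to_a # => [{"name"=>"foo", ...}, ...]

# Sequel
DB[sql].first # => {:name=>"foo", :power=>9000}
DB[sql].all   # => [{:name=>"foo", ...}, ...]

Note the subtle difference: ActiveRecord returns string-keyed hashes, while Sequel returns symbol-keyed ones.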

Please email us if you think of any other ones!

How to aggregate JSONB in PostgreSQL 9.5+

Seamus Abshere

This is part of our series on PostgreSQL and things that are obvious once you see them. It's a 2017 update to our 2016 article How to merge JSON fields in Postgres.

Update: We used to call this jsonb_collect, but then we realized it was very similar to the built-in json_object_agg(name, value)... except that there is no version of that function that takes just an expression. So, copying jsonb_agg(expression), we give you...

How do you aggregate (aka merge aka combine aka collect) JSONB fields in Postgres 9.5+? You define a new aggregate jsonb_object_agg(expression):

CREATE AGGREGATE jsonb_object_agg(jsonb) (  
  SFUNC = 'jsonb_concat',
  STYPE = jsonb,
  INITCOND = '{}'
);

Here's how you use it:

# select * from greetings;
         data
-----------------------
 {"es":"Saludos"}
 {"en":"Hello"}
 {"ru":"Здравствуйте"}
(3 rows)

# select jsonb_object_agg(data) from greetings;
                        jsonb_object_agg
-------------------------------------------------------------
 { "es" : "Saludos", "en" : "Hello", "ru" : "Здравствуйте" }
(1 row)

If you're curious about the aggregate we just added, note that jsonb_concat(jsonb, jsonb) is the function backing || that was introduced in Postgres 9.5.
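
From Ruby, Sequel treats the new aggregate like any other SQL function. A sketch using the greetings table above (loading Sequel's pg_json extension gets the result back as a Ruby hash rather than a string):

# parse json/jsonb values into Ruby objects
DB.extension :pg_json

DB[:greetings].get { jsonb_object_agg(:data) }
# => {"es"=>"Saludos", "en"=>"Hello", "ru"=>"Здравствуйте"}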

It's just that simple!

How to use Postgres cursors and multi-insert in Ruby with Sequel

Seamus Abshere

This post is part of our PostgreSQL series.

SQL cursors are a great way to stream data out of a database without loading it all into memory.

Reading using a cursor

Sequel has use_cursor:

BATCH_SIZE = 1000
DB[:widgets].
  select(:id, :name, :power).
  use_cursor(rows_per_fetch: BATCH_SIZE).each do |row|
    row[:id] # whatever
  end

Writing using cursors and multi-insert

Here's a real-world example - we had to copy things from widgets to widgets_copy.

We thought we were being really clever, keeping all the data inside Postgres:

# NOTE: this is not the recommended method - see below
batches = (desired_count.to_f / BATCH_SIZE).ceil
batches.times do |i|
  offset = i * BATCH_SIZE
  DB.run <<-SQL
    INSERT INTO widgets_copy (
      id,
      name,
      power,
      created_at,
      updated_at
    ) (
      SELECT
        uuid_generate_v4(),
        name,
        power,
        now(),
        now()
      FROM widgets
      ORDER BY id
      LIMIT #{BATCH_SIZE}
      OFFSET #{offset}
    )
  SQL
end

Even with 244 GB of RAM, this sometimes stopped dead 75% of the way through. We rewrote it to pull the data all the way into Ruby, and the new version is both faster and actually finishes. We're using Sequel's multi_insert.

batch = []
now = Sequel.function :now
uuid_generate_v4 = Sequel.function :uuid_generate_v4
DB[:widgets].
  select(:name, :power).
  order(:id).
  use_cursor(rows_per_fetch: BATCH_SIZE).each do |row|
    batch << row.merge!(
      id: uuid_generate_v4,
      updated_at: now,
      created_at: now
    )
    if batch.length == BATCH_SIZE
      DB[:widgets_copy].multi_insert batch
      batch.clear
    end
  end
# don't lose the final partial batch
DB[:widgets_copy].multi_insert(batch) if batch.any?

Thanks to @dkastner and @devTristan who promoted these ideas internally!

3 reasons Citus is the best (non-Heroku) Postgres host available today

Seamus Abshere

Citus Cloud is the hosted Postgres that our app uses. We tried EnterpriseDB Postgres Plus Cloud and couldn't even get going; we used Amazon RDS for a year but migrated away about 6 months ago. Heroku Postgres wasn't bare-metal enough for us.

1. Log directly to Papertrail

[Screenshot of logging setup]

It's not a world-shattering invention, but sending your Postgres logs to a syslog host is very useful for debugging.

Citus takes feature requests and actually implements them (this was one of mine).

RDS has a weird home-grown in-browser log viewer which is useless without things like rds-pgbadger.

2. citus extension preinstalled

You know that feeling where you're so happy you can use SQL, but you worry just a bit 🙏🏽 about your horizontal scaling story? Citus is the company behind the citus extension to Postgres:

Citus parallelizes incoming queries by breaking it into multiple fragment queries which run on the worker shards in parallel [...] The workers are simply running extended PostgreSQL servers and they apply PostgreSQL’s standard planning and execution logic to run these fragment SQL queries. Therefore, any optimization that helps PostgreSQL also helps Citus.

(from Querying Distributed Tables)

Speaking of extensions, EnterpriseDB didn't even come with PostGIS the last time we checked; to get it you needed a Windows-based (!!) "StackBuilder" tool. Support was based in India and used an ancient Salesforce interface. You get the idea.

3. Everything is negotiable

Citus gave me a great price to move over from RDS.

You're not juggling Amazon on-demand and reserved instances, or forgetting your snapshot settings and paying $2000/mo for obsolete EBS snapshot storage (did that!), or being a tiny fish in the massive AWS pond.

You're not spending a month of negotiations on the phone with EnterpriseDB "execs," paying a huge up-front fee, and then seeing the project fail utterly when your top devops engineer couldn't install a basic Postgres extension.

This is a company with people like Craig Kerstiens (helped build Heroku Postgres) on the other end of the phone, live support chat, and a real value prop.

How to create an RDS instance with Terraform

Seamus Abshere

This post is part of our PostgreSQL series.

Terraform's RDS support makes it easy to create a database instance. Here's a cheatsheet:

resource "aws_db_instance" "mydb1" {  
  allocated_storage        = 256 # gigabytes
  backup_retention_period  = 7   # in days
  db_subnet_group_name     = "${var.rds_public_subnet_group}"
  engine                   = "postgres"
  engine_version           = "9.5.4"
  identifier               = "mydb1"
  instance_class           = "db.r3.large"
  multi_az                 = false
  name                     = "mydb1"
  parameter_group_name     = "mydbparamgroup1" # if you have tuned it
  password                 = "${trimspace(file("${path.module}/secrets/mydb1-password.txt"))}"
  port                     = 5432
  publicly_accessible      = true
  storage_encrypted        = true # you should always do this
  storage_type             = "gp2"
  username                 = "mydb1"
  vpc_security_group_ids   = ["${aws_security_group.mydb1.id}"]
}

Here's the security group you need:

resource "aws_security_group" "mydb1" {  
  name = "mydb1"

  description = "RDS postgres servers (terraform-managed)"
  vpc_id = "${var.rds_vpc_id}"

  # Only postgres in
  ingress {
    from_port = 5432
    to_port = 5432
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic.
  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

You can get these values from the EC2 console (don't forget them!):

variable "rds_vpc_id" {  
  default = "vpc-XXXXXXXX"
  description = "Our default RDS virtual private cloud (rds_vpc)."
}

variable "rds_public_subnets" {  
  default = "subnet-YYYYYYYY,subnet-YYYYYYYY,subnet-YYYYYYYY,subnet-YYYYYYYY"
  description = "The public subnets of our RDS VPC rds-vpc."
}

variable "rds_public_subnet_group" {  
  default = "default-vpc-XXXXXXXX"
  description = "Apparently the group name, according to the RDS launch wizard."
}

How to do histograms in PostgreSQL

Bill Morris

This post is part of our data science and PostgreSQL series.

UPDATED FOR 2017: Now with easy subquery support and a more sensible argument order!

We adapted this excellent piece by Dimitri Fontaine and turned it into a function.

It can be invoked like this:

SELECT * FROM histogram($table_name_or_subquery, $column_name);  

. . . to give sweet results like this, in a check of the distribution of 2016 political contributions in Vermont:

fec=# SELECT * FROM histogram('(SELECT * FROM small_donors_vt LIMIT 50000)', 'transaction_amt');

 bucket |   range   | freq |       bar       
--------+-----------+------+-----------------
      1 | [0,9]     | 2744 | ******
      2 | [10,19]   | 5630 | *************
      3 | [20,29]   | 6383 | ***************
      4 | [30,39]   | 1290 | ***
      5 | [40,49]   |  369 | *
      6 | [50,59]   | 3541 | ********
      7 | [60,69]   |  174 | 
      8 | [70,79]   |  313 | *
      9 | [80,89]   |  171 | 
     10 | [90,99]   |   65 | 
     11 | [100,109] | 2363 | ******
     12 | [110,119] |   51 | 
     13 | [120,129] |  115 | 
     14 | [130,139] |   32 | 
     15 | [140,146] |   11 | 
     16 | [150,159] |  187 | 
     17 | [160,169] |   24 | 
     18 | [170,177] |   33 | 
     19 | [180,189] |   19 | 
     20 | [191,199] |   24 | 
     21 | [200,200] |  795 | **

Use it yourself by adding this to your postgres setup:

CREATE OR REPLACE FUNCTION histogram(table_name_or_subquery text, column_name text)  
RETURNS TABLE(bucket int, "range" numrange, freq bigint, bar text)  
AS $func$  
BEGIN  
RETURN QUERY EXECUTE format('  
  WITH
  source AS (
    SELECT * FROM %s
  ),
  min_max AS (
    SELECT min(%s) AS min, max(%s) AS max FROM source
  ),
  histogram AS (
    SELECT
      width_bucket(%s, min_max.min, min_max.max, 20) AS bucket,
      numrange(min(%s)::numeric, max(%s)::numeric, ''[]'') AS "range",
      count(%s) AS freq
    FROM source, min_max
    WHERE %s IS NOT NULL
    GROUP BY bucket
    ORDER BY bucket
  )
  SELECT
    bucket,
    "range",
    freq::bigint,
    repeat(''*'', (freq::float / (max(freq) over() + 1) * 15)::int) AS bar
  FROM histogram',
  table_name_or_subquery,
  column_name,
  column_name,
  column_name,
  column_name,
  column_name,
  column_name,
  column_name
  );
END;
$func$ LANGUAGE plpgsql;

Note:

  1. You don't need to use a subquery - you can also just provide a table name as the first argument.
  2. Subqueries are useful for sampling. For example, (SELECT * FROM bigtable ORDER BY RANDOM() LIMIT 500). Don't forget parentheses!
  3. You can call this as SELECT histogram() or SELECT * FROM histogram(). The latter form is much more legible!
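
Since the rest of this series leans on Sequel, here's how a call might look from Ruby (a sketch; it assumes the function above is installed and DB is a Sequel database handle):

rows = DB['SELECT * FROM histogram(?, ?)',
          '(SELECT * FROM small_donors_vt LIMIT 50000)',
          'transaction_amt'].all

rows.each do |r|
  puts format('%3d %-12s %6d %s', r[:bucket], r[:range], r[:freq], r[:bar])
end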

Happy querying!

How to preview PostGIS maps on your command line

Bill Morris on

This is part of our practical cartography and PostgreSQL series. Put a map on it!

Sometimes it's a pain to open up QGIS and load a PostGIS-enabled DB. Sometimes I don't feel like writing a custom tileserver and hooking it up to Leaflet or Mapbox GL just so I can see if my map looks right.

Sometimes I use the psql command line and a nifty tool by Morgan Herlocker called "geotype" to view my map data.

npm install -g geotype  

. . . which enables fast and simple maps like this:

[geotype ASCII rendering of New York state]

Yep. That's New York, alright.

[geotype ASCII rendering of Washington, DC]

. . . and that sure looks like the population distribution of the District of Columbia.

These maps are nothing to show to customers, but they make QA/QC a lot easier. Here's the syntax, piping psql output directly into geotype:

psql $DB_URL -t -c "SELECT ST_AsGeoJSON(ST_Collect(the_geom)) FROM mytable" | geotype  

(The -t flag suppresses psql's headers and footers, and ST_Collect() merges the geometries into a single GeoJSON object, which is the kind of input geotype can read.)

Happy mapping!

Postgres strftime (or: how to group by month)

Seamus Abshere

This post is part of our data science and PostgreSQL series.

"How do you do strftime in postgres?"

The answer: to_char(date, format).

If you want to group by month, this is what you're looking for:

psql=> select count(*), to_char(created_at, 'YYYY-MM') from employees group by to_char(created_at, 'YYYY-MM') order by to_char(created_at,'YYYY-MM') desc;  
 count | to_char
-------+---------
    27 | 2016-08
    32 | 2016-07
    58 | 2016-06
    17 | 2016-05
    57 | 2016-04
    44 | 2016-03
    28 | 2016-02
    45 | 2016-01
    10 | 2015-12
    10 | 2015-11
    24 | 2015-10
    15 | 2015-09
    32 | 2015-08
    38 | 2015-07
    31 | 2015-06
    18 | 2015-05
    19 | 2015-04
     5 | 2015-03
     8 | 2015-02
    10 | 2015-01
     7 | 2014-12
    22 | 2014-11
(22 rows)
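
If you'd rather keep a real timestamp than text (handy for charting), date_trunc('month', created_at) groups the same way. And from Ruby, a Sequel version of the query above might look like this (a sketch, assuming the same employees table and a DB handle):

month = Sequel.function(:to_char, :created_at, 'YYYY-MM')

DB[:employees].
  group_and_count(month.as(:month)).
  order(Sequel.desc(:month)).
  all
# => [{:count=>27, :month=>"2016-08"}, {:count=>32, :month=>"2016-07"}, ...]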

That's it.