Startup hacks and engineering miracles from your exhausted friends at Faraday

Mountains of Census geodata for all

Bill Morris

U.S. Census data gives our modeling a good predictive boost, and it's a robust quality assurance tool for all the third-party data we've got flowing through our wires.

The Census offers its geographic data in easy-to-get, familiar formats via the TIGER portal, but distribution is split up by state for the largest datasets: blocks and block groups. There's a pretty simple reason for this: they're big. The census block shapefile for Indiana alone is 116MB compressed.


Ours is probably not a common use case, but we need all of the blocks and block groups in our database - merged, indexed and queryable. It took a significant amount of work to get them there, so in case anyone else needs them too, we're sharing national 2015 datasets in PostGIS dumpfile format, downloadable and ready to use here:
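For anyone who'd rather rebuild this from TIGER directly, the per-state fetch is the mechanical part of that work. Here's a sketch of how the download URLs can be generated; the URL pattern and the FIPS sample below are assumptions based on the 2015 TIGER layout, so verify them against the portal before relying on them.

```python
# Sketch: expand per-state TIGER 2015 download URLs from state FIPS codes.
# The URL templates below are assumed from the TIGER2015 directory layout.

# A few state FIPS codes for illustration (a full run covers every state).
STATE_FIPS = ["01", "02", "04", "18", "36"]

BLOCK_URL = "https://www2.census.gov/geo/tiger/TIGER2015/TABBLOCK/tl_2015_{fips}_tabblock10.zip"
BG_URL = "https://www2.census.gov/geo/tiger/TIGER2015/BG/tl_2015_{fips}_bg.zip"

def tiger_urls(template, fips_codes):
    """Expand a TIGER URL template once per state FIPS code."""
    return [template.format(fips=f) for f in fips_codes]

block_urls = tiger_urls(BLOCK_URL, STATE_FIPS)
bg_urls = tiger_urls(BG_URL, STATE_FIPS)
```

Each downloaded shapefile then gets loaded and appended into a single table (e.g. with `shp2pgsql`), which is the merge step the dumps below save you from.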


Census block groups

.pg_dump (426MB) | .sql (1.2GB)


Census blocks

.pg_dump (4.7GB) | .sql (12GB)


Add these to your local PostgreSQL database like so:

pg_restore --no-owner --no-privileges --dbname <dbname> <filename>.pg_dump

# OR

psql <dbname> -f <filename>.sql  
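Once restored, the tables are ready for spatial queries. Here's a minimal sketch of a point-in-polygon lookup; the table and column names (`bg`, `geoid`, `geom`) are assumptions, so check them against the actual CREATE TABLE schemas that ship with the dumps.

```python
# Sketch: build a point-in-polygon query against the restored block groups.
# Table/column names are assumptions; TIGER geometries use NAD83 (SRID 4269).

def point_lookup_sql(lon, lat, table="bg"):
    """Return SQL that finds the block group containing a lon/lat point."""
    return (
        f"SELECT geoid FROM {table} "
        f"WHERE ST_Contains(geom, ST_SetSRID(ST_MakePoint({lon}, {lat}), 4269));"
    )

# Against a live database, run it with any Postgres driver, e.g. psycopg2:
#   import psycopg2
#   with psycopg2.connect(dbname="census") as conn:
#       with conn.cursor() as cur:
#           cur.execute(point_lookup_sql(-86.15, 39.77))
#           row = cur.fetchone()
print(point_lookup_sql(-86.15, 39.77))
```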

To keep things simple, these are just geometries and GEOIDs (CREATE TABLE schemas can be perused here). Detailed analysis will require joining attributes separately.
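The reason geometries plus GEOIDs are enough: a block-group GEOID is a fixed-width composite key, so any attribute table keyed on GEOID joins straight onto these geometries. A sketch of the structure (the sample GEOID below is illustrative, not taken from the dumps):

```python
# Sketch: a 12-character block-group GEOID decomposes into fixed-width parts.

def split_bg_geoid(geoid):
    """Split a 12-character block-group GEOID into its components."""
    assert len(geoid) == 12
    return {
        "state": geoid[0:2],       # state FIPS code
        "county": geoid[2:5],      # county FIPS code
        "tract": geoid[5:11],      # census tract code
        "block_group": geoid[11],  # block group digit
    }

parts = split_bg_geoid("181570054003")
```

Attribute downloads from the Census API or ACS tables carry the same key, so attaching demographics is a plain SQL join on GEOID.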

Side note: I can't recommend censusreporter.org enough for census-based sanity checks.

Happy mapping!