AI, startup hacks, and engineering miracles from your friends at Faraday

Geochunk: fast, intelligent splitting for piles of address data

Bill Morris

This post is part of our practical cartography and data science series.

The problem: you want to split up a few million U.S. address records into equal-sized chunks that retain spatial hierarchy. You want to do this with nothing but the street addresses themselves (geocoding is expensive!). Maybe you want to do this as part of a map/reduce process (we certainly do), maybe you want to do some sampling, who knows?

The solution: Muthaflippin' Geochunk

Anyone who's ever used U.S. ZIP codes as a way to subdivide datasets can tell you: 60608 (pop 79,607) is a totally different beast than 05851 (pop 525). They're not census tracts; it's not really appropriate to compare them statistically or thematically.

Our solution - largely the work of platform wizard and Rust enthusiast Eric Kidd - is to bake census data into a tool that does the splitting for you at a level that allows for easy comparison. More specifically:

It provides a deterministic mapping from zip codes to "geochunks" that you can count on remaining stable.
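The intuition: ZIP prefixes nest spatially (05851 shares a sectional center with every other 058xx ZIP), so walking ZIPs in lexical order and cutting a new chunk each time you hit a population target gives you chunks that are both equal-size and roughly contiguous. A minimal sketch of that idea in Rust (our illustration, not geochunk's actual source):

// Illustration only, not geochunk's actual algorithm: sort ZIPs
// lexically (prefixes nest spatially), then greedily accumulate
// census population until each chunk reaches the target size.
fn assign_chunks(mut zips: Vec<(String, u64)>, target: u64) -> Vec<(String, usize)> {
    zips.sort(); // lexical order keeps prefix "siblings" adjacent
    let (mut chunk, mut pop) = (0, 0);
    let mut assignments = Vec::with_capacity(zips.len());
    for (zip, zip_pop) in zips {
        if pop >= target {
            chunk += 1; // start a new chunk at the population target
            pop = 0;
        }
        pop += zip_pop;
        assignments.push((zip, chunk));
    }
    assignments
}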

Check out the Jupyter notebook for a detailed explanation of the algorithm; in practice it works like so:

Install

Install Rust first if you don't have it:

curl https://sh.rustup.rs -sSf | sh

. . . then geochunk, using cargo, the Rust package manager:

cargo install geochunk

. . . or install from one of the prepackaged binaries.

Use 1: Indexing

Build a table that assigns every U.S. ZIP code to a geochunk containing roughly 250,000 people:

geochunk export zip2010 250000 > chunks_of_250k_people.csv
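The output is just a lookup table, so it drops easily into whatever is doing your partitioning. For instance, a minimal sketch with the csv crate (we're assuming the two columns are the ZIP and its chunk ID; check the actual header):

use std::collections::HashMap;

// Load the exported ZIP -> geochunk table for O(1) lookups while
// partitioning records. The column layout is our assumption.
fn load_chunks(path: &str) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
    let mut rdr = csv::Reader::from_path(path)?;
    let mut map = HashMap::new();
    for row in rdr.records() {
        let row = row?;
        map.insert(row[0].to_string(), row[1].to_string());
    }
    Ok(map)
}

// Usage: route each record by its ZIP's chunk.
// let chunks = load_chunks("chunks_of_250k_people.csv")?;
// let chunk = chunks.get("05851");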

Use 2: List processing

Alternatively, let's try a pipeline example that uses geochunk csv: say you want to parallel-process every address in the state of Colorado, and you need equal-size, roughly contiguous slices to do it.

  • Grab the statewide address file from OpenAddresses:

wget -c https://s3.amazonaws.com/data.openaddresses.io/runs/283082/us/co/statewide.zip && unzip statewide.zip

  • Pipe the full file through geochunk, into slices of about 250,000 people each:

cat us/co/statewide.csv | geochunk csv zip2010 250000 POSTCODE > statewide_chunks_250k.csv

. . . and now you have 2 million addresses, chopped into ~8 equally-sized slices with rough contiguity:

[Map: Denver-area addresses colored by geochunk]

Geochunk runs at this scale in 1.38s (have you heard us evangelizing about Rust yet?), leaving you plenty of time for the real processing.
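From there, fanning work out is just grouping on the chunk column. A sketch (we're assuming geochunk csv appends the chunk ID as the last column of each row; adjust the index to match your output):

use std::collections::HashMap;

// Group address rows by geochunk so each group can go to its own
// worker. The chunk-column position is an assumption for this sketch.
fn group_by_chunk(path: &str) -> Result<HashMap<String, Vec<csv::StringRecord>>, Box<dyn std::error::Error>> {
    let mut rdr = csv::Reader::from_path(path)?;
    let mut groups: HashMap<String, Vec<csv::StringRecord>> = HashMap::new();
    for row in rdr.records() {
        let row = row?;
        let chunk = row[row.len() - 1].to_string();
        groups.entry(chunk).or_default().push(row);
    }
    Ok(groups)
}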

This tool is serious dogfood for us; it's baked into our ETL system, and we use it to try to make a tiny dent in the Modifiable Areal Unit Problem. We hope you'll find it useful too.

scrubcsv: now with null value removal

Seamus Abshere

This is part of our series on data science because it belongs in your toolchain. Happy Null Removal!

The latest version of scrubcsv has built-in null value removal:

$ cat a.csv
name,breed,age
jerry,beagle,n/a
tater,null,1

$ scrubcsv -n 'null|n/a' a.csv
name,breed,age
jerry,beagle,
tater,,1

See how null and n/a went away?
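The behavior is easy to picture: presumably a field is nulled out when the whole field matches your pattern. A sketch of that logic with the regex crate (our illustration, not scrubcsv's actual source):

use regex::Regex;

// Blank out any field that matches the null pattern in its entirety.
fn scrub_field<'a>(field: &'a str, null_re: &Regex) -> &'a str {
    if null_re.is_match(field) { "" } else { field }
}

fn main() {
    // Anchored, so "null" matches but "nullable" survives.
    let null_re = Regex::new(r"^(null|n/a)$").unwrap();
    assert_eq!(scrub_field("n/a", &null_re), "");
    assert_eq!(scrub_field("beagle", &null_re), "beagle");
}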

Get the latest version with

$ cargo install scrubcsv -f

How we made our CSV processing 142x faster

Bill Morris

This post is part of our data science hacks series

At Faraday, we've long used csvkit to understand, transform, and beat senseless our many streams of data. However, even this inimitable swiss army knife can be improved on - we've switched to xsv.

xsv is a fast CSV-parsing toolkit written in Rust that mostly matches the functionality of csvkit (including the clutch ability to pipe between modules), with a few extras tacked on (like smart sampling). Did I mention it's fast? In a head-to-head comparison, I ran the stats module of xsv against csvstat from csvkit, on a 30k-line, 400-column CSV file:

  • Python-based csvkit chews through it in a respectable-and-now-expected 4m16s.

  • xsv takes 1.8 seconds. I don't even have time for a sip of my coffee.

The difference between csvkit and xsv is partly a matter of scale; both tools are plenty fast on smaller datasets. But once you get into the 10MB-and-up range, xsv pulls away dramatically.
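Part of what makes this possible is streaming: stats like mean and standard deviation can be computed in a single pass with constant memory. A generic sketch of that kind of one-pass computation (Welford's algorithm; our illustration, not xsv's actual code):

// One-pass mean and (population) standard deviation.
struct Running {
    n: u64,
    mean: f64,
    m2: f64,
}

impl Running {
    fn new() -> Self {
        Running { n: 0, mean: 0.0, m2: 0.0 }
    }
    fn push(&mut self, x: f64) {
        self.n += 1;
        let delta = x - self.mean;
        self.mean += delta / self.n as f64;
        self.m2 += delta * (x - self.mean);
    }
    fn stddev(&self) -> f64 {
        (self.m2 / self.n as f64).sqrt()
    }
}

fn main() {
    let mut stats = Running::new();
    for x in 1..=1000 {
        stats.push(x as f64);
    }
    // ~500.5 and ~288.67 — compare the id column in the demo below.
    println!("mean {} stddev {}", stats.mean, stats.stddev());
}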

If you've been using csvkit forever (like me), or if you want to be able to transform and analyze CSVs without loading them into a DB, give xsv a shot:

Install Rust

curl https://sh.rustup.rs -sSf | sh

. . . which also gives you the Rust package manager cargo, which lets you:

Install xsv

cargo install xsv

Then be sure your PATH is configured correctly:

export PATH=~/.cargo/bin:$PATH

. . . and try it out on a demo CSV with 10k rows, some messy strings, and multiple data types:

curl https://gist.githubusercontent.com/wboykinm/044e2af62fc0c7f77e17f6ccd55b8fb0/raw/fca391e6c03a06a7be770fefca6c47a9acdd2305/mock_data.csv \
| xsv stats \
| xsv table

(xsv table formats the data so it's readable in the console):

field           type     sum                 min                  max                  min_length  max_length  mean                stddev
id              Integer  5005000             1                    1000                 1           4           500.49999999999994  288.6749902572106
first_name      Unicode                      Aaron                Willie               3           11                              
last_name       Unicode                      Adams                Young                3           10                              
email           Unicode                      aadamsp5@senate.gov  wwrightd8@upenn.edu  12          34                              
gender          Unicode                      Female               Male                 4           6                               
ip_address      Unicode                      0.111.40.87          99.50.37.244         9           15                              
value           Unicode                      $1007.98             $999.37              0           8                               
company         Unicode                      Abata                Zoovu                0           13                              
lat             Float    243963.82509999987  -47.75034            69.70287             0           9           24.42080331331331   24.98767816017553
lon             Float    443214.19009999954  -179.12198           170.29993            0           10          44.36578479479489   71.16647723898215
messed_up_data  Unicode                      !@#$%^&*()           𠜎𠜱𠝹𠱓𠱸𠲖𠳏       0           393                             
version         Unicode                      0.1.1                9.99                 3           14                              

Happy parsing!

scrubcsv: clean CSVs, drop bad lines

Seamus Abshere

This is part of our series on things that are obvious once you see them - and our data science series because it belongs in your toolchain.

Lies, damn lies, and commercial CSV export modules. Who wrote these things? On what planet would this be acceptable? Whatever.

Name,What's wrong
"Robert "Bob" Smith",quotes inside quotes
Robert "Bob" Smith,quotes in the middle
Robert Bob" Smith,unmatched quote

Ruby dies immediately trying to read it:

$ irb
irb(main):001:0> require 'csv'
=> true
irb(main):002:0> CSV.read('broken.csv')
CSV::MalformedCSVError: Missing or stray quote in line 2

Introducing scrubcsv, a lightning-fast static binary written in Rust that best-effort parses CSV and then immediately dumps back out 100% guaranteed standards-compliant CSV. Top speed? About 67 MB/s.

$ scrubcsv broken.csv > fixed.csv
4 rows (0 bad) in 0.00 seconds, 787.13 KiB/sec

$ cat fixed.csv
Name,What's wrong
"Robert Bob"" Smith""",quotes inside quotes
"Robert ""Bob"" Smith",quotes in the middle
"Robert Bob"" Smith",unmatched quote

It uses BurntSushi's world-beating CSV parser which is almost certainly faster than your SSD.
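If you want the same trick in your own Rust code, the csv crate makes "best-effort in, strict out" a few lines. A sketch of the idea (our illustration, not scrubcsv's actual source):

use std::error::Error;

// Parse leniently, then let the writer re-quote everything on the way
// out — the writer is what guarantees standards-compliant output.
fn scrub(input: &str, output: &str) -> Result<(), Box<dyn Error>> {
    let mut rdr = csv::ReaderBuilder::new()
        .flexible(true)     // tolerate rows with ragged field counts
        .has_headers(false) // pass the header through like any row
        .from_path(input)?;
    let mut wtr = csv::Writer::from_path(output)?;
    for record in rdr.records() {
        match record {
            Ok(rec) => wtr.write_record(&rec)?, // re-quoted on write
            Err(_) => {}                        // drop unparseable rows
        }
    }
    wtr.flush()?;
    Ok(())
}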

Rewinding time with rr & Rust to debug a terrible error message

Eric Kidd

Yesterday morning, I was confronted with a truly terrible error message from a Rust program I was developing:

$ cage up --init
...
Error: error getting the project's state from Docker
ExpectedError("Object", "null")

The first part of that error message comes from cage (our open source tool for developing Docker apps with lots of microservices), which tries to be as informative as it can. The second part of the error comes from rustc-serialize, and it means I'm about to have a very bad day.

Let's track this down, fix the bug, and improve the error message as much as possible. Along the way, we'll see how to run the debugger backwards (and why you'd want to), and how to enforce informative error messages in Rust.

Preliminary investigation

Here, cage is talking directly to the Docker daemon, and it's trying to get information about the containers railshello_db_1 and railshello_web_1. You can do this manually using docker:

$ docker inspect railshello_db_1 railshello_web_1
[
    {
        "Id": "ab759d6b03d1c035ad4d6ef3b7b72427b6d2911b87be7734a8fee5c6f912fa54",
        "Created": "2016-10-25T11:57:05.845095715Z",
        "Path": "/docker-entrypoint.sh",
        "Args": [
            "postgres"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,

This continues for a total of 428 lines of JSON, most of it poorly documented. We're using rustc-serialize to automatically deserialize this JSON into Rust data structures which look like:

#[derive(Debug, Clone, RustcEncodable, RustcDecodable)]
#[allow(non_snake_case)]
pub struct ContainerInfo {
    pub AppArmorProfile: String,
    pub Args: Vec<String>,
    pub Config: Config,
    pub Created: String,
    pub Driver: String,
    // ...
    pub Mounts: Vec<Mount>,
    pub Name: String,
    pub NetworkSettings: NetworkSettings,
    pub Path: String,
    pub ProcessLabel: String,
    pub ResolvConfPath: String,
    pub RestartCount: u64,
    pub State: State,
}

We have over 160 lines of this code, and ExpectedError("Object", "null") means that we have a null somewhere, but the corresponding field is demanding a full-fledged JSON Object. In other words, I wrote something like:

    pub Config: Config,

…that says we want an actual Config, when I should have wrapped that type in Option to specify that it's allowed to be null:

    pub Config: Option<Config>,
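A tiny standalone demo of the difference (hypothetical field name, same idea):

extern crate rustc_serialize;

use rustc_serialize::json;

// With the Option, a JSON `null` decodes cleanly to None; without it,
// you get exactly the kind of DecoderError we're chasing.
#[derive(RustcDecodable)]
struct Wrapper {
    config: Option<u32>,
}

fn main() {
    let ok: Wrapper = json::decode(r#"{"config": null}"#).unwrap();
    assert!(ok.config.is_none());

    let err = json::decode::<u32>("null");
    assert!(err.is_err()); // ExpectedError, as in our crash
}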

It's days like this that make me curious about how much startup capital is needed to produce artisanal Vermont cheese. I've met some former bankers who are really into that, and they seem happy.

Try #1: Transforming JSON with jq

Whenever something goes wrong with JSON, I immediately reach for jq. This allows me to drill down into JSON data structures and transform them using the command line:

$ docker inspect railshello_db_1 | jq '.[] | .Config'
{
  "Hostname": "ab759d6b03d1",
  "Domainname": "",
  "User": "",
  "AttachStdin": false,
  "AttachStdout": false,
  "AttachStderr": false,
  ...

After about 10 minutes with jq and grep, I can't find a suspicious null anywhere. Obviously, the bug is right in front of me, but I'm not seeing it. This usually means that either I'm looking in the wrong place, or one of my assumptions is wrong. I need better data about what's going on, or else I'll waste hours speculating.

Try #2: Time-travelling debugging!

When logic and reason fail, it's time to reach for the debugger. We have a little test program that we can use, containing the following function:

fn find_all_exported_ports() -> Result<()> {
    let docker = try!(Docker::connect_with_defaults());
    let containers = try!(docker.get_containers(false));
    for container in &containers {
        let info = try!(docker.get_container_info(&container));
        let ports: Vec<String> = info.NetworkSettings.Ports.keys()
            .cloned()
            .collect();
        println!("{}: {}", &info.Name, ports.join(", "));
    }
    Ok(())
}

Unfortunately, do you remember all those structs with declarations like this?

#[derive(Debug, Clone, RustcEncodable, RustcDecodable)]

Those declarations mean that we're going to be dealing with generated code that calls more generated code. And because Rust doesn't have exceptions, we can't tell it to "break on exception" and allow us to poke around.

This seems like a great opportunity to try out rr, which allows us to run programs backwards! Let's build our test program and record an execution trace using rr record:

$ cargo build --example findports
$ rr record target/debug/examples/findports

Now let's load that execution trace into the debugger using rr replay:

$ rr replay
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.04) 7.11.1
...
0x00007fd531b82cc0 in _start () from /lib64/ld-linux-x86-64.so.2
(rr)

From here, we can replay the crash using c (for "continue"):

(rr) c
Continuing.
Error: ExpectedError("Object", "null")

Program received signal SIGKILL, Killed.
0x0000000070000002 in ?? ()

So now that we have a crash, let's work backwards to where it all went wrong. Ideally, we want the name of the JSON field that contained the unexpected null. This will be named something like "Config".

So now we need to dig through rustc-serialize/src/json.rs and find out how it handles field names. After some poking around, this looks promising:

    fn read_struct_field<T, F>(&mut self,
                               name: &str,
                               _idx: usize,
                               f: F)
                               -> DecodeResult<T> where
        F: FnOnce(&mut Decoder) -> DecodeResult<T>,

So what we want to do is look at the last couple of read_struct_field calls before the program failed, find the value of name they received, and check that against our Rust data structures.

Setting break points on Rust functions is tricky, because they have huge compound names. Fortunately, we can use rbreak to set a breakpoint using a regular expression as follows:

(rr) rbreak rustc_serialize::json::.*read_struct_field.*

This produces three pages of output informing me it set 18 breakpoints on different versions of this function. This happens because the Rust compiler creates multiple copies of "generic" functions for performance.
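(If monomorphization is new to you: the compiler stamps out a separate copy of a generic function for each concrete type it's used with, so one source function can become many symbols in the binary. A trivial illustration:)

fn identity<T>(x: T) -> T { x }

fn main() {
    // Two instantiations, two distinct functions in the compiled
    // binary: identity::<u64> and identity::<&str>. rbreak's regex
    // matches every copy, hence the 18 breakpoints above.
    let a = identity(42u64);
    let b = identity("hello");
    println!("{} {}", a, b);
}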

And now for the fun part. Backwards execution! We use rc ("reverse continue") to run the program backwards until we hit a breakpoint:

(rr) rc
Continuing.

Breakpoint 17, rustc_serialize::json::{{impl}}::read_struct_field<std::collections::hash::map::HashMap<collections::string::String, std::collections::hash::map::HashMap<collections::string::String, collections::string::String, std::collections::hash::map::RandomState>, std::collections::hash::map::RandomState>,fn(&mut rustc_serialize::json::Decoder) -> core::result::Result<std::collections::hash::map::HashMap<collections::string::String, std::collections::hash::map::HashMap<collections::string::String, collections::string::String, std::collections::hash::map::RandomState>, std::collections::hash::map::RandomState>, rustc_serialize::json::DecoderError>> (self=0x7fff57e51128, name=...,
    _idx=14, f=0x0)
    at /home/emk/.cargo/registry/src/github.com-1ecc6299db9ec823/rustc-serialize-0.3.19/src/json.rs:2227
2227	    fn read_struct_field<T, F>(&mut self,
(rr)

Now let's print out the name argument to read_struct_field:

(rr) set print pretty on
(rr) p name
$2 = {
  data_ptr = 0x5597aeec409f <str17130> "VolumesWorkingDirMountSourceDestinationModeRWPropagationBridgeEndpointIDGateway",
  length = 7
}

This is a raw Rust &str slice, pointing to 7 bytes in a longer string literal. This tells us we're looking at a field named Volumes. Some quick digging around reveals such a field in struct Config:

#[derive(Debug, Clone, RustcEncodable, RustcDecodable)]
#[allow(non_snake_case)]
pub struct Config {
    pub AttachStderr: bool,
    pub AttachStdin: bool,
    pub AttachStdout: bool,
    // ...
    pub Volumes: HashMap<String, UnspecifiedObject>,
    pub WorkingDir: String,
}

This looks like it might be our culprit! We can change this to:

    pub Volumes: Option<HashMap<String, UnspecifiedObject>>,

…and re-run our program:

$ cargo run --example findports
   Compiling docker v0.0.41 (file:///home/emk/w/src/cage/rust-docker)
    Finished debug [unoptimized + debuginfo] target(s) in 9.72 secs
     Running `target/debug/examples/findports`
/railshello_web_1: 3000/tcp
/railshello_db_1: 5432/tcp

Yay! It works! But this won't be the last time we find this problem, so let's try to improve this error message a bit.

Chaining errors in Rust

We need to look at the code calling rustc_serialize::json::decode and see if we can provide a little bit more context for this error. The relevant code is in a fork of rust-docker and it uses the excellent error-chain library to handle errors:

error_chain! {
    // Provide wrappers for other libraries' errors.
    foreign_links {
        env::VarError, EnvVar;
        hyper::Error, Hyper;
        io::Error, Io;
        rustc_serialize::json::DecoderError, Json;
    }

    // Define our own errors.
    errors {
        CouldNotConnect(host: String) {
            description("could not connect to Docker")
            display("could not connected to Docker at '{}'", &host)
        }

        // ...
    }
}

error_chain! is a giant macro that generates a bunch of error-handling code at compile time. The foreign_links section, in particular, allows us to automatically convert other libraries' errors into our own docker::Error type.
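Roughly speaking, each foreign_links entry expands to a From impl, which is what lets try! convert those errors into ours implicitly. An illustration in that spirit (not the macro's literal expansion):

// Hand-written equivalent of what a foreign_links entry provides:
impl From<io::Error> for Error {
    fn from(err: io::Error) -> Error {
        Error::from_kind(ErrorKind::Io(err))
    }
}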

But rustc_serialize has horrible error messages, so we don't want to be allowed to automatically convert its errors into our own. We want Rust to force us to add useful information. So let's remove this line from the foreign_links block:

rustc_serialize::json::DecoderError, Json;

…and add a new custom error type to the errors block:

ParseError(wanted: &'static str, input: String) {
    description("error parsing JSON from Docker")
    display("error parsing JSON for {} from Docker", wanted)
}

Here, &'static str means wanted is a literal string slice, and String means that input is a regular owned string. We don't display input because it's huge, but we include it in the error so that our caller can log it if desired.

When we re-run cargo test, we get the following errors:

error[E0277]: the trait bound `errors::Error: std::convert::From<rustc_serialize::json::DecoderError>` is not satisfied
   --> src/docker.rs:346:20
    |
346 |         let info = try!(json::decode::<ContainerInfo>(&body));
    |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ trait `errors::Error: std::convert::From<rustc_serialize::json::DecoderError>` not satisfied

This is telling us that we can no longer automatically convert rustc_serialize::json::DecoderError into our own Error type. This is exactly what we wanted! Now we need to change:

let info = try!(json::decode::<ContainerInfo>(&body));

…to:

let info = try!(json::decode::<ContainerInfo>(&body)
    .chain_err(|| ErrorKind::ParseError("ContainerInfo", body)));

This will wrap the json::DecoderError in our new ParseError type, giving us a slightly less awful error message:

Error: error parsing JSON for ContainerInfo from Docker
ExpectedError("Object", "null")

I should probably include a link to this blog post in that error message for the next person to hit one of these bugs! Unfortunately, we can't do much better without patching rustc-serialize.

If you maintain Rust code, please try to provide helpful, high-quality error messages with plenty of context. There are some great tools for doing this, but not all libraries use them.