The New Cartography
You’re right: it’s done by robots
This post is based on the impeccable research of cartographer Justin O’Beirne. Justin’s excellent article on this topic dives deep into the cartographic background. This post talks more about the tech.
Google invented a new kind of cartography to represent something that everyone has an intuitive sense of, but which was previously too elusive to easily show on maps. And to do it, they had to combine really subtle details from three different information sources, gathered in three very different ways.
There are lots of names for these spaces: downtown, business corridor, high street, or numerous others. Everyone knows them: little patches where shops and restaurants and bars all cluster together on one section of street. The very profusion of names, and the frequent lack of any agreed one, echoes how hard these areas are to identify and pin down, but they’re crucially important to the way that people relate to and navigate urban areas. A 2011 study based in San Francisco found that these zones — which Google has peculiarly decided to call “areas of interest” or AOIs — are the primary reference points that participants consistently relied on to express space and location in their home city.
Everyone understands what they are, but how do you identify an AOI on a map? Google being Google, the approach had to be automatable and had to scale well when applied to all sorts of different urban areas around the world. It turns out that identifying these zones takes all the different kinds of information that Google has.
The heart of the information is Google’s collection of points of interest (POIs), representing every shop, restaurant, bar, business, and other establishment which can be drawn on a map. Google’s POI database is incredibly comprehensive. Business owners are encouraged to add their establishments so that customers can find them on maps, and to add as much information as possible — like opening hours or menus — to improve their placement in search results. Map users have an incentive to make notes when places move or close, so that stale entries stop showing up in searches. Overall, this gives Google a regularly-updated compendium of all the businesses which might make up an AOI.
The second category of information is only available by travelling the streets themselves. Street View cameras photograph every storefront on the streets they cover. Shop names read from those images can then be matched against the POI database to work out exactly where each shop is located. The position of a shop’s actual storefront and entrance doesn’t always match up with the historical records for its address, especially when locations are combined or split between multiple merchants. Street View gives Google an exceedingly precise and accurate location for every business it scans.
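Google hasn’t published how this matching step works, and OCR output from street-level photos is noisy. As a rough illustration only, a crude string-similarity match between a sign read by the camera and candidate POI names might look like this (the shop names, threshold, and function are all invented for the example):

```python
from difflib import SequenceMatcher

def best_poi_match(sign_text, poi_names, threshold=0.6):
    """Return the POI name most similar to the OCR'd sign text,
    or None if nothing clears the similarity threshold."""
    def score(name):
        # ratio() is 1.0 for identical strings, 0.0 for nothing in common
        return SequenceMatcher(None, sign_text.lower(), name.lower()).ratio()
    best = max(poi_names, key=score)
    return best if score(best) >= threshold else None

pois = ["Blue Bottle Coffee", "Bi-Rite Creamery", "Tartine Bakery"]
# OCR has misread the letter O as a zero, but the match still succeeds:
match = best_poi_match("BLUE BOTTLE C0FFEE", pois)
```

A production system would of course be far more sophisticated, but the shape of the problem is the same: tolerate noisy text, and refuse to match when confidence is low.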
Finally, overhead imagery from satellites or aerial surveys lets Google use machine learning to identify the physical positions of buildings and the actual layout of streets. Combining this with Street View information paints an accurate picture of where storefronts are in relation to the larger structures they occupy. Once Google has pinned down exactly what’s where and how locations relate to the larger city, identifying AOIs becomes a number-crunching problem, and one for which Google’s engineering-focused culture is exceedingly well suited.
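The exact algorithm is Google’s secret, but the number-crunching plausibly resembles density-based clustering: an AOI is, roughly, a dense cluster of POIs, and an isolated business is noise. A minimal sketch of that idea, using a toy DBSCAN-style clusterer over invented storefront coordinates (the points, distance threshold, and minimum cluster size are all assumptions for illustration):

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, or -1 for noise.
    A cluster grows from any point with at least min_pts neighbours
    within distance eps."""
    labels = [None] * len(points)
    cluster = 0

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # noise: an isolated shop, not part of any AOI
            continue
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point absorbed into the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:
                queue.extend(nbrs_j)  # core point: keep expanding the cluster
        cluster += 1
    return labels

# Hypothetical storefront positions in metres (projected coordinates):
# five shops packed along one block, plus one business far away.
pois = [(0, 0), (15, 5), (30, 0), (45, 8), (60, 3), (500, 500)]
labels = dbscan(pois, eps=50, min_pts=3)
# The five clustered shops share a label; the outlier is marked -1.
```

Drawing the shaded AOI polygon would then be a matter of wrapping a hull around each cluster, weighted by whatever signals Google layers on top.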
Combining all this together gives a precise sense of where an AOI is, and makes it easy to update as these spaces gradually shift along with the rest of the urban landscape. The information that’s most easily updated — POIs — is also what provides the clearest guidance about the drifting edges of an AOI. Once enough of the POIs in a particular zone have shifted, that area can be tasked for another pass by Street View cameras. Buildings are only rarely created or destroyed, so new aerial imagery is needed least often of all.
Google Maps now includes a vitally important aspect of cities, assembled through layers upon layers of machine learning applied to a disparate collection of massive data sources. Sometimes — rarely — tech is useful for something after all.