From Real Life Magazine, November 15:
You Are Here: When maps become algorithms and algorithms become maps
Over the past decade or so, the way most of us interact with maps has been completely inverted. Once, as geographer Doreen Massey noted in For Space (2005), “the dominant form of mapping” placed “the observer, themselves unobserved, outside and above the object of the gaze.” With paper maps, this is self-evidently true: The map is already printed and doesn’t take into account your particular situation; as far as the geographic image is concerned, you are nowhere and anywhere. But with GPS-powered mapping services, the arrangement is no longer so clear-cut. Google Maps, for instance, locates us within our own gaze, on the map, as a blue dot directing its cone of perception in a particular direction. This doesn’t posit us as transcending the territory we survey, but the opposite: It offers a misleadingly clear depiction of our precise place within a particular representation of the charted world.
But in the process, this centering of ourselves on a visible map conceals a larger and more detailed map that we don’t see, one that is constantly evolving and from which certain details are surfaced as if they should take a natural pre-eminence. What appears given as a map of “what’s really there” draws from intertwined webs of surveillance that track us in ways we can’t observe and associate us algorithmically with any number of phenomena for reasons we don’t quite understand, if we are made aware of them at all. The dot doesn’t merely show us where we are; it inscribes us into a world remade on the fly for particular gazes. What we see is in part an obscure map of our character.
This can play out in ways that surprise us. For example, when a guy on Tinder searched my somewhat unusual first name on Google Maps, rather than an anticlimactic “no results found,” the app sent him directly to the New School, my alma mater, which seemed like a striking coincidence. Even though my name appears on a few websites where “New School” also appears, it is not me and I am not it. Or am I?
We can guess why that might have happened, but we can’t know for sure, and that’s concerning. Google Maps, it turns out, is rife with associational glitches, in which people become places and places become character traits. Searching “avarice” sent me straight to the Metropolitan Museum of Art; “hostility,” a small-town doctor in Orangeburg, New York. “Drab” took me to some place called BOOM!Health Prevention Center, now permanently closed. Some of these associations stem directly from users’ contributions, intentional or not: The query “What is in energy drinks” appears on Maps as a business (as documented in this viral tweet), which Street View reveals to be a nondescript suburban home, presumably occupied by someone unfamiliar with how both Red Bull and Google work. Street View itself is full of obviously incorrectly tagged photos, some accidental, some deliberate. In 2014, a Google Maps editor approved an unknown user’s request to change the name of Berlin’s Theodor-Heuss-Platz to Adolf-Hitler-Platz, which lasted for almost 24 hours.
But many associations are inexplicable from the outside. What produces or prevents them can’t be determined. It’s hard to believe that, for example, the word ambiguity doesn’t appear on any website anywhere that’s even indirectly associated with a Maps location, yet it doesn’t pull up anything. (“Google Maps can’t find ambiguity.”) The cause of these inconsistencies is opaque; whatever programmed threshold exists for potential relevance is unclear. It’s not entirely evident what the map search bar is capable of — after all, it is not absurd to envision using it to find out where a friend is or what places are commonly associated with what emotional reactions or experiences. The technology for this already exists. The question is the degree to which it is operating behind our backs, on the map of proliferating connections and data points that is behind the map that’s generated for us to see.
While it might seem like just a whimsical experiment to search ideas in the location bar, the obscure or faulty correlations it occasionally turns up illustrate an algorithmic mystery that has broader ramifications. In a small way, it hints at the conduits that link the ubiquitous forms of surveillance we are placed under to our experience of the world and how these hidden correlations become potential vulnerabilities. At the same time, it suggests how the experience of location itself has been made more subjective, inflected less by the characteristics of a place that anyone can observe and more by the tailored search results that are presented about it.
From a user’s perspective, Google Maps is basically a geocoded version of any other Google search, only the results are displayed as points on a map rather than as a list of websites. It can handle objective and subjective queries: It can show you the location of a specific street address or tell you where the “nearest” liquor store is. Even before it is asked anything, it draws from multiple available data layers to produce a map that orients users in specific ways: certain information is presented by default (commercial locations deemed relevant to the user), certain information is obscured (the names of streets or transit stops), and certain information is entirely inaccessible.....
....MUCH MORE
Also at Real Life, November 22:
Yesterday Once More: Algorithms are changing how we experience nostalgia
In 2012, Joan Serrà and a team of scientists at the Artificial Intelligence Research Institute of the Spanish National Research Council confirmed something that many had come to suspect: that music was becoming increasingly the same. Timbral variety in pop music had been decreasing since the 1960s, the team found, after using computer analytics to break down nearly half a million recorded songs by loudness, pitch, and timbre, among other variables. This convergence suggested that there was an underlying quality of consumability that pop music was gravitating toward: a formula for musical virality.
These findings marked a watershed moment for the music discovery industry, a billion-dollar endeavor to generate descriptive metadata of songs using artificial intelligence so that algorithms can recommend them to listeners. In the early 2010s, the leading music-intelligence company was the Echo Nest, which Spotify acquired in 2014. Founded in the MIT Media Lab in 2005, the Echo Nest developed algorithms that could measure recorded music using a set of parameters similar to Serrà’s, including ones with clunky names like acousticness, danceability, instrumentalness, and speechiness. To round out their models, the algorithms could also scour the internet for and semantically analyze anything written about a given piece of music. The goal was to design a complete fingerprint of a song: to reduce music to data to better guide consumers to songs they would enjoy....
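The “fingerprint” idea described above can be illustrated with a toy sketch: represent each song as a vector of Echo Nest-style parameters (the feature names below come from the article; the songs and their values are invented for illustration), then recommend by nearest cosine similarity. This is a minimal sketch of the general technique, not the Echo Nest’s or Spotify’s actual algorithm.

```python
import math

# Toy "fingerprints": each song reduced to a feature vector.
# Feature order: [acousticness, danceability, instrumentalness, speechiness]
# (feature names from the article; all values here are invented)
catalog = {
    "song_a": [0.90, 0.30, 0.80, 0.05],
    "song_b": [0.10, 0.90, 0.10, 0.10],
    "song_c": [0.85, 0.35, 0.70, 0.04],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(seed, catalog):
    """Rank every other song in the catalog by similarity to the seed track."""
    seed_vec = catalog[seed]
    others = [(name, cosine(seed_vec, vec))
              for name, vec in catalog.items() if name != seed]
    return sorted(others, key=lambda t: t[1], reverse=True)

# "song_c" shares song_a's acoustic, instrumental profile, so it ranks first
print(recommend("song_a", catalog))
```

The point of reducing music to such vectors is exactly what the essay describes: once a song is a point in feature space, “songs you would enjoy” becomes a nearest-neighbor lookup.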
....MUCH MORE