Remarks from Graphic Design: Under Discussion panel

To celebrate the Cooper-Hewitt’s iteration of Graphic Design: Now in Production exhibition on Governors Island in 2012, co-curator Ellen Lupton organized a panel discussion that included Michael Bierut, Alice Twemlow, and me. I was delighted to take part. AIGA/NY has a full video of the discussion here. My comments are below. — RG


Remarks from The New School, 28 June 2012

This talk was given at the Tishman Auditorium, The New School as part of the event “Project Projects Project Projector,” sponsored by AIGA/NY. As a prompt, Adam, Prem, and I were asked to speak about how our passions informed our practice. My comments about “computational poetics” (for lack of a better phrase!) follow below.

I want to start with this familiar image of Google auto-complete. It’s interesting how the web is a kind of machine for generating and organizing text — you put text in, you get more text out. And there are algorithms that structure the text output, so when you make a search, you expect something specific to happen as a result.

Here’s a website we made last year for an exhibition at Harvard that takes its name from Dante’s famous epic poem — it has a different kind of search bar.

You input text, but the field doesn’t behave as you’d expect — rather than searching the site, it searches the entire web. And rather than behaving consistently, its behavior changes, cycling through a series of searches from Google Images…

…to Wikipedia…

…to an Italian translation of your search phrase.
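This cycling behavior can be sketched, very loosely, in a few lines of Python. The endpoint URL templates below are my own stand-ins, not the site's actual code:

```python
from itertools import cycle
from urllib.parse import quote_plus

# Hypothetical rotation of search destinations: each time the field is
# submitted, the same query is sent somewhere new.
TARGETS = cycle([
    "https://www.google.com/search?tbm=isch&q={q}",        # Google Images
    "https://en.wikipedia.org/w/index.php?search={q}",     # Wikipedia
    "https://translate.google.com/?sl=en&tl=it&text={q}",  # English-to-Italian
])

def next_search_url(query: str) -> str:
    """Return the next destination URL for a given search phrase."""
    return next(TARGETS).format(q=quote_plus(query))

print(next_search_url("dark wood"))  # an image search
print(next_search_url("dark wood"))  # a Wikipedia search
print(next_search_url("dark wood"))  # a translation
```

The point of the sketch is simply that the field's behavior is a function of state, not just of input: the same phrase, typed twice, does two different things.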

This isn’t anything new — machines have always changed the behavior of text, and the creation of a new tool often alters the usage of an existing one.


Hot puppy love rock Arkansas

Kottke quotes from Steven Levy’s Wired magazine article on the syntax and evolving language of search queries:

Google’s synonym system understood that a dog was similar to a puppy and that boiling water was hot. But it also concluded that a hot dog was the same as a boiling puppy. The problem was fixed in late 2002 by a breakthrough based on philosopher Ludwig Wittgenstein’s theories about how words are defined by context.

Being reasonably acquainted with Wittgenstein, I found myself wondering which of his ideas came so integrally into play in solving this problem. The Wired article links only to Wittgenstein’s Stanford Encyclopedia of Philosophy article, which includes a survey of all his major concepts and works. Was it his distinction between sense and nonsense? His arguments against a private language? His work on the connection between seeing and saying and his example of the “duckrabbit”? Or perhaps it was something he didn’t discover but simply weighed in on, like ostensive definitions or contextualism?

The strongest candidate, though, might be his concepts of language games and family resemblance. Wittgenstein’s best-known example of a language game is the “builder’s language.” Here’s how he describes it:

The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, in the order in which A needs them. For this purpose they use a language consisting of the words “block”, “pillar”, “slab”, “beam”. A calls them out; — B brings the stone which he has learnt to bring at such-and-such a call.

This is a very small kit of parts: a lexicon of just four elements, combined in a certain way. But by uttering these words in the right context, a building gets built. These words derive their meaning from their ability to activate the builder’s assistant to do what the master builder is asking. And their family resemblance has to do with this limited language, in which the words’ meaning is defined by their context and shared by the two builders. After the workday is through, the builder might look forward to how his children “beam” at him when he arrives home, where the context is entirely different.

The Wired article continues,

As Google crawled and archived billions of documents and Web pages, it analyzed what words were close to each other. “Hot dog” would be found in searches that also contained “bread” and “mustard” and “baseball games” — not poached pooches. That helped the algorithm understand what “hot dog” — and millions of other terms — meant.

A rock is a rock. It’s also a stone, and it could be a boulder. Spell it “rokc” and it’s still a rock. But put “little” in front of it and it’s the capital of Arkansas. Which is not an ark. Unless Noah is around.

Oh, and on the headline above — just my humble attempt to confuse the hell out of Google.

Serial Series, Part 2


Above, from top: The first page of A Tale of Two Cities, from Stanford’s Discovering Dickens; a modern CAPTCHA; international ads placed as part of a Google Books settlement; an ad for Locock’s Female Pills that appeared in a Dickens serial.

In 2002 Stanford University launched a “community reading project” called Discovering Dickens, making Dickens’s novel Great Expectations available in its original part-issue format and asking Stanford alumni and other members of the Stanford community to read along, exactly as Victorians first did, with the serial version that appeared from December 1860 to August 1861. In 2004, as Discovering Dickens readers were enjoying A Tale of Two Cities, Stanford joined the newly formed Google Print Library Project, along with the University of Michigan, Harvard, Oxford, and the New York Public Library. A year later, the program would become known as the Google Books Partner Program, or, more simply, Google Books.

At the launch of Google Books, Google’s intent was to scan and make available 15 million books within ten years. By 2008, just four years into the project, 7 million books had already been scanned. When books are scanned, words are automatically converted by Google’s Optical Character Recognition software into searchable text. Occasionally this conversion fails: Google’s OCR software either can’t recognize some text, or it isn’t confident in its conversion after checking the results against standard grammar rules. The only way to convert these wayward words and phrases is to introduce human eyes into the system. This September, Google did just that with the purchase of reCAPTCHA.
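The routing logic might be pictured, very schematically, like this. The function name, the confidence scores, and the threshold below are all invented for illustration, not Google's actual pipeline:

```python
# Assumed cutoff for this sketch, not Google's real figure.
CONFIDENCE_THRESHOLD = 0.90

def route_ocr_results(ocr_output):
    """Split (word, confidence) pairs into accepted searchable text
    and a queue of wayward words needing human eyes, reCAPTCHA-style."""
    accepted, needs_human = [], []
    for word, confidence in ocr_output:
        if confidence >= CONFIDENCE_THRESHOLD:
            accepted.append(word)
        else:
            needs_human.append(word)
    return accepted, needs_human

# A scanned line from a Victorian printing, with one word the OCR
# can't resolve (a long-s "best" misread as "beft").
scanned_page = [("It", 0.99), ("was", 0.98), ("the", 0.99),
                ("beft", 0.41), ("of", 0.97), ("times", 0.95)]

text, queue = route_ocr_results(scanned_page)
print(text)   # the words the machine trusts
print(queue)  # the words handed off to human readers
```

Everything above the threshold flows into the searchable index; everything below it becomes, in effect, a CAPTCHA waiting to be solved.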



As a devoted Gmail user, I really appreciated Khoi Vinh’s wonderful post suggesting that a simple second look at Gmail’s spacing, leading, and alignment issues would make a world of difference. Update: Mike Bingaman has created a Greasemonkey script that does the trick in Firefox. Download it here.


This has been widely blogged by now, but Google has published their annual “Zeitgeist” report, detailing the most popular searches in a wide variety of categories. Most interesting among these (at least to me) were the “Top of Mind” searches, which take the form of “Who is,” “What is,” and “How to.” This year’s most popular? “Who is God?” “What is love?” and “How to kiss.” As of this writing, Google’s #1 result for “How to kiss” is this WikiHow article, which offers tips ranging from “Be polite and patient” to “Make sure your hair is out of your face.”