Published: 08:11 GMT, November 18, 2018

Computers have learned to make us jump through hoops

By John Naughton*

The other day I had to log in to a service I had not used before.

Since I was a new user, the website decided that it needed to check that I was not a robot, and so set me a Captcha (Completely Automated Public Turing test to tell Computers and Humans Apart).

This is a challenge-response test to enable a computer to determine whether the user is a person rather than a machine.

I was presented with an image of a roadside scene over which was overlaid a grid. My ‘challenge’ was to click on each cell in the grid that contained a traffic sign, or part thereof.
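The mechanics of such a grid challenge are simple to sketch. The following is a hypothetical illustration, not Google's actual implementation: the server already knows (or believes it knows) which cells contain a traffic sign, and simply compares that answer key with the cells the user clicked. The function name and the tolerance parameter are assumptions for the sake of the example.

```python
# Hypothetical sketch of grid-Captcha verification: compare the user's
# clicked cells against the server's known answer for the image.

def verify_captcha(ground_truth: set, clicked: set, tolerance: int = 0) -> bool:
    """Pass if the user's clicks differ from the known answer by at most
    `tolerance` cells (measured by symmetric difference)."""
    return len(ground_truth ^ clicked) <= tolerance

# Cells 2, 5 and 8 of a 3x3 grid contain the traffic sign.
print(verify_captcha({2, 5, 8}, {2, 5, 8}))  # exact match: passes
print(verify_captcha({2, 5, 8}, {2, 5}))     # one cell missed: fails at tolerance 0
```

Real systems are more forgiving, which is why a tolerance parameter appears here: honest humans miss the odd cell, and a strict equality check would lock them out.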

I did so, fuming a bit. Then, I was presented with another image and another grid — also with a request to identify road signs. Like a lamb, I complied, after which the website deigned to accept my input.

And then the penny dropped (I am slow on the uptake). I realized that what I had been doing was adding to a dataset for training the machine-learning software that guides self-driving cars — probably those designed and operated by Waymo, the autonomous vehicle project owned by Alphabet Inc. (which also happens to own Google). So, to gain access to an automated service that will benefit financially from my input, I first have to do some unpaid labor to help improve the performance of Waymo’s vehicles (which, incidentally, will be publicly available for hire in Phoenix, Arizona, by the end of this year).
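The dual use is easy to see in code. In this hypothetical sketch (the function and field names are assumptions, not any company's real pipeline), the very same clicks that pass the Captcha also fall out as labelled training examples: every cell of the grid becomes one data point, marked 1 if the user said it contains a traffic sign and 0 otherwise.

```python
# Hypothetical sketch: a user's Captcha clicks double as labels for a
# machine-learning dataset. Each grid cell yields one labelled example.

def clicks_to_labels(image_id: str, grid_size: int, clicked: set) -> list:
    """Return (image_id, cell_index, label) rows for every cell in a
    grid_size x grid_size grid; label is 1 for clicked cells, else 0."""
    return [(image_id, cell, 1 if cell in clicked else 0)
            for cell in range(grid_size * grid_size)]

# One solved 3x3 challenge produces nine labelled tiles for free.
rows = clicks_to_labels("scene_042", 3, {2, 5, 8})
```

Aggregated across millions of users, and cross-checked against responses from other users shown the same image, such labels are exactly the kind of supervised training data that image classifiers need.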

Neat, eh? But note also the delicious additional irony that the Captcha is described as an ‘automated Turing test’. The Turing test was conceived, you may recall, as a way of enabling humans to determine whether a machine could respond in such a way that one could not tell whether it was a human or a robot. So we have wandered into a topsy-turvy world in which machines make us jump through hoops to prove that we are humans!

The strangest aspect of this epochal shift is how under-discussed it has been. The metaphor of the boiling frog comes to mind, according to which if the creature is put suddenly into boiling water, it will jump out; but if it is put in tepid water which is then brought to a boil slowly, it will not perceive the danger and will be cooked to death. As it happens, zoologists think that frogs are generally smarter than the metaphor supposes. The question, though, is whether humans are equally smart: Have we become so subtly conditioned by digital technology that we do not see what has been happening to us? Have we been conditioned to accept a world governed by ‘smart’ tech, trading away autonomy for convenience and cheap bliss to the point where we become a bit like machines ourselves?

In a startling and thoughtful recent book, two scholars — Brett Frischmann, a law professor, and Evan Selinger, a philosopher — argue that the answer to that question is ‘yes’. The book’s title, ‘Re-Engineering Humanity’, succinctly summarises their case. It is an exploration of how everyday practices — such as clicking to accept an app’s legal terms — are made so simple that we are effectively ‘trained’ not to read the contents. Unless things change, the dominance of digital technology means that, over time, humans will lose their capacity for judgment, discrimination and self-sufficiency.

The carefully designed opacity of online end-user license agreements (EULAs) provides an illuminating case study. These are, Frischmann said in an interview, “optimized to minimize transaction costs, maximize efficiency, minimize deliberation, and engineer complacency”, designed to “nudge people to click a button and behave like simple stimulus-response machines”. However, the ‘efficiency’ thus obtained is not for humans, but for the machine behind the ‘accept’ button.

“Seamless and friction-free are great optimization criteria for machines, not for humans,” said Frischmann.

“After all, machines are tools that serve human ends. Machines don’t set their objectives; humans do — or so we hope. To author our lives and not just perform scripts written by others, we need to sustain our freedom to be free from powerful techno-social engineering scripts.”

He is right. And there is nothing technophobic about that. In a way, ‘Re-Engineering Humanity’ gives a book-length endorsement of the media scholar John Culkin’s oft-repeated insight that “we shape our tools and then our tools shape us”. Technology, as Frischmann said, is supposed to provide tools that serve human ends. But, as the machine-learning Captcha (not to mention the business models of Google and Facebook) demonstrates, a significant proportion of digital tech now sees (and uses) humans as means to ends that are not ours. In the process, it reduces us to the status of cheery rats running on treadmills designed by people who do not have our interests at heart.

So back to the frog metaphor. Are we smart enough to jump out before it is too late? You do not even have to Google it to know the answer.


* John Naughton is a professor of the public understanding of technology at the Open University.

