News ID: 236929
Published: 0926 GMT January 06, 2019

Facebook's burnt-out moderators are proof that it is broken

theguardian.com

By John Naughton*

Despite employing a small army of contractors to monitor posts, it is clear the company is no longer fit for purpose

Way back in the 1950s, a pioneering British cybernetician, W Ross Ashby, proposed a fundamental law of dynamic systems, theguardian.com reported.

In his book ‘An Introduction to Cybernetics’, he formulated his law of requisite variety, which defines “the minimum number of states necessary for a controller to control a system of a given number of states”.

In plain English, it boils down to this: For a system to be viable, it has to be able to absorb or cope with the complexity of its environment. And there are basically only two ways of achieving viability in those terms: Either the system manages to control (or reduce) the variety of its environment, or it has to increase its internal capacity (its ‘variety’) to match what is being thrown at it from the environment.
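Ashby's law can be put formally. The sketch below uses the standard Shannon-entropy statement of the result rather than the book's own notation, with "variety" measured as entropy:

```latex
% Law of requisite variety in entropy form (standard modern statement,
% not Ashby's original notation).
% D = disturbances arriving from the environment,
% R = the regulator's repertoire of responses,
% O = the resulting outcomes, H(.) = entropy ("variety").
H(O) \;\geq\; H(D) - H(R)
```

The residual variety of outcomes can only be driven down by raising H(R): the regulator's internal variety must grow to match what the environment throws at it, which is precisely the bind described below.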

Sounds abstruse, I know, but it has a contemporary resonance. Specifically, it provides a way of understanding some of the current internal turmoil in Facebook as it grapples with the problem of keeping unacceptable, hateful or psychotic content off its platform.

Two weeks ago, the New York Times obtained 1,400 leaked pages from the rulebooks that the company's moderators try to follow as they police the material that flows through its servers.

According to the paper, the leak came from an employee who said he “feared that the company was exercising too much power, with too little oversight — and making too many mistakes”.

An examination of the leaked files, said the NYT, “revealed numerous gaps, biases and outright errors. As Facebook employees grope for the right answers, they have allowed extremist language to flourish in some countries while censoring mainstream speech in others”.

Moderators were instructed, for example, to remove fundraising appeals for volcano victims in Indonesia because a cosponsor of the drive was on Facebook’s internal list of banned groups; a paperwork error allowed a prominent extremist group in Myanmar, accused of fomenting genocide, to stay on the platform for months. And there was lots more in this vein.

Some numbers might help to put this in context. Facebook currently has 2.27 billion monthly active users worldwide. Every 60 seconds, 510,000 comments are posted, 293,000 statuses are updated and 136,000 photos are uploaded to the platform.
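A rough scale check makes the point concrete. The arithmetic below simply extrapolates the article's per-minute figures to a full day; the one-per-cent review rate is a hypothetical assumption for illustration, not a figure from Facebook:

```python
# Extrapolate the article's per-minute figures to a single day.
per_minute = {"comments": 510_000, "status updates": 293_000, "photos": 136_000}

minutes_per_day = 60 * 24
per_day = {kind: n * minutes_per_day for kind, n in per_minute.items()}

total_items_per_day = sum(per_day.values())
print(f"{total_items_per_day:,}")  # 1,352,160,000 items per day

# Even if only 1% of that traffic (a hypothetical review rate) needed a
# human look, spread across the roughly 15,000 moderators mentioned later
# in the piece, each would face hundreds of items a day.
reviewed_fraction = 0.01
per_moderator = total_items_per_day * reviewed_fraction / 15_000
print(round(per_moderator))  # 901
```

At five seconds a judgment, that hypothetical load alone would consume well over an hour of each moderator's shift, before Instagram and WhatsApp are counted.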

Instagram, which allows users to edit and share photos as well as videos and is owned by Facebook, has more than one billion monthly active users. WhatsApp, the encrypted messaging service that is also owned by Facebook, now has 1.5 billion monthly active users, more than half of whom use it several times a day.

These figures give one a feel for the complexity and variety of the environment that Facebook is trying to deal with. In cybernetic terms, its approach to date has been to boost its internal capacity to handle the variety — the torrent of filth, hatred, violence, racism and terrorist content — that comes from its users and is funneled through its servers.

In the beginning, the CEO, Mark Zuckerberg, went for the standard Silicon Valley line that there is a tech solution for every problem — artificial intelligence (AI) would do the trick — although he had to concede that the technology was not sophisticated enough to do the job just yet.

As criticism mounted (and the German Bundestag began to legislate), the company went on a massive drive to recruit human moderators to police its pages. Facebook now employs 15,000 of these wretches, the cost of whom is beginning to eat into profit margins.

Many if not most of these moderators are poorly paid workers employed by external contractors in low-wage countries such as the Philippines. They have to implement — in split seconds — the confusing guidelines that were leaked to the NYT.

One of the most useful aspects of the documents is the way they illustrate the impossibility of the task. The guidelines, said the paper, “do not look like a handbook for regulating global politics. They consist of dozens of unorganized PowerPoint presentations and Excel spreadsheets with bureaucratic titles like ‘Western Balkans Hate Orgs and Figures’ and ‘Credible Violence: Implementation standards’.”

If you want to see what this kind of work involves, then a recent documentary, The Cleaners, filmed with the cooperation of Facebook moderators in Manila, makes sobering viewing.

It shows that they have an impossible job and have to work under fierce time pressure to meet their employer’s performance targets. Five seconds to make a judgment, thousands of times a day. And at the end of the shift, they go home, morally and physically exhausted.

These are the people who process Facebook’s waste so that nothing unclean appears in the news feeds of more affluent users in other parts of the world. To anyone with a moral compass, the fact that humans should have to do this kind of work so that a small elite in Silicon Valley can become insanely rich is an outrage. To a cybernetician, though, it is merely confirmation that Facebook is no longer a viable system.

 

*John Naughton is a professor of the public understanding of technology at the Open University.