Published: 0443 GMT September 03, 2018

Robots can peer pressure kids, but don’t think for a second that we’re immune

By Luke Dormehl*

To slightly modify the title of a well-known TV show: Kids do the darndest things.

Recently, researchers from Germany and the UK carried out a study, published in the journal Science Robotics, that demonstrated the extent to which kids are susceptible to robot peer pressure, digitaltrends.com wrote.

TLDR version: The answer to that old parental question: “If all your friends told you to jump off a cliff, would you?” may well be “Sure. If all my friends were robots.”

The test reenacted a famous 1951 experiment pioneered by the Polish-American psychologist Solomon Asch. The experiment demonstrated how people can be influenced by the pressures of groupthink, even when this flies in the face of information they know to be correct.

In Asch’s experiments, a group of college students were gathered together and shown two cards. The card on the left displayed an image of a single vertical line. The card on the right displayed three lines of varying lengths. The experimenter then asked the participants which line on the right card matched the length of the line shown on the left card.

So far, so straightforward. Where things got more devious, however, was in the makeup of the group. Only one person in the group was a genuine participant; the others were all actors who had been told what to say ahead of time.

The experiment tested whether the real participant would go along with the rest of the group when they unanimously gave the wrong answer. As it turned out, most would. Under peer pressure, the majority of people will deny information they know to be correct in order to conform to the group's opinion.

In the 2018 remix of the experiment, the same principle was used; only this time, instead of a group of college-age peers, the 'real participant' was a child aged seven to nine.

The ‘actors’ were played by three robots, programmed to give the wrong answer. In a sample of 43 volunteers, 74 percent of kids gave the same incorrect answer as the robots.

The results suggest that most kids of this age will treat pressure from robots the same as peer pressure from their flesh-and-blood peers.

Tony Belpaeme, professor in intelligent and autonomous control systems, who helped carry out the study, said, “The special thing about that age range of kids is that they’re still at an age where they’ll suspend disbelief.

“They will play with toys and still believe that their action figures or dolls are real; they’ll still look at a puppet show and really believe what’s happening; they may still believe in [Santa Claus]. It’s the same thing when they look at a robot: They don’t see electronics and plastic, but rather a social character.”

Interestingly, the experiment contrasted this with the response from adults. Unlike the kids, adults weren’t swayed by the robots’ errors.

“When an adult saw the robot giving the wrong answer, they gave it a puzzled look and then gave the correct answer,” Belpaeme continued.

So nothing to worry about, then? So long as we stop children from getting their hands on robots programmed to give bad responses, everything should be fine, right? Not so fast.

As Belpaeme acknowledged, this task was designed to be so simple that there was no uncertainty as to what the answer might be.

The real world is different. The kinds of jobs we readily hand over to machines are frequently tasks that we, as humans, are not always able to perform perfectly.

It could be that the task is incredibly simple, but that the machine can perform it significantly faster than we can. Or it could be a more complex task, in which the computer has access to far greater amounts of data than we do. Depending on the potential impact of the job at hand, it is no surprise that many of us would be unhappy about correcting a machine.

Would a nurse in a hospital be happy to overrule an FDA-approved algorithm that helps prioritize patient care by monitoring vital signs and alerting medical staff? Would a driver be comfortable taking the wheel from a driverless car when dealing with a particularly complex road scenario? Would a pilot override the autopilot because they think it is making the wrong decision? In all of these cases, we would like to think the answer is 'yes'. For all sorts of reasons, though, that may not be reality.

Nicholas Carr wrote about this in his 2014 book 'The Glass Cage: Where Automation Is Taking Us.' The way he describes it underlines the ambiguity of real-life cases of automation, where the problems are far more complex than the length of a line on a card, the machines are much smarter, and the stakes are potentially far higher.

*Luke Dormehl is a freelance journalist, author and public speaker, based in the UK.

   