UC Researchers Ask: Is Robot Abuse Immoral?

In a new study, University of Canterbury (UC) researchers have found that participants considered abusive behaviour towards a robot just as immoral as abusive behaviour towards a human.

The paper, titled ‘The morality of abusing a robot’, by Associate Professor Christoph Bartneck and PhD student Merel Keijsers of the Human Interface Technology Lab New Zealand | Hangarau Tangata, Tangata Hangarau (HIT Lab NZ) in UC’s College of Engineering, was recently published in Paladyn, Journal of Behavioral Robotics.

“It’s not uncommon for humans to exhibit abusive behaviour towards robots in real life,” Associate Professor Bartneck says. “Our research looks at how abusive behaviour towards a human is perceived in comparison with identical behaviour towards a robot.”

Participants were shown 16 video clips depicting different levels of violence and abuse towards a human and a Boston Dynamics Atlas robot. The robot in the videos was computer-generated imagery (CGI), with its motions performed by a human actor. As a result, each clip existed in two versions with identical abusive behaviours – one in which the victim was a human and one in which it was a robot.


“We found that participants saw bullying of humans and robots as equally unacceptable, which is interesting because a robot doesn’t have feelings and can’t experience pain – it doesn’t even understand the concept of abuse.

“It doesn’t make sense from a logical point of view,” says Associate Professor Bartneck. “It’s very interesting in the sense that if we treat robots as if they are humans, we consider it immoral to mistreat them.”

However, the findings were different when participants were shown footage of the victim fighting back in response to being bullied. A human fighting back was seen as less immoral than a robot fighting back in the same situation.

“As soon as the victim fought back in response to the abuse, there was a big difference. A human fighting back in that situation was considered more acceptable, but a robot fighting back in the same situation was not considered acceptable behaviour,” Keijsers says.

“We did further analysis to explain this difference and found that the participants interpreted the robot’s response as a lot more aggressive or abusive than the human’s response – they felt there was a higher intent to harm.”

One explanation for this, the researchers suggest, is that when a robot fights back or resists, there is a change in power.

“Robots are very much meant to work and serve, so they may be viewed as subordinate, but when a robot is not obedient or gets aggressive it’s viewed as inappropriate,” Keijsers says.

She points out that another explanation could be that robots are portrayed in the media as a potential threat – especially in blockbuster movies where robots ‘rise up’ against their masters or enslave humanity.

“Right now we don’t have a lot of robots in society but that’s set to change. It’s only a matter of time. This research lays the foundations for a society in which we can have robots around – we have to figure out how we will interact with them,” says Keijsers.

The HIT Lab NZ is a multi-disciplinary research laboratory at UC that focuses on how people interact with technology. There are currently several openings available for postgraduate studies in the area of human-robot interaction that focus on the ethics of human-robot relationships and AI. Students interested in postgraduate studies at the HIT Lab NZ should contact info@hitlabnz.org.

Christoph Bartneck is an associate professor and director of postgraduate studies at the HIT Lab NZ at the University of Canterbury. He has a background in Industrial Design and Human-Computer Interaction, and his projects and studies have been published in leading journals, newspapers, and conferences. His interests lie in the fields of human-robot interaction, science and technology studies, and visual design. More specifically, he focuses on the effect of anthropomorphism on human-robot interaction. He has worked for several international organisations including the Technology Centre of Hanover (Germany), LEGO (Denmark), Eagle River Interactive (USA), Philips Research (Netherlands), ATR (Japan), and The Eindhoven University of Technology (Netherlands).

Merel Keijsers is completing her PhD at the HIT Lab NZ, University of Canterbury. Originally from the Netherlands, she holds research Master’s degrees in Statistics and in Social and Health Psychology from Utrecht University. Her PhD examines the conscious and subconscious psychological processes that drive people to abuse and bully robots. With a background in social psychology, she is mainly interested in the similarities and differences in how people deal with robots versus other humans. She has accepted a position as an assistant professor at John Cabot University in Rome, Italy, which she will take up in 2021.


The researchers would like to acknowledge Corridor Digital, who made the video stimuli available for this research.

© Scoop Media
