How to investigate when a robot causes an accident

Building ‘ethical black boxes’ into robots can help us investigate untoward incidents

Accidents happen

As with any product, things can and do go wrong with robots. Sometimes this is an internal issue, such as the robot not recognising a voice command. Sometimes it's external – the robot's sensor was damaged. And sometimes it can be both, such as the robot not being designed to work on carpets and "tripping". Robot accident investigations must look at all potential causes.

While it may be inconvenient if the robot is damaged when something goes wrong, we are far more concerned when the robot causes harm to, or fails to mitigate harm to, a person. For example, if a bionic arm fails to grasp a hot beverage, knocking it onto the owner; or if a care robot fails to register a distress call when the frail user has fallen.

Why is robot accident investigation different from investigating accidents caused by humans? Notably, robots don't have motives. We want to know why a robot made the decision it did, based on the particular set of inputs it had.

In the example of the bionic arm, was it a miscommunication between the user and the hand? Did the robot confuse multiple signals? Lock unexpectedly? In the example of the person falling over, could the robot not “hear” the call for help over a loud fan? Or did it have trouble interpreting the user’s speech?

The black box

Robot accident investigation has a key benefit over human accident investigation: there's potential for a built-in witness. Commercial aeroplanes have a similar witness: the black box, built to withstand plane crashes and provide information as to why the crash happened. This information is incredibly valuable not only in understanding incidents, but in preventing them from happening again.

As part of RoboTIPS, a project which focuses on responsible innovation for social robots (robots that interact with people), we have created what we call the ethical black box: an internal record of the robot's inputs and corresponding actions. The ethical black box is designed for each type of robot it inhabits and is built to record all information that the robot acts on. This can be voice, visual, or even brainwave activity.
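To make the idea concrete, a minimal sketch of such a recorder might pair each set of inputs with the action the robot took, in a rolling buffer like a flight recorder's. This is an illustrative mock-up, not the RoboTIPS design: the class name, field names, and API below are all assumptions.

```python
import json
import time
from collections import deque


class EthicalBlackBox:
    """Hypothetical sketch of an 'ethical black box': a rolling log that
    pairs each sensor input a robot acts on with the action it chose.
    All names and fields here are illustrative assumptions."""

    def __init__(self, robot_id, capacity=10000):
        self.robot_id = robot_id
        # Fixed-capacity ring buffer: oldest records drop off first,
        # like a flight recorder's rolling window.
        self.records = deque(maxlen=capacity)

    def log(self, inputs, action):
        """Record the inputs the robot observed and the action it took."""
        self.records.append({
            "robot_id": self.robot_id,
            "timestamp": time.time(),
            "inputs": inputs,   # e.g. audio level, speech transcript
            "action": action,   # e.g. "dispatch_alert"
        })

    def export(self):
        """Serialise the whole log for accident investigators."""
        return json.dumps(list(self.records), indent=2)


# Usage: a care robot logging an ambiguous distress call.
ebb = EthicalBlackBox("care-robot-01")
ebb.log({"audio_db": 85, "speech_confidence": 0.4, "transcript": "help"},
        "request_clarification")
```

The rolling-buffer choice mirrors aviation black boxes: storage is bounded, so the recorder keeps the most recent window of activity rather than an unbounded history.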

We are testing the ethical black box on a variety of robots in both laboratory and simulated accident conditions. The aim is that the ethical black box will become standard in robots of all makes and applications.

While data recorded by the ethical black box still needs to be interpreted in the case of an accident, having this data in the first instance is crucial in allowing us to investigate.

The investigation process offers the chance to ensure that the same errors don’t happen twice. The ethical black box is a way not only to build better robots, but to innovate responsibly in an exciting and dynamic field.

This article by Keri Grieman, Research Associate, Department of Computer Science, University of Oxford, is republished from The Conversation under a Creative Commons license. Read the original article.

