A pair of small humanoid robots, called Shafer and Dempster, has been programmed to disobey human instructions. The robots will refuse to act if they judge that complying with an order would put their own safety at risk.
Robotics engineers are developing robots that can disobey instructions from humans if they believe complying may cause them to become damaged. If asked to walk forward on a table top (pictured), the robot replies that it cannot do this as it is 'unsafe'. However, when told a human will catch it, the robot then obeys. (Picture from: http://dailym.ai/1T8ktMT)
As reported by the DailyMail on Thursday, November 27, 2015, engineer Gordon Briggs and Dr. Matthias Scheutz of Tufts University in Massachusetts are trying to build robots that can interact with humans in a more humane way. Although this gives the robots firm principles, the result comes out more like Sonny, the forgiving, rebellious robot in the movie 'I, Robot', than like the killing machines of 'Terminator'.
In a paper submitted to the Association for the Advancement of Artificial Intelligence, the pair explain that people refuse directives for a variety of reasons, ranging from inability to doubt. Given the practical limitations of current autonomous systems, most directive-rejection mechanisms only need to draw on reasons such as lack of knowledge or lack of ability.
"However, while the ability of autonomous agents continue to be developed, there is a growing community and interested in the ethics of the engine, or a field that allows an autonomous agent to act alone on ethical grounds," said the researchers in a paper.
The humanoid robots can sit down (pictured) and stand up in response to verbal commands from a human, but if asked to walk forward off a table or through an obstacle they politely refuse. (Picture from: http://dailym.ai/1T8ktMT)
The robots they created respond to a number of verbal cues, such as commands to stand and sit, given by a human operator. However, when asked to walk into an obstacle or off the end of a table, for example, the robots politely refuse to do so.
"Sorry, I can not do this because there is no support." said robot when asked to walk forward to the end of the table. After the second command to walk forward, the robot was still rejected politely. "But, it's not safe," said the robot.
To achieve this, the researchers introduced reasoning mechanisms into the robots' software, enabling them to assess their environment and check whether a command would compromise their safety.
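The check-and-override exchange described above, where a refused command is retried after the human offers an assurance like "I will catch you", might look something like the following sketch. It assumes a simple dictionary world model; the names and logic are illustrative assumptions, not the actual Tufts code:

```python
# Minimal sketch of the appeasement flow: a refused command can succeed
# after the human adds an assurance, which updates the robot's world model.

def handle_command(action, world):
    """Return the robot's spoken response to a commanded action."""
    if action == "walk_forward" and not world.get("support_ahead", False):
        if world.get("human_will_catch", False):
            # The assurance overrides the missing-support objection.
            return "OK, walking forward."
        return "Sorry, I cannot do this because there is no support."
    return "OK."

world = {"support_ahead": False}
print(handle_command("walk_forward", world))   # refuses: no support ahead

world["human_will_catch"] = True               # operator: "I will catch you"
print(handle_command("walk_forward", world))   # now complies
```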