There was a time when many thought Google was creating Skynet. Now it seems they are working closely with universities to implement safety mechanisms for if AI software needs switched off.
Like with any machine there is the emergency shut down button. But how would you shut down a machine that is programmed to overcome problems. Like fooling a human once, if the AI is intelligent enough it will learn to circumvent being fooled again.
This is the issue that these scientists face and a problem that will only become more complex as the field of AI matures.
Their research revolves around a method to ensure that AIs, which learn via reinforcement, can be repeatedly and safely interrupted by human overseers without learning how to avoid or manipulate these interventions. They say future AIs are unlikely to "behave optimally all the time". "Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions," they wrote. But, sometimes, these "agents" learn to over-ride this, they say, giving an example of a 2013 AI taught to play Tetris that learnt to pause a game forever to avoid losing.