There was a time when many thought Google was creating Skynet. Now it seems they are working closely with universities to implement safety mechanisms for if AI software needs switched off.

Like with any machine there is the emergency shut down button. But how would you shut down a machine that is programmed to overcome problems. Like fooling a human once, if the AI is intelligent enough it will learn to circumvent being fooled again.

This is the issue that these scientists face and a problem that will only become more complex as the field of AI matures.