Famed physicist Stephen Hawking makes a case for the world to come together, in the future, and form one collective government. His reasoning makes some level of sense, though not everyone will agree with him in that regard. Hawking states that AI, or artificial intelligence, is progressing at a rapid rate, a rate that most people don't realize. While that's great for humanity in many respects, it could also pose a threat.
Hawking further explains that he doesn't think super-intelligent robots would necessarily eliminate us on purpose; rather, humanity might simply be in the way of a predetermined goal, the victim of poor programming by the humans who made the robot.
H/T The Blaze
Prominent physicist says world government needed to save humanity from future robot holocaust
Famed cosmologist Stephen Hawking, known for his brilliant advances in the field of theoretical physics, says that mankind may perish at the hands of artificial intelligence unless a world government is formed to protect us.
The world-renowned physicist, who has spoken out about the dangers of artificial intelligence in the past, believes we need to establish a way of identifying threats quickly, before they have a chance to escalate.
“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” he told The Times.
“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”
Hawking opined that “some form of world government” would be needed to head off such a possibility. But, he admitted that organizing such a government has its own drawbacks, including the chance that it might turn into a dictatorship.
"But that might become a tyranny. All this may sound a bit doom-laden but I am an optimist. I think the human race will rise to meet these challenges."
Hawking has said previously that super-smart robots might destroy the human race not for nefarious reasons, but because we program them poorly through our own incompetence.
“The real risk with AI isn’t malice but competence,” he said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
He drew an analogy to how we treat ants, with the super-smart robots standing in for us, and the ants standing in for humanity.
"You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants."
In other words, future robots may not intend to destroy humanity, but may do it by accident, or because of poor design by their human creators.
Hawking is credited with advancing human knowledge in cosmology, but libertarians might reject his political advice, given their resistance to instituting a one-world government.