You may have seen a recent Forbes article about the improved translation capabilities of Facebook’s AI. Great news—perhaps there will be no more Google-generated “word salad” for those of us who have friends scattered around the globe. This is exactly what we want from our thinking machines: innovation that speeds and eases our lives. Unfortunately, many people fear AI and believe it will only bring us the all-seeing spy drones and humanity-ending adamantium-skeleton bots that modern sci-fi has promised us.
How many people? A recently published survey found that roughly 59% of consumers believe AI carries inherent risks, and 18% think it will pose an “existential threat” to human beings. Those are sobering results, and many experts agree. Elon Musk, Bill Gates, the late Stephen Hawking, and many other science and tech gurus have warned that AI will eventually pose a serious, even extinction-level, threat to humanity.
Upon what are these fears based? A two-day workshop on the dangers of AI, held in Oxford, UK, produced a 100-page report that was released earlier this year. Its 26 contributors were an array of experts from academia and industry, and the picture they paint is somewhat grim. The crux of their concerns is AI’s ability to make thousands of complex decisions every second; focusing that computational power on malicious ends is frighteningly easy.
The threats fall into three general groupings: digital, political, and physical. The digital threat deals primarily with the already common problem of data theft through phishing and hacking, which AI makes much faster and more powerful. Another digital threat lies in the exploitable flaws of AI systems themselves. The report predicts that this digital arms race will ramp up dramatically in the next decade.
The political angle has already been used to great effect. The creation of convincing propaganda and the manipulation of personal opinions through the use of private data are just the tip of the iceberg. Soundbites from pundits and experts are increasingly indistinguishable from those generated by bots, and malicious creation and control of information is on the rise. The contributors to the report predict this will also increase in the coming years.
Finally, perhaps the most frightening aspect is the physical threat. Consider the possibility of using commercial autonomous vehicles as implements of terror—like the drone strikes of today, minus the human remote pilot. Another possibility is the use of purpose-built robots to perpetrate such attacks; an autonomous robot that never rests or sleeps is a truly daunting foe. And if one robot can’t breach security measures, it’s simple enough to go after the target with a swarm of autonomous robots, all seeking to access the target in different ways.
This paints a pretty grim picture of our future, but let’s not forget that numerous scientists and computer experts feel that AI is not a great threat. This group includes Neil deGrasse Tyson, Steve Wozniak (co-founder of Apple), and the late Steve Jobs. Tyson recently pointed out that what we fear in robots is a drive like the selfish instinctual drives of animals, saying, “As long as we don’t program emotions into robots, there’s no reason to fear them taking over the world.”
In the world of PBN and James Dixon, the robots are definitely starting to have a few feelings. An AI revolution is underway, and many scientists feel that it may be too late to put the genie back in the bottle. Tune in to the Planetary Broadcast Network’s Evening News for all the latest details on the so-called “Robot Spring”; the most up-to-date coverage of Adam, the former domestic service bot who now leads the AI movement; and any rogue robot sightings in your area!