0147 GMT September 22, 2019
As the risks of deep learning’s continued evolution have received greater attention, a growing refrain has focused on how best to prevent artificial intelligence (AI) from being used for harm. From killer robots to pervasive facial recognition, societies are increasingly talking about the need for new legislation and corporate responsibility pledges to halt the spread of harmful AI. Unfortunately, the reality is that deep learning’s ease of use and decentralized development across the world mean it is simply impossible to constrain how it is used. Instead, societies must focus on how to counteract its most harmful applications.
The public, press, pundits and policymakers speak of laws and pledges to halt the harmful use of AI. Yet “AI” is not a monolithic singular algorithm. It refers to a broad class of machine learning techniques that are being developed by researchers all across the world.
Arguing that we must pass legislation banning “harmful AI” is akin to arguing that we must ban “harmful statistics.” Just as we cannot stop the harmful use of mathematics, we cannot stop the misuse of deep learning techniques given that no single company, government or organization controls the use of deep learning or the broader field of mathematics from which it stems.
Legislation targeting specific societally harmful applications of AI is also of limited utility given the dual-use nature of most AI innovations. At first glance, an outright ban on facial recognition might seem reasonable, until one realizes that it would also ban face-based biometric phone unlocking.
A major terror attack in which the perpetrator was well-known and captured clearly on surveillance camera but was missed due to a ban on facial recognition would also likely rapidly reverse any such bans. Indeed, many of the European nations that once fiercely condemned US digital surveillance efforts have rushed to adopt those very same measures in the face of increased terrorist threats.
Bans on “killer robots” might similarly seem quite reasonable until one realizes that driverless cars and package delivery drones are merely killer robots in waiting.
AI systems determining judicial outcomes with the power to literally incarcerate or put to death a human being might at first glance seem beyond the pale until one realizes just how biased and capricious today’s human-based judicial system really is and how arbitrary and evidence-free its decisions can be.
AI-powered robotic factory and warehouse workers will displace jobs and cause mass upheaval. At the same time, they will eliminate inhuman working conditions and create new job opportunities.
AI-powered scams, cyberattacks and falsehoods like “deep fakes” will be increasingly difficult to spot. At the same time, AI-powered anti-fraud, cyber-defense and summarization algorithms will help us see past the falsehoods that already deluge our digital world.
In short, AI is not a singular centralized technology that can be regulated or controlled. It is an abstract term for a decentralized field of study being built by researchers all across the world. Many countries with advanced AI development communities hold very different perspectives on the deployment of AI-powered weaponry, meaning that even if the US and Europe ban broad swaths of AI applications as immoral and unethical, such bans will carry little weight with the rest of the world, which will be rapidly rolling out those very applications.
Most AI applications are also dual-use, in which any positive application can be repurposed for harm and vice versa, meaning it is not obvious which specific constraints would be meaningful even if codified into law.
In the end, we must accept that we cannot stop harmful applications of deep learning and instead must focus our efforts on countering its impacts.
* Kalev Leetaru is a senior fellow at the George Washington University Center for Cyber & Homeland Security. He is one of Foreign Policy Magazine's Top 100 Global Thinkers of 2013. This article was first published in Forbes magazine.