OPINION

A call for ethical use of artificial intelligence

AI machines learn from our data and are trained by us. So looking at them is like looking in the mirror.

"Jeopardy!" champions Ken Jennings (left) and Brad Rutter look on as an IBM computer called "Watson" beats them to the buzzer to answer a question during a practice round of the show in January 2011.Seth Wenig/AP/file

As someone educated in science and engineering, I’ve always considered the pursuit of new technologies a higher calling. As someone raised Roman Catholic, I also tend to pay attention when another higher call comes in — like from the Vatican.

Last year, the Vatican reached out to our company, IBM. Pope Francis was worried about technology’s effects on society and families around the world and its potential to widen the gap between the rich and poor. Of particular concern: artificial intelligence, the technology most adept at mimicking the best and worst of human qualities. How could the world harness AI for the greater good while reducing its potential to be a force for evil?

The leader of the world’s 1.3 billion Roman Catholics had directed his Pontifical Academy for Life to study the problem. On Feb. 28, in Rome, that effort bore fruit: IBM, along with the United Nations and the Vatican, signed a papal call for ethics in the use of AI.

IBM has never before signed on to a papal call. But these are not ordinary times in technology. Despite the true good that can come of the responsible use of AI — like improving medical understanding and treatments or making all sorts of human activity less toilsome, more efficient, and kinder to the natural environment — the world has also seen what happens when bad actors use the technology for nefarious purposes. That’s when political trolls generate fake news that’s hard to distinguish from the real thing. Or companies monetize personal data for their own selfish purposes. Or authoritarian governments use facial recognition and other forms of AI for Big Brother surveillance.

And so private and public institutions urgently need to put guardrails around technology like AI. That not only includes ethical guidelines, like the ones the Vatican is calling for, but also legally binding regulatory guidelines — like the precision regulation for AI that IBM and others have recently proposed. And it includes efforts like the recent European Union white paper on regulating AI, which IBM also supports.

Machines aren’t bad. And there is nothing inherently evil about AI. The machines that humans create simply reflect who we are as people and a society. AI machines learn from our data and are trained by us. So looking at them is like looking in the mirror. The question then is how such machines are used. That’s a matter of human choice — of how people can and should regulate those machines.

The Vatican document calls for international cooperation in designing and planning AI systems that the world can trust — for reaching a consensus among political decision-makers, researchers, academics, and nongovernmental organizations about the ethical principles that should be built into these technologies.

But we at IBM don’t think this call to action should stop with the Vatican. Leaders of all the world’s great religions, as well as right-minded companies, governments, and organizations everywhere, should join this discussion and effort.

IBM has decades of experience with AI. In 1997, our technology was advanced enough that our Deep Blue computer beat the chess grandmaster Garry Kasparov. Fourteen years later, our Watson AI system was able to amass, analyze, and learn enough from a vast trove of human knowledge that it won the TV quiz show “Jeopardy!,” answering complex questions in human, natural language.

Now, approximately a decade later, AI has achieved awesome — and potentially awful — capabilities. That’s why IBM believes that any time a company or organization is using AI, the user should be notified. It’s also why, despite AI’s humanlike capabilities, a real human being should make the final determination. That should be as true when a doctor determines a patient’s course of treatment as when a military leader decides whether an AI-enabled weapon is used on the battlefield.

Against this technological and human backdrop, the skeptic might ask: What’s the value of signing on to a Vatican-led commitment to the ethical use of AI? Ideally, the papal call has the power of any public profession of faith — a commitment to pursue the greater good, even as everyone recognizes that human beings are not infallible.

John E. Kelly III is executive vice president at IBM.
