RPL in the News: "It may be neither higher nor intelligence: Religious scholars examine value, limits of AI"

March 21, 2024
The Harvard Gazette

Harvard Staff Writer Liz Mineo reached out to various religion scholars to discuss the impact of AI on religion. RPL Curriculum/Advisory Committee members Matthew Ichihashi Potts and Charles M. Stang, as well as Jenn Louie, MRPL '23, were among those asked to weigh in.
For religious leaders, AI’s ascent raises fundamental questions about the essence of humanity. The Rev. Matthew Ichihashi Potts, M.Div. ’08, Ph.D. ’13, Pusey Minister in the Memorial Church and Plummer Professor of Christian Morals, said the question of what it means to be human has implications for ethics, moral values, and religion.

Unlike machines, human beings must reckon with suffering, illness, and death throughout their lives, Potts said. And even if AI becomes more powerful and more human-like, it won’t change that fact or even get close to providing an answer to that human quandary in the way that religion has done since the beginning of time, he added. 

“Whatever AI can tell us about how to reckon with the material reality that we’re subject to illness and death, us, humans, are still going to reckon with that,” said Potts. “That is when religion becomes significant.” 

“To me, religion is primarily about coming into some meaningful relationship to one’s own vulnerability, to the fact that we’re human, that we suffer and that we’re finite,” said Potts. “That is what religion is for — to help us come into some meaningful relationship and also, hopefully some peace, with the facts of our finitude and vulnerability.”

Faith leaders and religion scholars advise AI users to keep a healthy dose of skepticism about the technology’s powers. Large language models allow AI to generate intelligent responses, but religion scholar Charles M. Stang wonders whether the technology is capable of actual intelligence as we understand it.

“I still remain skeptical that AI is quote-unquote intelligent,” said Stang, Professor of Early Christian Thought at Harvard Divinity School. “To call our computers intelligent because they mimic a kind of intelligence we think we possess, possess exclusively, and regard as the pinnacle of intelligence strikes me as a form of technological narcissism.”

“In my experience, the tech world is prone to megalomaniac visions of the future and their role in ushering in the new age,” Stang said. “I don’t take that entirely seriously, but who knows? Maybe they will be the last ones laughing, or weeping.”

For Jenn Louie, MRPL ’23, founder of the Moral Innovation Lab and a researcher at the Berkman Klein Center for Internet & Society, AI also poses ethical problems because it could disseminate information with a pro-Western bias and devoid of nuance.

“AI requires building off a compendium of written knowledge, at least for most of the generative AI stuff, so there is already a bias towards religious faiths that have a large compendium of written knowledge,” said Louie, who worked nearly two decades in the tech industry.

“If a compendium of knowledge is not written in English, which is where we’re advancing most of AI, will it always then bias us toward having morals and values that look and are very Judeo-Christian in nature? What are we doing about the practices that aren’t written in English?”

Read the full article on news.harvard.edu