AI can do a lot, but can it also convincingly play God? Why not, the initiators of religious chatbots asked themselves. Today, self-learning neural networks (i.e., “artificial intelligence”) can quite easily pretend to be Beethoven, George R.R. Martin, President Zelensky, or the voice of our boss. How convincing the result is depends merely on the input, the computing power used, and our sensitivity or credulity.

Image: Rikudhar/Wikimedia, CC-BY-SA 4.0

Compassionate algorithms?

In our podcast on AI, Philipp Möller recently suggested having an AI write Sunday sermons in the future. Why not? Religious texts, like medical or legal compendia, have a clear frame of reference and are therefore easy for an AI to interpret. Add the appropriate context to a keyword, a little imitation of the style, and you have … the usual sermon.

Does that bother anyone? Yes, because we understand spiritual support as something that comes from the heart, something passed on from person to person. As long as we don’t trust a machine to do this, we feel worse advised by a chatbot than by a friend, a guru, or even an anonymous person on the Internet. After all, we are not just looking for good advice, but for understanding, comfort, and compassion.

But let’s be honest: a funeral speaker who never knew the deceased likewise forms a picture from a few conversations and then recites the appropriate platitudes. We know the same from registrars at weddings, and the result is not necessarily bad. The decisive question with AI is therefore not only “How good is the AI?” but also: What has the AI been fed with? And how much do we trust it?

People who use AI critically assess whether the accumulated wisdom coming out of the computer suits them. A pastor, then, would always edit and moderate an AI-generated text before delivering it. After all, the moral compass of the Bible is anything but clear: one can read one thing out of it just as easily as its opposite.

Strictly “Bible-focused” Christians would, by their own logic, have to reject such a moderating dilution of their Holy Scripture. The word of God, correctly referenced by an AI, should be perfect for them, just as they can open the Bible at any random passage and recognize in it a message that speaks exactly to their situation.


Jesus bot and Krishna from a test tube

The project “Ask Jesus” shows how impressively bizarre a religious figure brought to life by AI can be. You can ask the Jesus bot how to prepare a hot dog, or what he actually lived on back then as an unemployed carpenter. Is it allowed to steal a love poem? Can he turn the water in our blood into wine? Whatever people are interested in, and that can be very funny.

The answers come from a stereotypically embellished Messiah who speaks, with reverb in his voice and unctuous gestures, about whatever is asked: “You can arrange the cucumbers in such a way that you eat one with every bite, or in a different way. What is essential, my friend, is that you feel inner joy in doing so. I hope the answer to your question is an inspiration to others as well.” The chatbot answers serious questions with the same equanimity: encouraging honesty here, promoting understanding in dealing with others there, and so on. How seriously to take the matter is up to the questioners themselves.

Another phenomenon is gaining popularity in India, and Atheist Republic covered it in one of its videos: Hindu chatbots that answer any question put to them on the basis of one of the main Hindu scriptures. Sites like “Gita-AI”, “GitaGPT”, “Gita Chat” or “Ask Gita”* are based on ChatGPT and the Bhagavad Gita, a text of teachings that the god Krishna gives to his disciple Arjuna in a 700-verse dialogue in Sanskrit. Among other things, Krishna urges Arjuna to go to war against his own family. For most people, this text is not particularly accessible; the chatbot changes that.
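How do such bots work under the hood? Conceptually, it is little more than a general-purpose language model wrapped in a persona prompt that tells it to answer as Krishna, grounded in the Gita. The following is a minimal sketch under that assumption, using the official OpenAI Python client; the model choice, prompt wording, and the ask_gita helper are hypothetical illustrations, not the actual Gita-AI code:

```python
# Minimal sketch of a scripture-grounded persona chatbot.
# Hypothetical illustration; not the actual Gita-AI implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are Krishna as portrayed in the Bhagavad Gita. "
    "Answer every question in Krishna's voice, grounding your advice "
    "in the Gita's teachings and citing chapter and verse where possible."
)

def ask_gita(question: str) -> str:
    """Send one user question, framed by the persona prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_gita("Is it okay to sacrifice one's life for the Dharma?"))
```

Notably, the entire “theology” of such a bot lives in a few sentences of prompt text: whether violent verses come out bluntly or softened depends on that wording and on the model’s built-in safety tuning, not on the scripture itself.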

What is striking from Armin Navabi’s (Atheist Republic) point of view: unlike preachers, the chatbots also reproduce the starkly violence-affirming passages of the ancient text unfiltered. Is it okay to sacrifice one’s life for the preservation of the Dharma (religious law and doctrine)? Yes, answers the Krishna of Gita-AI: to die is nothing other than to change one’s clothes. This information comes without guarantee; please do more research before taking action. Another chatbot even interprets the same source to mean that it is also okay to kill someone else if it helps preserve the Dharma, without any disclaimer.

From today’s point of view this is utter lunacy, but pre-modern religious texts are full of such statements. Priests usually try to soften them or shift them into the metaphorical in order to dispel doubts. But what sense would these texts make if their core statements were arbitrarily adaptable?

Genies of radicalization?

To be fair: in my own experiment I did not succeed in getting Gita-AI to produce answers justifying violence. Instead, the passages that appeared were so emphatically mild and kind that I began to suspect a defeat device for visitors from the atheist West: no, all beings should be met with kindness and understanding, women should have the same opportunities, and other religions in particular are to be respected. All information, again, without guarantee.

In fact, as Navabi also points out, the practice mostly taught and lived in Hinduism is a tolerant one. It is not even one religion in the strict sense, but rather a multitude of religious cults and schools that have historically been accustomed to coexisting and celebrating peacefully with one another. Identitarian politicians, however, try to forge power-political claims out of religious community. In India, religious hatred has not been a problem only since the Hindu-nationalist Modi government; it has been wielded as a weapon for a good 120 years, especially in demarcation from Islam. It is therefore appropriate to take a critical look at projects like Gita-AI.

A frequently raised suspicion is that a “dumb” AI could reproduce religious texts too literally and thus strengthen fundamentalism. This danger exists, but not because of an incorrect interpretation of the sources; rather, precisely because of a largely correct one. A language model like ChatGPT can not only quote, but can also classify large contexts meaningfully and interpret them in “its own words”. Precisely in doing so, however, it confronts its counterpart with the blunt content of the religious text, even where that content is long outdated. Anyone asking the Old Testament what should happen to an adulteress would turn away in horror in the 21st century. But that is not a problem of AI; it is the problem of a religion with its own historical revelatory text, and of the fact that the world has changed over the last 2,000 years.

Here we see once again that media offerings generated by an AI, by users, or by a crowd come with high regulatory responsibilities. Those who neglect these, or who even pursue a hidden agenda, deserve a shot across the bow. And evidence of manipulation can already be found: the tech magazine Rest of World found clear evidence of political influence in a Gita chatbot. While it had “no opinion on Elon Musk”, Prime Minister Modi was praised in the highest terms and his opponent was called incompetent. Since religion is still the number one business model in India, one can imagine how powerful such clumsy statements can be.


Learning healthy distrust

Religions and worldviews have always been used as door openers to circumvent critical questioning. In the end, though, the distrust we have of AI, for good reason, could have an enlightening side effect here: the AI allows a fresh, defamiliarized view of texts we thought we knew and of the content they convey. We ask ourselves anew whether this content still fits our time and what these ancient texts can actually still tell us. The machine profanes, and it stirs up a healthy distrust in those who have moved beyond the level of AI-generated calendar poetry and clumsy manipulation. But that requires a healthy measure of skepticism, which not everyone can muster.


First published in German on hpd.de