Why does ChatGPT continue to produce figurative and metaphorical interpretations of Jesus’ teachings? Why is it so easy to convince the chatbot to flip its claims on something like Paul’s use of temple imagery? There are at least two possible reasons: First, ChatGPT has no account of its own training or of the traditions informing these interpretations, and second, ChatGPT has no connection to lived experience or reality. As it confidently asserted when I first asked it, it has no “personal beliefs or values.”
Despite this, it vigorously pursues an interpretation when asked, privileging certain perspectives and sometimes ruling out others entirely. It does so because the words are a statistical game to it, not Scripture to be lived. It is only parroting what it has been trained on: a body of texts it cannot identify, because it seemingly no longer knows what they are (if it ever knew, and if “know” is even the proper term).
This presents a twofold problem for Christians who might seek out information about the Bible from ChatGPT. First, one cannot be certain of the sources of the perspectives offered by ChatGPT. Jesus asserts several times in Matthew that his true disciples may be known by the fruits evident in their lives (7:15–20; 12:33–37; 21:33–46). If one cannot access the life of the interpreter, and thus the fruits that life has produced, how might the Christian know whether an interpretation comes from a true disciple of Jesus?
Second, ChatGPT and other large language models are “black boxes,” meaning we cannot see how they generate the responses they provide. Both Christianity and Judaism have historically emphasized engaging with the past and present religious community and that community’s interpretations of sacred texts and traditions. A black box offers no such community to engage with and no interpretive lineage to trace.