With What Makes Us Human, the artificial intelligence program GPT-3, along with Iain Thomas and Jasmine Wang, has written a fascinating book at the somewhat fluid intersection of self-help and wisdom literature. Given the capacity of today's computers and the cleverness of their programmers, I don't think we should be surprised that such a book is possible; I will return to this point shortly. But I would also suggest that the book's subtitle, An Artificial Intelligence Answers Life's Biggest Questions, does not tell the whole story. It is not hard to answer the biggest questions. It is hard to provide good answers to the biggest questions. To evaluate the quality of the answers provided by GPT-3, it is useful to have a sense of where they came from.
GPT-3 is a large language model, a form of artificial intelligence that (more or less) consumes vast quantities of text and derives from them multi-dimensional statistical associations among words. For instance, when asked to complete the phrase "Merry . . . ," GPT-3 replies "Merry Christmas," not because it knows anything of Christmas or merriment, but simply because those words stand in a close statistical relationship in its training data. When asked to complete the phrase without using the word Christmas, it comes up with "Merry holidays" and then "Merry festivities." One of the key features of this kind of program is that its internal operations are almost entirely opaque; it is extremely difficult to determine how the program reaches the decisions that produce the outputs that it does.
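To make the idea concrete, here is a toy sketch of my own, not anything from the book: a bigram counter over an invented miniature corpus that "completes" the word merry by picking its most frequent follower. GPT-3's actual machinery is vastly larger and more sophisticated, but the principle of completion-by-statistical-association is the same.

```python
# A toy illustration (not GPT-3's mechanism): count which word most often
# follows "merry" in a tiny, invented corpus, then "complete" the phrase.
from collections import Counter, defaultdict

corpus = (
    "we wish you a merry christmas . "
    "merry christmas to all . "
    "a merry holidays greeting . "
    "eat drink and be merry ."
).split()

# Bigram frequencies: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, banned=()):
    """Return the most frequent follower of `word`, skipping banned words."""
    for candidate, _count in follows[word].most_common():
        if candidate not in banned:
            return candidate
    return None

print(complete("merry"))                        # -> 'christmas'
print(complete("merry", banned={"christmas"}))  # -> 'holidays'
```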
What Makes Us Human was produced by a GPT-3 that had been “prompted” “with selected excerpts [elsewhere characterized as “a few select examples”] from major religious and philosophical texts that have formed the basis of human belief and philosophy, such as the Bible, the Torah [sic], the Tao Te Ching, Meditations by Marcus Aurelius, the Koran, the Egyptian Book of the Dead, Man’s Search for Meaning by Victor Frankl, the poetry of Rumi, the lyrics of Leonard Cohen, and more.” (A bibliography would have been useful.) Then the two humans would ask some big question (“What is love?” “What is true power?”) and subsequently ask the model to “elaborate or build on” “the most profound responses.” The book is the result of “continuing to ask questions after first prompting GPT-3 with a pattern of questions and answers based on and inspired by existing historical texts” representing “the amalgamation of some of mankind’s greatest philosophical and spiritual works.”
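As a rough illustration of the procedure the authors describe, prompting with a pattern of questions and answers and then posing a new question, one might imagine something like the following. The example Q&A pair, the model parameters, and the API call are my hypothetical reconstruction using the pre-1.0 openai Python client that served GPT-3; this is not Thomas and Wang's actual code or prompt.

```python
# Hypothetical reconstruction of a few-shot prompt in the style described:
# seed Q&A pairs establish the pattern, then a new "big question" is posed
# for the model to continue. The seed answer below is invented.
import openai  # legacy pre-1.0 client; reads OPENAI_API_KEY from the environment

prompt = """Q: What is true power?
A: True power is the ability to remain at peace in the midst of change.

Q: What is love?
A:"""

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 model family
    prompt=prompt,
    max_tokens=100,
    temperature=0.9,    # higher temperature yields more varied responses
    stop=["\nQ:"],      # stop before the model invents its own next question
)
print(response.choices[0].text.strip())
```

On this pattern, the humans curate whichever completions strike them as "most profound" and feed those back as further examples, which is why the result reflects their editorial sensibility as much as the model's statistics.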
So when Thomas and Wang note that "We have done our best to edit everything as little as possible," the disclaimer must be understood within this framework of iterative "engineering" of GPT-3 responses through their own sense of what is profound, a process that would only be undertaken by quite an intrusive . . . editor. In addition, they acknowledge two (more) significant editorial decisions. "In all instances, so as not to cause offense, we have replaced the various names for God with the words, 'the Universe.' Our goal is to unite around a common spiritual understanding of each other, and so while our decision may be divisive, we hope you understand the intention behind it." As it turns out, Thomas and Wang do not entirely avoid mentioning God in their pursuit of unity by way of a divisive decision.
Read it all.