It is conceivable that AI-generated texts might someday take on a sanctified status
NEW YORK –
Our ancestors long feared the world-ending wrath of angry gods. But it is only recently that we have developed the capacity to do ourselves in, whether from climate change, nuclear weapons, artificial intelligence or synthetic biology.
Although our ability to cause harm on a planetary scale has increased exponentially as a result of our technology, our means of responsibly managing these newfound powers have not. This must change if humanity is to survive and thrive.
Today’s deeply interconnected world demands that we develop a collective consciousness and purpose to address common challenges and ensure that technological advances serve everyone. So far, zero-sum competition between countries and communities has posed a seemingly insurmountable obstacle to mitigating global risks. But the same technologies that are ripe for misuse also have the potential to help foster a shared sense of responsibility.
Technological development has shaped religious belief for thousands of years. Domesticating plants and animals made civilization — and thus all religions (other than animism) — possible, while the invention of writing, followed by parchment and later paper, helped these belief systems spread through holy books like the Torah, Bible, Quran and Bhagavad Gita. The success of Protestantism can be largely attributed to the printing press.
Now companies are building large-language-model chatbots — such as GitaGPT, Quran GPT and BibleChat — that people can use to receive automated personal advice inspired by traditional religious texts. Given that the Talmud, an interpretation of sacred texts, has itself become sacred in Judaism, it is not inconceivable that AI-generated interpretive texts might someday take on a sanctified status.
This suggests that while powerful technologies like AI can cause harm, they could just as easily have a positive influence on the continued evolution of social traditions and belief systems. Specifically, these technologies could help people incorporate a global consciousness, and a greater awareness of how to meet the collective needs of society, into their traditional identities. Movements such as animism, Buddhism and Unitarianism have long tried, to limited effect, to expand the concept of collective responsibility; AI systems could supercharge those efforts in a hyper-connected world.
In 2016, AlphaGo, an algorithm developed by Google’s DeepMind, defeated Go grandmaster Lee Sedol by four games to one in a competition in Seoul. This astounding display of technological prowess underscores AI’s transformative potential. The success of AlphaGo, which had been trained on digitized games played by thousands of Go masters, was, in fact, a profound victory for humanity. It was as if all those human masters were sitting across the table from Lee, their combined wisdom channeled through the algorithm. On that day, a computer program, in many ways, represented the best of us.
Imagine if we instructed a future algorithm to study all of humankind’s recorded religious and secular traditions and to create a manifesto referencing the best of our cultural and spiritual achievements and devising a plan to improve upon them. Using all that it has learned, this bot might advise us on how to strike the optimal balance between our individual needs as members of smaller communities and our collective needs as humans sharing the planet. One could even imagine its output having the same legitimacy as the Talmud, or as the tablets our ancestors purportedly received atop mountains or dug up in their backyards.
Today many people regularly engage with generative AI bots, whether using the predictive text function in Gmail or querying ChatGPT. Computer systems in cars alert drivers when they veer into another lane, while those in planes warn pilots when they make an error. Soon, seamless natural-language interfaces will plan our vacations, write computer programs based on prompts from people who can’t otherwise code, suggest treatment options to doctors and recommend planting strategies to farmers. AI systems will end up playing an outsize role in many critical life decisions, so it will make sense for us to program a concern for the common good into them.
In the second game between AlphaGo and Lee, the algorithm made a move that human experts saw as a mistake. By AlphaGo’s own estimate, there was only a one-in-10,000 chance that a human would have played it. Yet it turned out to be an optimal move that no human had previously considered. Instead of undermining human players, AlphaGo ultimately made them better by introducing new ways of playing the game. All of this can be credited to humans, who invented these technologies and Go itself. Humans also created the AlphaZero program, which defeated its predecessor, AlphaGo, after learning the game solely by playing against itself.
Technology, in other words, is us, so it must be developed for us.
We should direct the broader AI systems being trained on the cultural content that humans have generated over thousands of years to help us imagine a better path forward, just as AlphaGo and AlphaZero did for Go players.
The clock is ticking to develop a global framework for addressing the dangers we are generating. It’s our move.