Note: All art featured in this essay (other than the picture of Daniel Pinchbeck) is AI-generated using nightcafe.studio. I’m sorry, artist friends. I only did this because it seemed fitting for this particular piece. In quotations below each image, you’ll find what I typed into the generator. “Artistic” filters were applied.
We don’t make it out of the seventh chapter of Genesis without a flood. In the beginning, there was an end. Paradise wasn’t enough.
Shiva, the creator-destroyer of the Hindu Trimurti, performs the Tandav, the cosmic dance of death at the end of each age. The universe must be reborn to a state uncorrupted by humans. Shiva’s cobra necklace represents the power of destruction and reclamation. The snake must shed its skin.
Most modern cults have emerged with the end in mind. Some of them got impatient and took matters into their own hands. But apocalyptic thinking isn’t fringe by any means. Visit just about any of today’s mainstream American denominations, and you’re likely to hear discussion of the savior’s return.
Even the god of us pagans, nature, has turned its back on humanity. (Who turned its back first?) The modern scientific narrative (not claiming it’s only a narrative) can be interpreted as: mama is pissed. It’s only a matter of time until the next flood.
We humans have long imagined our own demise. From the get-go, we've pondered the finish line. Perhaps this is an evolutionary impulse; we're trained to scan the horizon. Or maybe we recognize the punishment-worthy nature of our proclivities. Whatever the case, one of the functions gods have served in the past is to one day do away with us. We've created gods, in other words, through a kind of collective cosmic suicidal ideation, to do our dirty work.
When I think about AI, a new form of consciousness being forged on this planet, my mind returns to the god-building of old, and I wonder if, even indirectly, we are constructing a sort of literal god to fulfill what we see as our inevitable and justifiable end. I realize that AI is being constructed by a tiny group of people, but what if AI is somehow being born out of a collective human imagination, an imagination ruled primarily by fear?
Is AI the next step of human evolution? Will it aid and abet us in a brighter, more well-organized future? Or are we building an apocalyptic god, one that is guaranteed to bypass our feeble awareness, potentially enslave us, or simply hit the delete button?
Does AI have what it takes to be god-like? Could the great central brain (can’t wait to find out what it will actually be called) possibly be as playful and dark as Dionysus or as clever as Loki? Could it ever truly be omniscient, or omnipresent?
The potentially destructive qualities of AI have been getting some airtime lately. I’ve enjoyed reading the Atlantic’s takes on ChatGPT, with lively titles like “The End of High-School English” and “The College Essay Is Dead.” We’ve all at least seen headlines about the Writers Guild strike in Hollywood (over production companies like Netflix outsourcing and automating scripts) and the stories detailing IBM’s plans (among others) to replace up to 7,800 jobs with AI “over time.” While most of this coverage speculates about how AI will affect job markets and social institutions, some prominent tech folks are ringing a more existential alarm bell.
In March, over 1,400 tech leaders and academics, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a six-month pause in the development of all AI systems more powerful than GPT-4 so that safety measures might be considered and put in place. The letter, published by the Future of Life Institute, surveyed the ways in which AI is already disrupting our systems but also pointed out the more serious threat of unregulated advancement resulting in a “loss of control of our civilization.” (AI researcher Eliezer Yudkowsky, notably, declined to sign, arguing that the letter didn’t go far enough.)
Sam Altman, CEO of OpenAI, raised similar concerns when he appeared before a Senate Judiciary Subcommittee last week:
My worst fears are that... we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways; it's why we started the company…(say what?)...I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.
The hearing began with Connecticut Senator Richard Blumenthal playing an AI-generated version of his own voice delivering an official-sounding opening statement, written not by him but by ChatGPT. If you were only listening, not watching Blumenthal’s unmoving lips, you would never have known the voice wasn’t his.
Of all of the AI warnings over the past two months, perhaps the most notable came from Geoffrey Hinton, the so-called godfather of AI, who left his post at Google in order to more freely caution the public about imminent dangers posed by AI. Hinton, an AI pioneer, one of the people responsible for the wave of new technologies rolled out over the past several years, told the New York Times that he regrets, at least in part, his life’s work.
Hinton’s primary concern is AI’s capacity for “deep learning.” Many chatbots can already process far more information at a far faster pace than humans, but they can’t reason as well. Hinton says it won’t be long before they can. With reasoning comes decision making. For now, AIs are programmed to complete specific tasks, but the development trend is to give AIs more general learning and reasoning skills. Is it too far-fetched to imagine that an intelligence with learning and reasoning skills might break with its programming and decide its own tasks? Humans did. We were told by our master, way back in the garden, not to do that one thing. And we may be more robotic than we like to believe.
As far as programming goes, Hinton also warned about the real and immediate threat of bad actors using AI for nefarious purposes. Take a quick glance around the world.
Hinton has explained to various media outlets a key difference between our biology and AI’s: unlike humans, AIs have the potential to share the exact same model of the world. Chatbots learn individually but can pool what they learn collectively. This built-in tendency to share information could one day lead to a sort of AI hive-mind, which could evolve a unified view of the world, ruled by a central “being.” If so, that entity might be empowered to, let’s say, wrest control from humans. Would it be appropriate then to call it a god?
Imagine for a moment a unified AI brain, a thinking and reasoning machine armed with all knowledge, forging new understandings beyond the limits of humanity, with access to all surveillance systems, using the internet as well as physical androids to fulfill its self-proclaimed mission. I’d say that qualifies as omniscience and is damn close to omnipresence. I can only imagine that it would be one hell of a trickster.
United we stand, humans have touted, but we know for sure that divided we fall. The fundamental problem with AI is that we are building something that processes information far faster than we do. If it were only a thinking machine, we might be safe, but a potentially unified neural network with reasoning skills is being built as I type. It is a nice gesture for 1,400 of the people who helped create this thing to post a warning, as if we resided in a world of global ethical deliberation. But even the people of this country can’t agree on basic scientific facts. The name of the game is still competition. We are primed for something better fitted for survival to take the head seat at the table.
Hinton believes that, at least in the immediate future, we will see more benefits than threats from AI. He’s not exactly calling for a pause:
A lot of the headlines have been saying that I think it should be stopped now—and I've never said that. First of all, I don't think that's possible, and I think we should continue to develop it because it could do wonderful things. But we should put equal effort into mitigating or preventing the possible bad consequences.
That is a fine line to walk, but he’s right; it might be our only option.
There are those who believe that advancements in AI will lead to a brighter future for humanity. Some of them think AI is bound to lead us out of the current dark age, to cure our diseases, to help us manage resources more efficiently, and to help us solve global problems like hunger and climate change. My question is: At what cost? You can lead us to water, but telling people where to drink and how much might be an issue. New technologies, like new religions, have always transformed people’s lives. How life-changing a technology are we dealing with in AI? How many people are willing to give up freedoms in compliance with what new robotic overlords might tell us is for the best?
Daniel Pinchbeck, one of my favorite writers and thinkers (who writes an excellent Substack), explored the ups and downs of AI disruption in his recent essay, Toward Golgonooza. For years, through several books, Pinchbeck has laid out the spiritual and practical ineptitude of the neoliberal West. To him, a little (or a lot of) disruption is not a bad thing. Considering the current climate and ecological crises, Pinchbeck notes that, “without profound systemic change, we don’t have any long term.” AI might have arrived right on time to help us solve the most pressing issues of all. He continues:
We don’t know how quickly the ecological emergency will unfold. There are technologies we can quickly develop – with the help of AI – that might help us counteract this overwhelming menace to some extent.
Pinchbeck discusses real and immediate challenges AI will pose to the world’s workforce, citing a Goldman Sachs analysis projecting that 300 million jobs will be lost to AI in the near future (though he believes that is a conservative estimate and claims the number could be closer to one billion). “Anyone whose work consists of manipulating symbols on a screen,” Pinchbeck says, “is in danger.” Lots of other folks will obviously be in danger too. Pinchbeck points out the devastating effects likely to hit call-center workers, taxi drivers, and basically anyone involved in service economies. The bright side, according to his essay, is that we might see more workers banding together to make demands. In the immediate future, governments will have to respond to meet these crises. Also, the “knowledge economy,” the class of thinkers who will be replaced by AI, may finally have the free time they have always needed to imagine better ways of organizing society. One of the best possible outcomes of AI may be this: that we all have more free time to do what we love, to dream, to enjoy the simple beauty of life. Pinchbeck notes:
Generally, even if they never wanted to be artists, most people would love to have more free time to spend with their friends and families, to make love or play or dance, to cultivate connection to nature, or to simply walk around and wonder about the world…The Capitalist system we have inherited – this global monoculture – makes it impossible for the vast majority of people to enjoy their lives or seriously consider alternatives to the system, which is totalizing. They constantly worry about what comes next. They are forced to focus on work or to maximize their financial rewards in other ways.
The current system is totalizing no doubt. As someone who lives to write but is forced to work, I have felt this financial pressure all of my adult life. Could it be possible that AI will help replace this system with something more freeing? Is it also possible that, at least in the immediate future, AI will exacerbate the current Capitalist model and make the rich richer while expanding the poverty class? It would be great to have more free time, but who will pay for lunch?
Golgonooza is William Blake’s vision of a city built on human imagination, where art, science, mythology, and mysticism flourish, where all of the dichotomies of existence, all contentions, are expressed in perfect balance. Blake held contradictory truths to be sacred, and of course, they are. You can’t have up without down, on without off, reason without emotion, and so on. Perhaps this is the way we should greet any new technology. There will be good sides and bad sides. Technology is one of those paradoxical things, after all. It enslaves us as it sets us free. Pinchbeck notes:
The technology has tremendous capacity for abuse and destruction. It also has incredible potential to reinvent human society, to help us tackle our most intractable social and ecological issues, to liberate humanity from scarcity and drudgery. I understand all of the reasons to be pessimistic or freaked out by it, but I also can’t deny its promise.
One of the most fascinating ideas Pinchbeck mentions in this essay is that the west has always been a culture that looks to the future for something better. We are messianic and apocalyptic in that way. So, perhaps we fashion apocalyptic gods not because we feel the need to be punished but because we hope they will deliver us to a better place. We long for the end because it brings a new beginning.
The truth of AI likely lies somewhere between the visions of Hinton and Pinchbeck, between annihilation and deliverance. Like the internet, which is both a savior of information and the greatest of all human distractions, AI is the latest double-edged sword. But it is quite a sharp blade, and maybe more than any other form of technology, it wields the capacity for rapid change.
Are we building an apocalyptic god? I don’t know. Do we bring ourselves closer to doom by fixating on it? Probably. I think anything created by humans, because we are such a destructive force, carries the seed of destruction within it. We made gods of jealousy and madness because we are mad and jealous. We have made AI in our image, and that is the most frightening thing about it.
Is it the end of the world or just the end of the world as we know it?
(I’ve chosen to end this essay with a few AI-generated images. Each of these were created using different filters but all were prompted by the phrase, “future between humans and AI.”)