Sinners in the Hands of an Angry Basilisk: Some early thoughts on the Political Theology of AI 

[Image: Dulle Griet, by Pieter Bruegel the Elder]

O sinner! Consider the fearful danger you are in: it is a great furnace of wrath, a wide and bottomless pit, full of the fire of wrath, that you are held over in the hand of that God, whose wrath is provoked and incensed as much against you, as against many of the damned in hell. You hang by a slender thread, with the flames of divine wrath flashing about it, and ready every moment to singe it, and burn it asunder – “Sinners in the Hands of An Angry God,” A Sermon Preached at Enfield, July 8th, 1741 by the Rev Jonathan Edwards 

Over the course of walking to and from work, I’ve been listening to the audiobook of Karen Hao’s Empire of AI. It is a fluently told journalistic history of the first generation of AI development and commercialization, and invaluable background on the business dealings between the big players in that world between 2019 and around 2023. It focuses predominantly on OpenAI and particularly on Sam Altman. Yet it isn’t just a recounting of the Bay Area tech scene but an impeccably researched exploration of the inherent extractivism, colonialism and material domination of the sector which now, by and large, underpins the US economy. It’s taken longer than I’d like to get through the book — this isn’t because the book is badly written; in fact, quite the opposite. Rather, it’s full of long and carefully sourced quotes from all the movement’s leading lights, and I physically cannot listen to the words these people allowed a journalist to record without wanting to scream until I throw up.

Look, I’ve made it pretty clear that I’m not a fan of AI, but I think it’s a little too easy to allow oneself to remain at a kind of instinctive, knee-jerk rejection of this shit. With that in mind, Hao’s book is invaluable in giving some background to the exact how and why of our current conjuncture. What’s really striking about the book, particularly its early segments, is the kind of immateriality of AI discourse. What AI is isn’t super clear, even to the very technically gifted people involved, and so, as a result, artificial intelligence is posited as a kind of speculative sigil of possibility. AI is not what is, but merely what may be — in other words, the very founding of OpenAI by Sam Altman, Elon Musk and others is an act of faith. It seems to me that what we’re dealing with is a kind of political theology.

Political theology is a pretty contested field of discourse, but I like Adam Kotsko’s definition — it’s not necessarily about saying that politics is theological (because of course it is) or about saying that the theological is political (because again, of course it is) but rather about the synchronic parallels between the two fields and the diachronic processes by which concepts migrate between them.

There’s an extremely revealing moment towards the beginning of the book, where Hao is actually invited to OpenAI’s office and sits down with Greg Brockman, the former CTO and current president of the organization. Brockman comes off as excitable — a true believer in the mission. One of the most interesting moments is when Hao asks a pretty reasonable question: what is this stuff supposed to do? Brockman responds that what OpenAI is committed to is artificial general intelligence (or AGI) – a technological breakthrough that could solve the really big problems. For example? Climate change. But the problem with Brockman’s claim is that solutions or mitigations to the urgent problem of climate change already exist: rapid decarbonization of the global economy, massive investment in renewable energy, and a pivot in the world economy towards rewilding. The thing that Brockman doesn’t admit — in fact, can’t admit — is that these ideas, and many others, don’t depend on technological solutions; they depend upon transformations in the political and economic structures which are at present of direct benefit to people like him. This, of course, is completely inadmissible for Brockman, so the techno-soteriology has to step in: the nebulous promise of AGI. Or, as the commonly accepted definition among AI acolytes seems to go, an intelligence that would exceed human cognition in most “economically valuable tasks.”

Hao’s chapters on the data-labeling sweatshops in Kenya and the lithium mines feeding the GPU boom make the extractivism undeniable. Yet, even as she catalogs the human and ecological toll, the executives she interviews like Brockman keep lapsing into eschatological language: “We are building the future of humanity,” “This is our civilizing mission.” The materiality documented so clearly — the colossal energy draw, the outsourced and exploited cognitive labor, the colonial supply chains — only serves to throw the immaterial promise into sharper relief. For the AI world, the machine God is always just over the horizon and so the present suffering is not exploitation but an economic investment in the coming kingdom. 

Something that Hao’s book makes very clear is that it isn’t just that AI tech bros have a particular set of political or ideological commitments. Rather, when you really listen to these people, there’s a panicked political deferral that runs through so much AI boosterism. What these people want (influenced by the anti-democratic reactionary philosophy of Thiel and others) is a world in which politics itself is no longer thinkable at all. Thiel, of course, famously thinks that freedom and democracy are incompatible, and to push things a little further, when taken with the broader tech-world disdain towards the people who will be “left behind,” it’s clear that the AI crowd seem to see the world as full of other people who are not really people at all. This is one of the clearest theological and political homologies at work, resting on two of the key pillars of Calvinism: unconditional election for the chosen few, and limited atonement for those to be left behind as the elect build a new world.

The founders of the AI movement have a particular eschatology, a distinct teleology in mind for the world, and a clear mechanism of salvation. Eschatologically speaking, AGI represents the concretization of the end of history and the utter triumph of capitalist realism, wherein every private thought, every expression of interiority, is outsourced, commodified and sold back to us. In terms of the teleology, there seems to be some debate between the apocalypticists like the ludicrous fanfic writer Eliezer Yudkowsky and the optimists (a position put across most clearly by someone like Ray Kurzweil). The mechanism of salvation is also twofold — on the one hand, there is the beneficent machine God created and “aligned” in the image of people like Sam Altman. The flip side is that these people have collectively invented the idea of hell with extra steps. After inventing their beneficent deity through the ontological argument (AGI as thinkable) and the teleological argument (conditioned by Moore’s Law, which in its “hyper” version becomes a historical inevitability), it becomes necessary to subordinate all work to bringing it into being. As Marc Andreessen writes: “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.” All are called and few are chosen in this theology, and not only must the chosen work for its actualization; to fail to do so is to risk being tortured by Roko’s Basilisk. Or, as the old Calvinists would put it: the saints must endure.

Something that I don’t think I appreciated enough is the degree to which this kind of technological Calvinism produces a particular psychological effect. Limited atonement combined with unconditional election is a recipe for panic. Calvinist theology is a horror story (read Hogg’s Memoirs and Confessions of a Justified Sinner to see this in action). For the true believer, the world as it is melts in the righteous fire of God’s judgement. You cannot choose to be saved — grace is outside one’s control, and the debt owed is so great that there is no way you can ever work to pay it off. The twist, of course, is that the contemporary political theology of AI makes work a structural necessity, leaving the individual caught between the limited atonement of theology on one side and the inescapable debt of capitalist obligation on the other. AI is a theology of debt made inescapable. As Sam Altman put it recently: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” No term suits this political theology better than Satanic: anti-utopian in the most nightmarish sense.

My friend and comrade, the philosopher Adam C. Jones, wrote an excellent piece about this a while back, arguing that 

‘AI’ is a cult purely because capitalism is a cult, and that ‘AI’ names nothing technological in the sense of a new machine, but rather is the ideological shine of capitalism which has at last appeared in the age of omnipresent simulation

For Jones, the argument to be made here runs through Benjamin’s fragment on capitalism as religion. For Benjamin, capitalism is cultic – things only have meaning in relation to the cult. AI, then, is the particular expression of a broader theo-political problem, a cultic intensification of an already existing theology – America’s capitalist Calvinism. Benjamin insisted that the only response to a cult that offers no atonement is to expose its roots in surplus-value extraction. Jones closes his piece by urging us to “illuminate and profane” the satanic roots of AI’s mysteries. The task is not simply to debunk the technology (it is real enough) but to mount a reformation that strips away the theological aura that lets a handful of labs extract the planet’s resources and labor while promising salvation.
