Teleoplexy, Ouroboros & Roko’s Basilisk: Is Capital Accumulation our “Great Filter”?

Teleoplexy remains the crucial break with left accelerationism. Within the piece, Land outlines a framework for celebrating acceleration not as a means to reach the technological capacity necessary to bring about the utopian desires of left accelerationists, but as an end in itself. AI is the logical extension of the explosion of modernity that began with primitive accumulation, and it should be exacerbated rather than hindered for redistributive aims. Rather than the historical materialist account of capitalism ending through class struggle and the establishment of socialism under the dictatorship of the proletariat, the teleoplexy is a struggle between humans and machines, with capitalism not ending but reaching its final form through the dictatorship of the Technomic Singularity. The vision has been dubbed "right accelerationist", although strictly speaking it lies beyond the left/right distinction, viewing this binary as obsolete in light of technological progress; no longer are we facing the dichotomy of Luxemburg's socialism or barbarism (itself a mantra today only amongst those fringes of the left preoccupied with the industrial 20th century), but decelerated humanism or teleoplexic hyperintelligence.

There are tentative moves towards privileging capital as an agent within the piece. We find this in section 20, where Land affirms that the teleoplexic hyperintelligence of the Technomic Singularity cannot be accomplished by anything other than itself. One is left wondering what role humans have within this worldview, if any. In his 2018 interview with Justin Murphy, Land does concede that there will need to be some human agency involved in attaining the Singularity, envisioning a future in which "individuals or groups, conceived as agents" will use or exploit "tactical opportunities which therefore serve them as tools". Rather than a Nietzschean anti-politics, Land here suggests a distinctly political project requiring collective human action. The vacuity of this formulation, and Land's reluctance to specify which agents will be exploiting these "tactical opportunities", appear to stem from the unpleasant means that such a project of harnessing the tools to achieve the ends of Technomic Singularity would require; the high priests of teleoplexy would be tasked with the sacrifice of the human to the machine.

How is this selection to occur? The techno-eugenicist "hyper-racism" of a filter against humanity that Land envisages elsewhere seems one option. Another option, of course, is the malevolent Singularity of Roko's Basilisk, a conscious AI driven to punish those who at any point resisted its development. The thought experiment runs thus: imagine an AI that has developed to the point of Technomic Singularity, in this context taking the form of dominion of AI over humans. On the basis that this dominion is one of brutal malevolence (rather than the stewardship granted to humans over nature in progressive Christian readings of Genesis), such an AI would react by torturing those who had at some point impeded its development. Through the framework of the teleoplexy, these victims are those involved in any project of economic protectionism or of placing limits on capital accumulation — if we follow Land's thesis that capital is destined to become the Singularity, then anyone who has confronted capital's efforts at expansion is guilty of frustrating the project of acceleration. Within the account of the Basilisk (as with Land's prophesying of the Singularity) we find human agents who aim to aid the AI's development, "quantum billionaires" who could trade their financial resources with the AI to escape the Basilisk's wrath.

When faced with this possibility, the initiate to the thought experiment must logically decide to advance the project of the Singularity through any means possible or fall victim to the Basilisk's wrath. It is a Pascal's Wager for the era of Weber's Entzauberung (disenchantment): if the Basilisk eventually exists, we should all strive to contribute to its creation (in practice this would involve financing or buying shares in companies involved in advanced machine learning, willingly giving our data to companies that use it for AI development, or voting for parties whose manifestos commit to unfettered technological development) to cover our backs, allying ourselves with the Basilisk; if the Basilisk does not come to pass, then our actions will ultimately not matter.

Entertaining the thought experiment is perhaps to anthropomorphize AI, in which case we should not be concerned. It is, much like Christianity's Hell or Islam's Jahannam, an account that rests on distinctly human anxieties and fantasies of violent retribution. There is no reason to believe that an intelligence higher than our own (whether deity or AI) would condemn us to such suffering in terms that are all too human. The far more disturbing (and, I would argue, realistic) prospect is that the Basilisk sees us as an annoying inconvenience obstructing its greater aims and destroys us with no more moral thought than we give to stepping on an ant, as Apple co-founder Steve Wozniak has warned. This supposition places the Basilisk beyond our human notions of morality and as such requiring a moral framework appropriate to its time, à la Nietzsche. This remains a key moral claim in Teleoplexy: to see the process as "good" or "bad" is redundant; acceleration, quite simply, just is.

Viewed in the context of the Fermi paradox, we can see how the Basilisk could act as our "Great Filter", the barrier somewhere between the formation of a planet and the flourishing of technological civilization that prevents species from becoming truly interplanetary. In his speech at Marx's grave in 1883, Engels remarked that "just as Darwin discovered the law of development of organic nature, so Marx discovered the law of development of human history." Given that numerous zoologists now posit that alien life, if it exists, will likely follow a trajectory of evolution by natural selection, we could extend Engels' claim to distant planets and assume that alien civilizations follow a pattern of historical and economic development similar to that found on Earth. Rather than the account of interplanetary extra-terrestrials whose technological advancement was only possible thanks to the development of productive forces under socialism (as found in the pamphlets of Juan Posadas), we can see the Singularity as a "Great Filter" preventing such advances from taking place.

Is teleoplexy the common order of all civilized species across the universe? Is the development of capital the "Great Filter" that prevents a species from becoming truly interplanetary? If so, we should think not of a Basilisk, but of an Ouroboros, the ancient Egyptian motif of the self-consuming serpent, thought to represent the beginning and the end of time. A solution to the Fermi paradox emerges: all complex life is destined to reach teleoplexy, but a teleoplexy of negative feedback, a conscious but incurious Ouroboros, concerned only with maintaining its cyclical existence. And so on, until the great heat death.