Transhumanism and the risk of stable totalitarianism
Transhumanism seems to be a new religion, whose main goal (or ‘myth’) is for humanity to take control of its own evolution by merging with technology. Our analyst Timo Pieters explores how these ‘transhuman’ technologies could increase the risk of stable totalitarian regimes in the future.
The Great Reset and transhumanism
In a previous EARS article, I explored the World Economic Forum’s (WEF) plan for our post-COVID future, called the ‘Great Reset,’ and how it promotes ‘transhuman’ values and technologies. Transhumanism has many religious characteristics, and some see it as a new religion in its own right. It is based on the idea that human beings should take charge of their own evolution by merging with technology, such as brain chips and genetic engineering. The goal is to ‘transcend’ our physical and mental limitations. The WEF mostly emphasises the positive, utopian side of such technologies, but they clearly also have the potential to bring about nightmares, such as a totalitarian regime that becomes impossible to overthrow.
Effective altruism and existential risks
There is another philosophical movement, Effective Altruism (EA), that is actively trying to prevent such outcomes, which EA researchers call ‘existential risks.’ These are risks that could either completely destroy sentient life on Earth, or at least permanently block its potential for future flourishing and well-being. Such scenarios, like nuclear war, engineered pandemics, or artificial intelligence turning against humans, are often the subject of disaster movies. The problem is that these risks are all too real, that they are among the worst things that could happen to us, and that most of them would arise from human activity. In fact, one of the main researchers in the field, Nick Bostrom, argues that the biggest risks would be caused by future technological breakthroughs that give us unprecedented power over the material world and our own biology. Unfortunately, these are exactly the sorts of transhuman technologies that the WEF is promoting with its ‘Great Reset’ plan. The WEF says it aims to use these technologies to “support the public good.” However, this does not necessarily mean that they will have no adverse, unintended consequences in the long run, such as enabling a tyrannical regime to gain and maintain power like never before.
The long-term risk of stable totalitarianism
If we look at the history of totalitarian regimes, we can see that they were some of the worst things that have ever happened. In the 20th century alone, three totalitarian states (Nazi Germany, the Soviet Union, and Maoist China) managed to kill 100 million civilians in cold blood. The only fortunate aspect of such regimes is that they tend not to last very long. One can then understand why a global, ‘stable’ totalitarianism – one that would last 1,000 years or more – would be an absolute nightmare. If there is any chance of this happening at some point in the long-term future, we should do everything we can to prevent it. Luckily, there are serious researchers working on how to do this.
The five ingredients of totalitarian regimes
According to historian Richard Pipes, totalitarian regimes need five ingredients to function: (1) an all-embracing ideology, (2) a single party, headed by a ‘leader,’ which dominates the state, (3) police terror, (4) the ruling party’s control of the means of communication and the military, and (5) central command of the economy. Economist Bryan Caplan explains that all five “naturally flow from the goal of remaking human nature.” Interestingly, this is also the goal of transhumanism. It is therefore no surprise that transhuman technologies can facilitate these five ingredients.
How transhuman technologies can facilitate the five ingredients
Concerning the first ingredient, brain chips and brain scans could be used to identify dissidents, and pharmaceuticals could be developed to suppress skepticism. The state could even take charge of reproduction itself, selectively breeding and raising children in ‘state orphanages’ to ensure conformity. According to Caplan, the main political ideology to avoid is the idea that we need a ‘world government’ to protect us against other existential dangers like terrorism, environmental catastrophe, or pandemics.
Concerning the second, life extension technology could keep the ruling party and its leader alive indefinitely, eliminating the vulnerable moment when the leader is replaced.
Concerning the third, the risk of police terror and military control could be heightened by autonomous weapons, such as AI-powered drones and nano-bots, and by surveillance technology, such as lie detection based on facial expressions. The nation that first discovers such innovations could also exploit them to dominate the rest of the world, or its own population.
Concerning the fourth, some control over the means of communication (including news) is already taking place, though mostly by Big Tech companies (corporate-states) and government censorship. This has been especially strong during the COVID-19 crisis. New AI innovations like ‘deepfakes’, bot armies, and spyware could make censorship and propaganda even more effective than they already are. A totalitarian state with complete control over the information space could even rewrite history, as in George Orwell’s novel 1984.
Finally, the risk of the fifth ingredient could be worsened by making “citizens dependent on the state for their livelihood,” for example by replacing most human labour with intelligent machines and providing a universal basic income. Further digitalising our monetary system could also enable the state to monitor and control all transactions, and thereby disable the finances of anyone critical of the regime.
What all this shows is that transhuman technologies make it much easier for a stable totalitarianism to emerge, and to ruin millions of lives in the process. Transhuman technologies could therefore be seen as existential risks in themselves.
Transhumanism: a dangerous religion?
Now let us consider Yuval Noah Harari’s hypothesis that transhumanism will become the dominant religion of humanity. The central aim (or ‘myth’) of this religion seems to be that humans should take charge of their own evolution by merging with technology. However, if the technologies we merge with increase the risk of a hell on earth (‘stable totalitarianism’), then we should ask ourselves whether mass conversion to this religion would present an existential risk in itself. If our transhuman values and goals cause us to produce technologies that could ruin millions of lives, then perhaps there is something wrong with those values and goals. After all, the only reason to create a new technology (such as brain chips) is because it can do something we find valuable, meaning that technology is not as value-neutral as we like to think. The ultimate question, then, is whether we want to promote transhuman goals and values. In my next weekly comment article, I will further explore these values, and whether they are worth striving for.
This article was written by Timo Pieters and reflects his personal analysis and opinion, rather than those of EARS.
Interested in similar topics? Go to our Dashboard and receive free updates.
Transhumanism | Definition, Origins, Characteristics, & Facts. Yuval Noah Harari’s book Homo Deus puts forward the thesis that transhumanism is a new religion that will likely become the dominant religion of humanity.
 Now is the time for a ‘great reset’: “The third and final priority of a Great Reset agenda is to harness the innovations of the Fourth Industrial Revolution to support the public good, especially by addressing health and social challenges.”
Caplan, Bryan. “The Totalitarian Threat,” in Global Catastrophic Risks, ed. Nick Bostrom (New York: Oxford University Press, 2008), pp. 504–520, p. 504.
Contrary to popular impression, Maoist China was the worst of the three, killing approximately 65 million civilians.
Ibid., p. 509. This definition of ‘stable totalitarianism’ is put forward by Bryan Caplan and others.
Ibid. Transhumanism advocates changing human nature by merging human beings with technology. This would result in new, post-human beings, which Harari calls homo deus, as opposed to homo sapiens.
Ibid., p. 511. Genetic engineering could also be used to eradicate ‘anti-party genes’ from the population. The idea of letting the state raise children away from their parents can already be found in Plato’s Republic, and has also been explored in the novel Brave New World by Aldous Huxley.
 Caplan, “The Totalitarian Threat.” This is because one of the weak points of a totalitarian state is the existence of neighbouring, non-totalitarian states. In a world government this no longer exists.
Ibid. The death of the leader(s) is usually the moment when a totalitarian regime weakens or collapses, so if the leader stays alive indefinitely, the system would be far more stable over time.
 Caplan, “The Totalitarian Threat,” p. 512.
Deibert, Ronald. Access Denied: The Practice and Policy of Global Internet Filtering. The Information Revolution and Global Politics. Cambridge, Mass.: MIT Press, 2008. See also: U.S. Accuses Google of Illegally Protecting Monopoly.
De Wit, Bob. Society 4.0: Resolving Eight Key Issues to Build a Citizens Society. Alphen aan den Rijn: Vakmedianet BV, 2021, p. 14.
COVID‐19 and misinformation: Is censorship of social media a remedy to the spread of medical misinformation? EMBO Reports, Vol. 21, No. 11. See also: The Coronavirus Has Started a Censorship Pandemic in Turkey, Hungary, Thailand, and Around the World. Scientists who have been censored include John Ioannidis, Mike Yeadon, Knut Wittkowski, and others.
 In the novel, Orwell explores his vision of a future totalitarian state with continuous warfare, mass surveillance, ‘thought police,’ and pervasive propaganda. Many regard it as one of the best works of the 20th century, because it captured the essence of the totalitarian mindset so well.
 Caplan, “The Totalitarian Threat,” p. 505.
 AI is a perfect example, because it has to be programmed (instilled with values) by humans, which often creates biases: Artificial Intelligence Has a Problem With Gender and Racial Bias.