48. Who will take the danger of robots and artificial intelligence seriously?

The political and societal reaction to robots and artificial intelligence is developing far more slowly than the technology itself. This essay explores the debate and asks why so many people seem unable to take this extinction-level danger seriously.

Much of the recent public discussion about the dangers of Artificial Intelligence has been laughably naïve, not because ‘critics’ overestimate the dangers machines pose, but because of what they claim to be afraid of. Much space is dedicated to discussions about privacy, racial bias, fact-checking and other safe liberal topics. These are token debates that essentially ask for a more woke or regulated A.I., and they function as a smokescreen for the central issues: the danger of mass social dislocation and replacement, and the threat of rendering humans irrelevant.

I have previously written at length about both the specific dangers of robotization for the labor market and about how the culture industry constantly misdirects our concerns regarding robots away from socio-economic problems. Recent developments in artificial intelligence have revitalized this debate: language-based applications such as ChatGPT, which can understand and write almost anything, have made the true power of A.I. visible to the general public. Yet no matter how dire things get, there is always an army of apologists (mostly journalists and professors) ready to calm public concerns and make fun of techno-pessimists like myself. Since the debate has dramatically shifted in favor of my position, in this essay I will focus on the discussion itself rather than on the abilities of machines or the magnitude of the risk. I will argue that the inability to take robotization seriously largely stems from the fundamental failure to consider the capitalist context in which these developments take place. Many optimistic arguments and dreams seem to unfold in an entirely different world, and it is time to reclaim the mantle of realism.

In addition to studying and participating in the academic and media debate, I have been discussing robotization since 2018 with every person willing to engage, including dozens of engineers, company executives, software developers, and others. I will discuss the three most common arguments:

1. “There will always be enough jobs for humans to do”: this argument is usually used as a last line of defense, but it serves as a good opening because of how fast it has aged. When asked what these jobs are, people usually either could not answer or named artists. Besides the fact that the latter are not exactly a strong economic group, Artificial Intelligence has been able to draw pictures, compose music and write fiction for years now. Cook is another popular answer, ironically one of the job categories with the highest estimated risk of automation. A variant of this argument is that we should ‘join the software’ and all become coders. This view was repeated by then presidential candidate Joe Biden in 2019, when he advised people in endangered sectors to learn how to code. Three years later, when in the winter of 2022 it became evident that ChatGPT could write code, hundreds of thousands of IT workers magically started losing their jobs. What makes this kind of Artificial Intelligence so disturbing is that while many previous robots replaced manual labor, it is exactly the creative tasks that are now under attack. The core point to understand is that robots are – by definition – developed to replicate human capacities (talking, building, analyzing, etc.) with a degree of autonomy, and with the explicit intent to replace those humans. While we can list capacities that machines cannot yet perform, this list will keep shrinking, because developers have this exact list in front of them. We are a hunted species.

2. “Humanity survived previous industrial revolutions, so why fear now”: probably the strongest argument of skeptics has been that people have historically feared machines, yet in the 20th century humanity found ways to keep employment at non-catastrophic levels. This argument has always been troubled, not only because millions of people did suffer during previous technological revolutions, but also because that survival required massive government intervention and policy attention. Capitalism did not rebalance itself; governments and public education did. Today the bigger problem is that the current technological developments are not comparable to previous ones, and they have outpaced even recent estimates in both ability and speed. Previous technological changes (such as the internet and computers) allowed for whole new economic sectors such as video games or web design, while the current wave of robotization is mainly focused on automating existing processes. The full weight of these changes is not yet visible everywhere, due to various other dynamics and the time it takes to implement them in real markets. In the West, demographic shifts such as the retirement of the massive baby boomer generation will buy us precious time on the labor market. But in the long term this is a one-way street: once replaced, jobs are hard to bring back unless consumers somehow prefer human-made products.

3. “Robots and Artificial Intelligence will allow us to focus on more productive things”: there are many variants of this argument, but in essence they all revolve around either: a) perceived benefits of A.I. for unrelated social causes such as education or marginalized groups; or b) the idea that work will become easier. Concrete examples of these things actually happening on a large scale are usually scarce, and for good reason. There is a fundamental difference between the theoretical (abstract) applications of a technology and the plausible application of that technology within a concrete economic or political structure. Different societies create different paths of technological development. Many of the optimistic views seem to subconsciously assume some sort of ideal communist state, where those who own the robots take into account the needs of the general population. In reality, intelligent machines are developed not to help poor children but to cut labor costs and increase productivity and control for enterprises – why would we assume that companies will fail in this objective? Imagine Wal-Mart investing millions in reducing its workforce and somehow ending up employing more people, on top of helping indigenous children with their homework as an unforeseen side effect. Our work never truly gets easier; we just cut out people who are soon forgotten and (over)burden the remaining employees with whatever tasks remain. You know this, dear reader, because you have seen it happen in your own work, so why deceive yourself that the outcome would be different this time?

While the debate used to revolve around optimistic or pessimistic estimates of the abilities of machines, today the risks are clearer and skeptics have to fall back on wishful thinking and euphemisms. Large sectors of society (including most politicians) still seem unable or unwilling to confront the issue and look the danger in the face. In recent years I have suggested various possible steps we could take to protect labor markets, from strengthening unemployment insurance to reducing the workweek or strengthening cooperative entrepreneurship. Many of these proposals are essentially socialist measures (pensions, subsidized employment, etc.) recalibrated for the occasion. But when it comes to the broader existential issue of replacing human abilities and creativity, there are only two paths: to contain the development of robots, or to create the political and economic system in which our optimistic dreams are less naïve.

To successfully contain robots and A.I., we must develop a deeper understanding of how the replacement of humans by machines functions in our system. For example, in the case of manufacturing robots, if we take development costs into account, the machines are not necessarily always cheaper than hiring more flexible human workers. Yet the fact that machines, as capital investments, are tax deductible in many countries helps to explain why companies strive to implement them. Technical changes in fiscal or industrial policy to make robots less attractive (or humans more attractive) should be accompanied by more direct control over technological progress. Here I am not just referring to ‘checks and balances’ by experts or ethics councils, but to actual democratic control by the people. Forget the ‘concerns’ of billionaires or A.I. researchers over the technologies that they themselves developed; their wisdom is the wisdom of the addict who has figured out he is addicted. What humanity needs to protect itself is direct democratic control over the legality of certain innovations, for example by letting the population vote on whether they want self-driving cars.

The only other path is to challenge the capitalist system that is currently driving these changes. If A.I. is to be used for the collective benefit of humanity (something I personally don’t believe in), this can only happen within a system whose driving logic is collective benefit. Translated for the tech bros: the ‘algorithm’ cannot be profit for competing elites. In such a system, the decision on the uses of machines cannot rest with the entities that own or develop them. Survival would no longer have to depend entirely on the ability to sign a contract with private employers that might or might not need you. Workers would need to have a voice in which technologies are implemented in their jobs, and via the state the general population should be able to have a say in which challenges A.I. is applied to. In other words, it requires the socialization of the means of production, or as some would call it: communism. If that word scares you, I suggest that you go back to the previous paragraph and help us think about how our current society could save itself.

I have only touched upon a few of the reasons why we fail to take robotization seriously. Beyond the habit of analyzing these developments outside of their capitalist context, many of the world’s most powerful companies have an interest in optimism. Furthermore, a strong impulse to appear hip and modern might explain the behavior of some people, while the simple fear of facing the consequences might make others bury their heads in the sand. In the broadest sense, this topic connects to a more fundamental problem that also plagues debates on climate change: do we still have the ability to take anything seriously? Remember that for many people the first big exposure to A.I. photography (read: limitless fake news) was a set of funny pictures of the pope. We might have to flip Karl Marx’s famous saying on its head: first as farce, then as tragedy.