The infrastructure of meaninglessness
Listen, there are two possible realities, and only one of them is being allowed to happen: AI exists to make your job obsolete, because the alternative is that AI makes managers obsolete.
Let me break down this theory, which is very simple in its premise: those who control the technology will never allow it to eliminate their own positions. Instead, they weaponize it against everyone else while presenting it as inevitable progress. And I am utterly convinced that this theory is strongly connected to bullshit jobs.
In 2018, the late anthropologist David Graeber published his extensive investigation into a peculiar paradox of modern capitalism, for which he coined the term _bullshit jobs_. Despite centuries of technological progress promising to liberate us from drudgery, we have instead created vast bureaucratic structures filled with jobs that serve no social purpose. The people performing these jobs know their roles are meaningless. They spend their days shuffling papers, attending meetings about meetings, performing elaborate rituals of productivity while producing nothing of value.
And now, six years later, a glimmer of hope has finally emerged... Or so everyone hoped. Artificial intelligence arrived with the promise to finally end the drudgery. Instead, everything got spectacularly worse. Some might ask at this point "exactly how?", to which I would ask back "have you been living under a rock?" Alas, let's see how.
## Foundations of extraction
The International Energy Agency projects that by 2030, AI will consume as much electricity as Japan currently uses. Absurd as it might sound, Bloomberg estimates AI will need almost as much power as India. Each data center requires millions of gallons of water daily for cooling. OpenAI's Project Ludicrous facility spans seventeen football fields. We're literally boiling the planet so that people can generate mediocre poetry and corporate communications that no one will ever read.
It's as if the society of the spectacle has won and everything is now just a bad version of reality. And we cannot wake up. If you don't believe in simulation theory and want a more pragmatic framing (poor you), this is simply technofeudalism. Access to the capabilities sold under the banner of AI increasingly determines economic survival. The companies controlling foundation models extract rents from every interaction with their systems. The subscription model ensures ongoing revenue streams (Ed Zitron might have a few words to say on that) while users never gain ownership of the tools they rely upon. This creates dependency relationships that mirror earlier forms of technological lock-in, but at a much larger scale.
The rentier relationship between tech and its users now operates as a double bind. Users pay costly subscriptions in dollars and euros, monthly or yearly. At the same time, their data are harvested and they themselves are deskilled as they offload critical cognitive processes to automated text-prediction machines. Some of them, tech workers in particular, are, if we take the claim "AI will kill your job" at face value, working to make themselves obsolete (whether that can actually happen is a larger discussion, and the AI bubble may well burst before we find out).
The current AI deployment, and the push for adoption from companies left and right, is creating new categories of human work that are even more meaningless than the original bullshit jobs Graeber documented. We now have entire professions dedicated to "prompt engineering": crafting the right input to get useful output from AI systems. We have AI trainers, responsible AI consultants, AI bias auditors, and AI transparency specialists. Each of these roles generates its own bureaucratic apparatus of reports, meetings, standards, and compliance procedures that no one will ever read. And of course, people are now looking for experts who can clean up the slop their vibe coding has created.
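For the uninitiated, here is roughly what the craft of "prompt engineering" amounts to in practice: wrapping a request in boilerplate. A minimal hypothetical sketch; the persona, the audience, and the task are all invented for illustration, and every shop has its own incantations.

```python
# A hypothetical prompt "template", the core artifact of the trade.
def engineer_prompt(task: str, audience: str = "executives") -> str:
    return (
        "You are a senior consultant. "  # the obligatory persona boilerplate
        f"Write a concise summary of the following for {audience}. "
        "Use bullet points and avoid jargon.\n\n"
        f"Task: {task}"
    )

print(engineer_prompt("Q3 compliance review of our AI bias audits"))
```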
> "_AI has always been a marketing phrase that erodes scientific inquiry and scholarly discussion by design, leaving the door open to pseudoscience, exclusion, and surveillance._ "
>
> — **Olivia Guest et al****.**
In academia, universities are scrambling to develop AI literacy curricula, which primarily means teaching students how to interact with chatbots more effectively. Students learn to craft prompts that produce acceptable essays. Professors deploy tools that detect AI-generated text. Students learn to craft prompts that evade detection. This arms race consumes enormous amounts of human effort, energy, and planetary resources while producing nothing of value.
The discourse surrounding AI that has captured the zeitgeist showcases capital's effort to capture the future. Sam Altman's recent blog post "The Gentle Singularity" (google it, I am not linking it) is a prime example of its tone. He writes about "event horizons" and "digital superintelligence". At the same time he is hedging his bets, noting that in the 2030s people will still "swim in lakes" (lucky us). This rhetorical maneuver allows him to maintain the 'revolutionary' narrative that AI needs in order to attract capital, while preparing for the possibility that the technology delivers far less than promised.
## Managerial class preservation
To understand why AI goes after workers rather than managers, we need to recall the specific dynamics of the tech industry that created these systems. Tech companies spent decades convincing their workers they were engaged in world-changing labor. This "vocational awe", as Cory Doctorow calls it, extracted enormous amounts of unpaid labor from engineers who believed they were bringing forth a better technological age. These workers often suspended their personal lives to ship products on time, sustained by the belief that their sacrifice served humanity's greater good.
The irony is that many of these same engineers could command any salary, work anywhere, or negotiate any perk. They were immensely productive. Lines of code generated massive profits. But productivity alone wasn't enough. Companies needed these workers to stay at their desks for every possible hour. Vocational awe provided the ideological framework to extract maximum labor while making workers feel grateful for the privilege.
This arrangement worked wonders for the managerial class. It solved a fundamental problem of modern capitalism: how to extract maximum value from highly skilled workers who theoretically had significant bargaining power. The solution was ideological rather than coercive. Workers policed themselves, pushed themselves harder than any manager could demand, because they believed in the mission.
But the system had one critical vulnerability. If you convince someone they're saving the world, they tend to object when you order them to break the things they sacrificed their lives to build. Tech workers genuinely believed in their products. When executives demanded features designed to manipulate users, harvest data, or degrade the user experience in service of engagement metrics, many refused. They were the last line of defense against what Doctorow calls "enshittification".
To that end, the mass tech layoffs of 2022-2023 were a course correction, later punctuated by Meta announcing a 5% workforce reduction on the same day it doubled executive bonuses. The message was clear: your idealism will no longer be tolerated. Half a million tech workers have been laid off since 2023. Suddenly, the same engineers who once commanded six-figure salaries and lavish perks found themselves competing desperately for positions that pay less and offer fewer protections.
The remaining workers must now do the jobs of their fired colleagues. Google co-founder Sergey Brin tells employees they should aim for a "sweet spot" of 60 hours per week. Amazon Web Services teams have been decimated. The threat of replacement by AI provides perfect cover for an intensification of exploitation. Executives claim AI makes workers more productive, so they need fewer of them. But a programmer working 60-hour weeks to cover for fired colleagues isn't more productive. They're just more exploited and likely burnt out. The touted productivity gains from AI are illusory (marginal at best).
Statistical text generation has legitimate applications, so the technology itself isn't inherently problematic. The problem lies in how it's being deployed and in the claims being made about its capabilities. When executives announce that AI will revolutionize their industries, they're declaring their intention to restructure labor relations in their favor, using AI as leverage.
ChatGPT was released in November 2022. Within months, companies across industries were implementing AI systems and cutting workforce numbers in droves. Did these organizations seriously do due diligence and evaluate the technology's actual capabilities and limitations? Did they conduct rigorous studies of how AI integration might improve their operations? Of course not. That would take time. They saw an opportunity to reduce costs and seized it. The AI narrative provided perfect cover to purge the overhiring they had done during the COVID years.
## Enclosing human creativity
Large language models have, as Charlie Warzel puts it, "devoured the total creative output of humankind". This represents a massive enclosure of the digital commons. Writers, artists, and creators find their work incorporated into systems that have compensated them nothing and that puke out extremely bad versions of their original work.
Technology companies present this as natural and inevitable when it's actually a choice about how we organize economic relations. They present themselves as responsibly managing risks while refusing to slow down development timelines. Critique gets incorporated into the promotional discourse. Acknowledging problems with AI bias, hallucinations, and job displacement becomes a way of demonstrating sophisticated understanding while continuing to accelerate deployment.
Meanwhile, actual productive work becomes harder to accomplish. Search engines now prioritize AI-generated content, which dilutes information quality. Publishing platforms see an influx of synthetic articles, which dilutes the discovery of meaningful content. Social media feeds are littered with AI-generated posts and chatbots trying to pass as regular people, which dilutes any meaningful discourse online. The technology that supposedly enhances productivity actually degrades the information ecologies that productive (and meaningful) work depends on. Meta recently announced entire feeds consisting of nothing but AI posts, this from the same company that had to stop its chatbots from talking about suicide with teenagers.
Recent research from BetterUp Labs and Stanford Social Media Lab has documented the degradation of productivity in precise terms. They've identified what they call "workslop": AI-generated content that masquerades as good work but lacks the substance to meaningfully advance any task. According to their survey of 1,150 U.S. employees, 40% report receiving workslop in the last month alone. These workers estimate that 15.4% of all content they receive at work qualifies as this kind of meaningless AI output.
What's interesting about workslop is how it transfers cognitive burden: when someone uses AI to generate a polished-looking report or presentation, they're not eliminating work. They're shifting it downstream to colleagues who must decode, interpret, and often completely redo it. Each incident of workslop costs an average of 1 hour and 56 minutes to deal with. For a company of 10,000 workers, this translates to over $9 million per year in lost productivity. The MIT Media Lab found that 95% of organizations see no measurable return on their AI investments. Now we know why.
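To see how a figure like $9 million arises, here's a back-of-envelope sketch. The per-incident time comes from the survey cited above; the incident rate and the loaded hourly cost are my own assumptions, chosen only so the arithmetic lands in the study's ballpark, not values the researchers published.

```python
# Back-of-envelope version of the workslop cost estimate above.
# The 1 h 56 min figure is from the BetterUp/Stanford survey; the
# incident rate and hourly cost are assumptions, not study values.

HOURS_PER_INCIDENT = 1 + 56 / 60          # 1 h 56 min per incident (survey)
INCIDENTS_PER_EMPLOYEE_MONTH = 1.0        # assumption
LOADED_HOURLY_COST_USD = 40.0             # assumption

def annual_workslop_cost(headcount: int) -> float:
    """Estimated yearly productivity loss for an organization."""
    hours_per_month = headcount * INCIDENTS_PER_EMPLOYEE_MONTH * HOURS_PER_INCIDENT
    return hours_per_month * LOADED_HOURLY_COST_USD * 12

print(f"${annual_workslop_cost(10_000):,.0f} per year")  # ~$9.3 million
```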
## The costs of busywork
Kate Crawford writes that AI systems operate through a cycle of consumption and excretion that mirrors biological metabolism but serves capital rather than life. The training process devours billions of images, videos, and texts from the internet while excreting synthetic outputs that flood information ecosystems. This creates AI slop, which in turn cannibalizes the markets sustaining human content creators.
Multiple studies show that AI systems degenerate when fed too much of their own outputs, a phenomenon researchers call Model Autophagy Disorder (they literally go MAD... he he). AI will literally eat itself and collapse into nonsense. Yet the response isn't to question the fundamental approach but to engineer "higher-quality synthetic data" and exploit underpaid human workers to supplement datasets.
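A toy illustration of the dynamic, not the setup of the actual studies: fit a one-dimensional Gaussian to data, sample from the fit, refit on the samples, repeat. The small bias and noise of each refit compound across generations and the distribution's spread withers; model collapse in miniature.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: "real" data, a standard normal distribution.
mu, sigma = 0.0, 1.0
n = 50  # samples per generation; small on purpose to speed up collapse

for gen in range(1, 201):
    # Each generation trains only on the previous generation's output:
    # sample from the current model, then refit by maximum likelihood.
    synthetic = rng.normal(mu, sigma, n)
    mu, sigma = synthetic.mean(), synthetic.std()
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")

# sigma drifts toward zero: the model forgets the tails of the original
# distribution and eventually degenerates into near-constant output.
```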
Here's the crucial point that reveals the political nature of AI deployment: artificial intelligence could easily replace most management functions. Engagement metrics, OKRs, performance reviews, agile methods, innovation labs... You get the picture. These are exactly the kinds of pattern-recognition and optimization problems that current AI systems are best suited to. Now, I am not saying that this would be a good thing. I am just saying that there is a class dynamic here that we should unpack. A properly configured system could theoretically perform the core functions of middle management more consistently and efficiently than humans.
What do managers actually do? They aggregate information from multiple sources, identify patterns, make decisions based on those patterns, and communicate those decisions to others. They create reports that synthesize data. They attend meetings where they share those reports. They make projections based on historical trends. Every one of these activities falls squarely within the capabilities of current AI systems; in theory, automating them would even make the output more consistent.
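To make that concrete, here is a deliberately crude sketch of the loop: aggregate, find the trend, project, report. Every name, number, and threshold is hypothetical; the point is only that nothing in the loop requires a manager.

```python
import statistics

# A crude parody of the managerial loop: aggregate metrics, spot the
# trend, make a projection, emit the report. All hypothetical.

def weekly_report(metric: str, history: list[float]) -> str:
    latest = history[-1]
    average = statistics.fmean(history)
    # "Pattern recognition": a naive linear trend across the window.
    slope = (history[-1] - history[0]) / (len(history) - 1)
    projection = latest + 4 * slope  # four weeks out
    verdict = "on track" if projection >= average else "needs attention"
    return (f"{metric}: latest {latest:.1f}, mean {average:.1f}, "
            f"4-week projection {projection:.1f} -> {verdict}")

print(weekly_report("tickets_closed", [42, 45, 39, 47, 51]))
```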
But this will never happen. Not because of technological limitations, but because those deploying the technology are the managers themselves. They're not going to automate their own positions out of existence. Instead, they'll use AI to intensify surveillance of workers, automate the creative and meaningful aspects of work, and concentrate decision-making power even further up the hierarchy.
Workers are told that AI will eliminate their jobs while simultaneously being told to use it in their daily work. This creates a productive anxiety: workers must engage with the very technology that may render them obsolete. The constant discourse about disruption and transformation serves to naturalize precarity and prevent collective resistance.
When a professional uses AI to generate reports, presentations, or communications, they're disconnecting from the content they're responsible for. The result is a workforce that produces more output while understanding less about what they're producing. On the academic side, when a student uses ChatGPT to write an essay, they're not engaging with ideas or expressing thoughts. They're outsourcing the basic cognitive work that education is designed to cultivate.
Uncritical adoption of AI will inevitably produce people without critical thinking, and this may be a feature, not a bug: it represents an attack on human agency itself. The capacity to think, write, and communicate effectively is fundamental to being a conscious participant in democratic society. When these capabilities are delegated to machines, we create a population that consumes and regurgitates information without actually processing or understanding it.
Research shows that workers who receive workslop view their colleagues as less creative, less capable, and less trustworthy. Fifty-four percent see the sender of an AI-generated email as less creative. Forty-two percent see them as less trustworthy. A third report being less likely to work with that person again. The technology that promises to enhance collaboration is systematically destroying it. Workers spend their time crafting diplomatic responses to useless AI output rather than doing meaningful work together.
## The political economy of artificial stupidity
> _“AI” isn’t a tool or technology or even a cluster of technologies with a misleading name. It’s the infrastructure at the foundation of a form of capitalism dependent on data brokering. We should be teaching our students about this and not teaching them about “responsible” use._
> — **Sonja Drimmer**
OpenAI, Anthropic, and similar companies burn billions of dollars providing services below cost, funded by venture capital that is betting on future monopolization. Once competitors are eliminated and users are dependent on their platforms, pricing will likely change. This is the standard playbook of platform capitalism: subsidize adoption, eliminate competition, exploit the captured market.
Big Tech is spending hundreds of billions on AI infrastructure capital expenditures, accumulating vast resources while controlling the means of AI production. The chasm between corporate spending and practical outcomes is staggering. This tells us the bubble (is it only one bubble?) serves exactly its intended purpose: to reorganize power relations under the cover of technological necessity.
The Harvard Business Review's own research confirms this. Despite companies doubling their AI use since 2023, with the number of companies claiming "fully AI-led processes" nearly doubling last year, the actual return on investment remains nonexistent. Workers follow adoption mandates because they have to, creating a spectacle of adoption and progress. But the work produced is so degraded that it creates negative value. Professional services and technology sectors are disproportionately impacted, precisely the industries that should theoretically benefit most from AI enhancement.
Case in point: Deloitte agreed to partially refund the Australian federal government for a $440,000 report that contained several errors, after admitting it had used generative artificial intelligence to help produce it.
A different behemoth of a problem, the environmental costs, remains largely hidden from public discourse. The infrastructural requirements of AI systems are enormous in terms of energy and resources. Data centers require constant cooling, consuming vast amounts of electricity and water. The costs are passed on to society while the profits stay private. The cacophony of AI hype obscures the material reality of data centers' environmental harm.
Mark Fisher argued that capitalist realism operates by making it impossible to imagine alternatives to capitalism. The current moment offers a sobering demonstration. The future appears as something that happens to us rather than something we collectively choose. The discourse emphasizes the urgency of adoption while foreclosing the consideration of alternatives. The continuous invocation of 'disruption' and 'transformation' paints resistance as backward-looking rather than reasonable. This **_temporal_** manipulation is crucial to maintaining consent. The hype machine is geared to persuade us we're living through a revolutionary moment in order to create FOMO.
Workers are right to be worried, but not for the reasons they think. They're not being replaced by superior intelligence. They're being systematically deskilled and made dependent on systems they don't fully shape, controlled by people who view them as obstacles to profit maximization.
## Futures tense
People are increasingly realizing that AI development is the result of decisions about power and resources rather than natural evolution. The companies pushing "AI for social good" are the same ones causing or exacerbating the problems they claim to solve. Microsoft claims to be "safeguarding fundamental rights" while providing technological infrastructure to military operations in Gaza. Google says it wants to "drive positive change in underserved communities" while maintaining contracts with repressive governments, as Abeba Birhane points out. These same companies underreport their emissions by as much as 662%.
The fundamental unreliability of these systems and their owners makes the talk of deploying them against social challenges absurd. Independent evaluations show AI tools miss 31% of potentially lethal melanomas. AI summaries are only half as good as human ones. Large language models make "covertly racist decisions" based on dialects, flatten cultural complexity, and entrench Western biases by default. They're built on datasets containing racial slurs, hateful speech, and negative stereotypes, cleaned and filtered by traumatized workers in the Global South who are paid a pittance to flag videos depicting atrocities.
Alternatives exist but remain marginalized by the dominance of Silicon Valley's extractive model. We possess the technical capacity to create systems that are genuinely helpful and that augment human capability. We could develop tools that enhance creativity rather than supplanting it. But creating such systems would require starting from different premises, actually starting from scratch, prioritizing human and planetary thriving over capital accumulation. It would require a different kind of organizing to build these technologies. Something post-capitalist.
~~AI~~ Managers will continue to target workers' jobs precisely because the alternative, eliminating management, would redistribute power downward rather than concentrating it upward. The machines aren't coming for our jobs anytime soon, although the people who control the machines have been coming for our jobs throughout history. It's just that now they're using artificial intelligence as cover for the same old redistribution of wealth and power from workers to owners.
Until we can envision and construct alternatives to the dominant path of technological development, we will continue to experience innovation as a form of oppression rather than emancipation.
* * *
## Sources
* Graeber, David. _Bullshit Jobs: A Theory_. 2018.
* Fisher, Mark. _Capitalist Realism: Is There No Alternative?_ 2009.
* Against the Uncritical Adoption of 'AI' Technologies in Academia
* How The Internet Died
* Future Shock: Grappling With the Generative AI Revolution
* AI-Generated “Workslop” Is Destroying Productivity
* The enshittification of tech jobs
* There isn’t an AI bubble—there are three
* AI Is a Mass-Delusion Event
* Eating the Future: The Metabolic Logic of AI Slop
* Large Language Muddle