AI has become a deeply polarizing issue on the left, with many people concerned about its reliance on unauthorized training data, its displacement of workers, its lack of creativity, and its environmental costs. I’m going to argue that while these critiques warrant attention, they overlook the broader systemic context. As Marxists, our focus should not be on rejecting technological advancement but on challenging the capitalist framework that shapes its use. By reframing the debate, we can recognize AI’s potential as a tool for democratizing creativity and accelerating the contradictions inherent in capitalism.

Marxists have never opposed technological progress in principle. From the Industrial Revolution to the digital age, we have understood that technological shifts necessarily proletarianize labor by reshaping modes of production. AI is no exception. What distinguishes it is its capacity to automate aspects of cognitive and creative work such as writing, coding, and illustration that were once considered uniquely human. This disruption is neither unprecedented nor inherently negative. Automation under capitalism displaces workers, yes, but our critique must target the system that weaponizes progress against workers rather than the tools themselves. Resisting AI on these grounds mistakes symptoms such as job loss for the root problem of capitalist exploitation.

Democratization Versus Corporate Capture

The ethical objection to AI training on copyrighted material holds superficial validity, but only within capitalism’s warped logic. Intellectual property laws exist to concentrate ownership and profit in the hands of corporations, not to protect individual artists. Disney’s ruthless copyright enforcement, for instance, sharply contrasts with its own history of mining public-domain stories. Meanwhile, OpenAI’s scraping of data at scale exposes the hypocrisy of a system that privileges corporate IP hoarding over collective cultural wealth. Large corporations can ignore copyright without being held to account while regular people cannot. In practice, copyright helps capitalists far more than it helps individual artists. Attacking AI for “theft” inadvertently legitimizes the very IP regimes that alienate artists from their work. Should a proletarian writer begrudge the use of their words to build a tool that, in better hands, could empower millions? The true conflict lies not in AI’s training methods but in who controls its outputs.

Open-source AI models, when decoupled from profit motives, democratize creativity in unprecedented ways. They enable a nurse to visualize a protest poster, a factory worker to draft a union newsletter, or a tenant to simulate rent-strike scenarios. This is no different from fanfiction writers reimagining Star Wars or street artists riffing on Warhol. It’s just collective culture remixing itself, as it always has. The threat arises when corporations monopolize these tools to replace paid labor with automated profit engines. But the paradox here is that boycotting AI in grassroots spaces does nothing to hinder corporate adoption. It only surrenders a potent tool to the enemy. Why deny ourselves the capacity to create, organize, and imagine more freely, while Amazon and Meta invest billions to weaponize that same capacity against us?

Opposing AI for its misuse under capitalism is both futile and counterproductive. Critiques about creativity confuse corporate mass-production with the experimental joy of an individual sketching ideas via tools like Stable Diffusion. Our task is not to police personal use but to fight for collective ownership. We should demand public AI infrastructure to ensure that this technology is not hoarded by a handful of corporations. Surrendering it to capital ensures defeat, while reclaiming it might just expand our arsenal for the fights ahead.

Creativity as Human Intent, Not Tool Output

The claim that AI “lacks creativity” misunderstands both technology and the nature of art itself. Creativity is not an inherent quality of tools — it is the product of human intention. A camera cannot compose a photograph; it is the photographer who chooses the angle, the light, the moment. Similarly, generative AI does not conjure ideas from the void. It is an instrument wielded by humans to translate their vision into reality. Debating whether AI is “creative” is as meaningless as debating whether a paintbrush dreams of landscapes. The tool is inert; the artist is alive.

AI has no more volition than a camera. When I photograph a bird in a park, the artistry does not lie in the shutter button I press or the aperture I adjust, but in the years I’ve spent honing my eye to recognize the interplay of light and shadow, anticipating the tilt of a wing, sensing the split-second harmony of motion and stillness. These are the skills that allow me to capture images such as this:

Hand my camera to a novice, and it is unlikely they would produce anything interesting with it. Generative AI operates the same way. Anyone can type “epic space battle” into a prompt, but without an understanding of color theory, narrative tension, or cultural symbolism, the result is generic noise. This is what we refer to as AI slop. The true labor resides in the human ability to curate and refine, to transform raw output into something resonant.

People who attack gen AI on the grounds that it is “soulless” are recycling a tired pattern of gatekeeping. In the 1950s, programmers derided high-level languages like FORTRAN as “cheating,” insisting real coders wrote in assembly. They conflated suffering with sanctity, as if the drudgery of manual memory allocation were the essence of creativity. Today’s artists, threatened by AI, make the same error. Mastery of Photoshop brushes or oil paints is not what defines art; it’s a technical skill developed for a particular medium. What really matters is the capacity to communicate ideas and emotions through a medium. Tools evolve, and human expression adapts in response. When photography first emerged, painters declared mechanical reproduction the death of art. Instead, it birthed new forms such as surrealism, abstraction, and cinema that expanded what art could be.

The real distinction between a camera and generative AI is one of scope, not substance. A camera captures the world as it exists while AI visualizes worlds that could be. Yet both require a human to decide what matters. When I shot my bird photograph, the camera did not choose the park, the species, or the composition. Likewise, AI doesn’t decide whether a cyberpunk cityscape should feel dystopian or whimsical. That intent, the infusion of meaning, is irreplaceably human. Automation doesn’t erase creativity; all it does is redistribute labor. Just as calculators freed mathematicians from the drudgery of arithmetic, AI lowers technical barriers for artists, shifting the focus to concept and critique.

The real anxiety over AI art is about the balance of power. When institutions equate skill with specific tools such as oil paint, Python, or DSLR cameras, they privilege those with the time and resources to master them. Generative AI, for all its flaws, democratizes access. A factory worker can now illustrate their memoir, and a teenager in Lagos can prototype a comic. Does this mean every output is “art”? No more than every Instagram snapshot is a Cartier-Bresson. But gatekeepers have always weaponized “authenticity” to exclude newcomers. The camera did not kill art. Assembly lines did not kill craftsmanship. And AI will not kill creativity. What it exposes is that much of what we associate with the production of art is rooted in specific technical skills.

Finally, the “efficiency” objection to AI collapses under its own short-termism. Consider that just a couple of years ago, running a state-of-the-art model required a data center full of GPUs burning through kilowatts of power. Today, a DeepSeek model runs on a consumer-grade desktop using a mere 200 watts of power. This trajectory is predictable. Hardware optimizations, quantization, and open-source breakthroughs have slashed computational demands exponentially.
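
To make the quantization point concrete, here is a rough back-of-envelope sketch in Python. The 70-billion parameter count and the bit-widths below are assumptions chosen purely for illustration, not measurements of any particular model; the point is simply that cutting weights from 16 bits down to 4 bits shrinks the memory needed to hold a model by roughly a factor of four, which goes a long way toward explaining how frontier-class models end up running on home hardware.

```python
# Back-of-envelope estimate of how quantization shrinks the memory needed
# to hold a model's weights. The 70B parameter count below is an assumption
# for illustration, not a claim about any specific model.

def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate gigabytes required just to store the weights."""
    return num_params * bits_per_weight / 8 / 1e9

params = 70e9  # hypothetical 70-billion-parameter model

for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weight_memory_gb(params, bits):.0f} GB")

# Prints roughly 140 GB at 16-bit, 70 GB at 8-bit, and 35 GB at 4-bit:
# the difference between needing a rack of datacenter GPUs and fitting
# the same weights on a single well-equipped consumer machine.
```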

Critics cherry-pick peak resource use during AI’s infancy. Meanwhile, AI’s energy footprint per output unit plummets year-over-year. Training GPT-3 in 2020 consumed ~1,300 MWh; by 2023, similar models achieved comparable performance with 90% less power. This progress is the natural arc of technological maturation. There is every reason to expect that these trends will continue into the future.

Open Source or Oligarchy

To oppose AI as a technology is to miss the forest for the trees. The most important question is who will control these tools going forward. No amount of ethical hand-wringing will halt the development of this technology. Corporations will chase AI for the same reason 19th-century factory owners relentlessly chased steam engines: automation allows companies to cut costs, break labor leverage, and centralize power. Left to corporations, AI will become another privatized weapon to crush worker autonomy. However, if it is developed in the open, it has the potential to become a democratized tool that expands collective creativity.

We’ve seen this story before. The internet began with promises of decentralization, only to be co-opted by monopolies like Google and Meta, who transformed open protocols into walled gardens of surveillance. AI now stands at the same crossroads. If those with ethical concerns about AI abandon the technology, its development will inevitably be left solely to those without such scruples. The result will be proprietary models locked behind corporate APIs that are censored to appease shareholders, priced beyond public reach, and designed solely for profit. It’s a future where Disney holds exclusive rights to generate “fairytale” imagery, and Amazon patents “dynamic storytelling” tools for its Prime franchises. This is the necessary outcome when technology remains under corporate control. Under capitalism, innovation always serves monopoly power rather than the interests of the public.

On the other hand, open-source AI offers a different path forward. Stable Diffusion’s leak in 2022 proved this: within months, artists, researchers, and collectives weaponized it for everything from union propaganda to indigenous language preservation. The technology itself is neutral, but its application becomes a tool of class warfare. The fight should be for public AI infrastructure, transparent models, community-driven training data, and worker-controlled governance. It’s a fight for the means of cultural production. Not because we naively believe in “neutral tech,” but because we know the alternative is feudalistic control.

The backlash against AI art often fixates on nostalgia for pre-digital craftsmanship. But romanticizing the struggle of “the starving artist” only plays into capitalist myths. Under feudalism, scribes lamented the printing press; under industrialization, weavers smashed looms. Today’s artists face the same crossroads: adapt or be crushed. Adaptation doesn’t mean surrender; it means figuring out how to organize effectively. One example of this model in action came when Hollywood writers used collective bargaining to demand AI guardrails in their 2023 contracts.

Artists hold leverage that they can wield if they organize strategically along material lines. What if illustrators unionized to mandate human oversight in AI-assisted comics? What if musicians demanded royalties each time their style trains a model? It’s the same solidarity that forced studios to credit VFX artists after decades of erasure.

Moralizing about AI’s “soullessness” is a dead end. Capitalists don’t care about souls; they care about surplus value. Every worker co-op training its own model, every indie game studio bypassing proprietary tools, every worker using open AI tools to have their voice heard chips away at corporate control. It’s the materialist task of redistributing power. Marx didn’t weep for the cottage industries steam engines destroyed. He advocated for the socialization of the means of production. The goal of stopping AI is not a realistic one, but we can ensure its dividends flow to the many, not the few.

The oligarchs aren’t debating AI ethics; they’re investing billions to own and control this technology. Our choice is to cower in nostalgia or to fight for a stake in our future. Every open-source model trained, every worker collective formed, every contract renegotiated is a step forward. AI won’t be stopped any more than the printing press or the internet before it. The machines aren’t the enemy. The owners are.

  • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP

    While you can easily buy pen and paper, the problem is with having the time to invest in learning how to draw. AI is a more accessible tool that makes it possible for anyone to produce something decent looking based on an idea they have in their head. Also, the video I linked in the post shows how a group of professional artists trained AI on the style of one of the artists, and then used it as a style transfer tool to make it easier for the rest of the artists to collaborate and keep a consistent style. And that’s precisely the beauty of having open source models because you can train them on your own style.

    • amemorablename@lemmygrad.ml

      The example you give of usefulness is valid. But I think it’s somewhat secondary to what I’m saying about the current state of AI (in particular, I’m focusing on image generation AI in this response). For the most part (if we’re talking the best of models and interfaces) anyone can easily produce something decent looking by conventional artistic standards, but this doesn’t necessarily get them any closer to producing the idea in their head. I hang out around a particular service, one that is at this stage one of the best in image generation, and though I would estimate the quantity of people coming with questions about how to do stuff has lowered as the models have improved, there are for sure still occasions where something that is relatively simple in concept for a human is not easy to get out of the model in question, if it can be done at all. There are still some pretty significant hurdles on two points: one being datasetting and, much like a human, an AI not being able to do something it was never shown how to do (and the difficulties in amassing and labeling data to cover as much as possible), and the other being interfacing with the model - much of the process is still stuck in Text to Image, which is also often still heavily limited to English for the best results, if other languages are even accommodated at all.

      So yes, there are cases where it can help, I’m not contesting that. But it’s not always a straightforward time-saver either. In practice, from everything I have seen, it is a lot more messy and limited than it is sometimes made out to be on the surface. In order to properly grapple with how AI would be usefully integrated into society, not just pushed on people by capitalists or leveraged by a few professional artists here and there, we need to be clear on the mechanics of it, both in potential and in realistic messiness.

      I know some of these things will get improved upon as time goes on, but those will be improvements through research and acknowledgment of limitations, not because of inevitability; we don’t really know yet what the full limits of the architecture are or when fundamentally different architectures will be needed to go beyond current limitations. I think there is still a lot that can be improved upon via interfacing alone, without even thinking about the models themselves, but either way, my general point is that the jank is real and it does not appear to be on track to go anywhere any time soon. So we need to be careful not to oversell how capable or easy to use these things are.

      There’s also just the fact that a phrase like “democratizing creativity” implies you can’t be creative without really specific tools or skillsets, and that’s a very specific view of what creativity is, which is where some of my ick at the phrasing comes from. Another view of creativity is that it’s something inherent to being human and that tools and skillsets give form to particular expressions of creativity, but are not themselves creativity. Furthermore, I’d argue that part of the reason a Juilliard-trained musician with decades of experience is uplifted to the degree they are has some ties to class structure; that it’s not just about merit per se in the sense of producing works that people like, but about defining a particular path to being considered “great” and then artificially limiting the number of people who can get through the gates. As communists, I would think part of our mission is to dismantle those sorts of notions, to make it more about merit and social good, and if you define it more as merit, you start getting into a hazier realm where you don’t necessarily have to spend 100 hours drawing a detailed portrait and going through all the social steps to “prove” you are a “good artist”, you just need to make stuff that adds value to people’s lives or contributes to social well-being in some way. In this sense, I think what image generation is doing is more so challenging the notion of “it looks high effort, so it must have been done by a prestigious artist”. But if image generation ends up dismantling this narrative within capitalism, I think we should expect that capitalism will try to shift the goalposts and find another means of saying what defines the best art, while still gatekeeping, if for no other reason than how closely linked the arts are to propaganda and narrative control over mainstream dissemination of the arts.

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP

        I’d argue that we have lots of examples of people producing ideas in their heads with AI now. Here’s what I’d consider a perfect example that I saw today:

        It does an excellent job of getting the point across. Before gen AI, only a select few people with artistic skill would’ve been able to make this sort of agitprop.

        I very much agree that a lot more can be done by exploring better interfaces for these tools without even changing how the underlying models work. The workflows will likely continue getting more sophisticated as well. Tools like ControlNets in ComfyUI are a good example of something that requires skill and learning to use effectively, but gives the user far more control than just typing text into a prompt. I can see these sorts of tools being used by professionals in much the same way that 3D modelling tools are used today.

        The battle over the narrative is also an important point you bring up. We should not let capitalists define for us what art is, or what is desirable or valuable. Shaping how people perceive art and value is ultimately a big part of the capitalist narrative.

        • amemorablename@lemmygrad.ml

          Fair points. Also, I’ve noticed some people seem to be sour on the idea of it being used for propaganda and to that I would say: I’ve never seen someone complain about memes being used for such and memes are at the point you can often just type in some text and click save using a generator. So I don’t quite understand what’s going on there. Cause memes seem very similar in that you’re remixing from a template someone already did and typing some text, and that’s it.