this post was submitted on 18 Jul 2024

Socialism



theluddite@lemmy.ml 6 points 4 months ago

This article is a mess. Brief summary of the argument:

  • AI relies on our collective data, and therefore should be collectively owned.
  • AI is going to transform our lives.
  • "AI" has meant many things over the years; today it mostly means LLMs.
  • The problems with AI are actually problems with capitalism.
  • A socialist AI could be democratically accountable, compensate the people whose data it uses, etc.
  • Socialists have always held that technology should be liberatory, and we should view AI the same way.
  • Some ideas for how to govern AI.

I think this argument is sloppily made, but I'm going to read it generously for the purposes of this comment and focus on my single biggest disagreement: it misunderstands why LLMs are such a big deal under capitalism, because it misunderstands the interplay between technology and power. There is no such thing as a technological revolution. Revolutions happen within human institutions, and technologies change what is possible in the ongoing renegotiation of power within them. LLMs appear useful because we live under capitalism and think about technology within a capitalist framework. Their primary use case is to allow capitalists to exert more power over labor.

The author compares LLMs to machines in a factory, but machines produce things, and LLMs produce language. Most jobs involve producing language as a necessary byproduct of human collaboration. As a result, LLMs allow capitalists to discipline labor, because they can "do" an enormous share of most jobs, if you think about human collaboration in the same way that you think about factories. The problem is that human language is not a modular widget that you can make with a machine. You can't automate away the communication within human collaboration.

So, I think the author makes a dangerous category error when they compare LLMs to factory machines. That is how capitalists want us to think of LLMs, because it allows them to wield LLMs as a threat to push wages down. That is their primary use case. Once you remove the capitalist/labor power dynamic, LLMs lose much of their appeal and become just another example of for-profit companies mining public goods for private profit. They're not a particularly special case, so I don't think they require the special treatment the author lays out, but I agree that companies shouldn't be allowed to do that.

I have a lot of other problems with this article, which can be found in my previous writing, if that interests you: