theluddite

joined 1 year ago
[–] theluddite@lemmy.ml 15 points 1 month ago (5 children)

This app fundamentally misunderstands the problem. Your friend sets you up on a date. Are you going to treat that person horribly? Of course not. Why? First and foremost, because you're not a dick. Your date is a human being who, like you, is worthy and deserving of basic respect and decency. Second, because your mutual friendship holds you accountable. Relationships in communities form an overlapping structure in which they mutually affect one another. Accountability is an emergent property of that structure, not something that can be implemented by an app. When you meet people via an app, you strip away both the humanity and the community, and with them goes individual and community accountability.

I've written about this tension before: As we use computers more and more to mediate human relationships, we'll increasingly find that being human and doing human things is actually too complicated to be legible to computers, which need everything spelled out in mathematically precise detail. Human relationships, like dating, are particularly complicated, so to make them legible to computers, you necessarily lose some of the humanity.

Companies that try to whack-a-mole patch the problems with that will find that their patches suffer from the same problem: Their accountability structure is a flat, shallow version of genuine human accountability, and will itself produce pathological behavior. The problem is recursive.

[–] theluddite@lemmy.ml 3 points 2 months ago

That would be a really fun project! It almost reads like the setup for a homework problem for a class on chaos and nonlinear dynamics. I bet that as the model increasingly takes into account other people's (supposed?) preferences, you get qualitative breaks in behavior.
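
To make that concrete, here's a minimal toy sketch in Python (entirely my own illustration, not from the article: it uses the textbook logistic map as a stand-in for the feedback loop, and the knob `k` plus its interpretation are invented for the example, calibrated to nothing real):

```python
# Toy sketch of the "homework problem" above. x is the current
# "consensus preference," and k is a crude proxy for how strongly each
# round of stated preferences feeds back on everyone's perception of
# what others (supposedly) want.

def settle(k: float, x0: float = 0.3, burn: int = 500, keep: int = 8) -> list[float]:
    """Iterate x <- k * x * (1 - x), discard transients, return the attractor."""
    x = x0
    for _ in range(burn):
        x = k * x * (1 - x)
    attractor = []
    for _ in range(keep):
        x = k * x * (1 - x)
        attractor.append(round(x, 4))
    return attractor

for k in (2.8, 3.2, 3.5, 3.9):
    print(k, settle(k))
# k = 2.8 -> one repeated value: a stable consensus
# k = 3.2 -> two alternating values: fad cycles
# k = 3.5 -> period four
# k = 3.9 -> chaos: no stable "what everyone wants" exists at all
```

The details of the map don't matter; the point is that smoothly turning up a single feedback knob produces exactly those qualitative breaks, the period-doubling cascade from any first course in nonlinear dynamics.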

Stuff like this is why I come back to postmodernists like Baudrillard and Debord time and time again. These kinds of second- (or Nth-) order "news" are an artifact of the media's constant and ever-accelerating commodification of reality. They just pile on more and more and more until we struggle to find reality through the sheer weight of its representations.

[–] theluddite@lemmy.ml 13 points 2 months ago (6 children)

Really liked this articulation that someone shared with me recently:

here's something you need to know about polls and the media: we pay for polls so we can write stories about polls. We're paying for a drumbeat to dance to. This isn't to say polls are unscientific, or false, or misleading: they're generally accurate, even if the content written around marginal noise tends to misrepresent them. It's to remind you that when you're reading about polls, you're watching us hula hoop the ouroboros. Keep an eye out for poll guys boasting about their influence as much as their accuracy. That's when you'll know the rot has reached the root, not that there's anything you can do about it.

[–] theluddite@lemmy.ml 1 points 2 months ago

Journalists actually have very weird and, I would argue, self-serving standards about linking. Let me copy-paste from an email I got from a journalist after I emailed them about relying on my work without actually citing it:

I didn't link directly to your article because I wasn't able to back up some of the claims made independently, which is pretty standard journalistic practice

In my opinion, this is a clever way to legitimize passing off research as your own, which is definitely what they did, up to and including repeating some very minor errors that I made.

I feel similarly about the journalistic ethic of not paying sources. That's a great way to make sure that all your sources are think-tank-funded people who are paid to have opinions that align with their funding, which is exactly what happens. I understand that paying people would introduce challenges, but that's a normal challenge that the rest of us have to deal with every fucking time we hire someone. Journalists love to act like people coming forward claiming they can do X or tell them about Y is some unique problem that they face, when in reality it's just what every single hiring process exists to sort out.

[–] theluddite@lemmy.ml 29 points 2 months ago* (last edited 2 months ago)

I have now read so many "ChatGPT can do X job better than workers" papers, and I don't think I've ever found one that wasn't at least flawed, if not complete bunk, once I went through the actual paper. I wrote about this a year ago, and I've since done the occasional follow-up on specific articles, including an official response, itself now past peer review and awaiting publication, to one of the most dishonest published papers that I've ever read.

That academics are still "benchmarking" ChatGPT like this, a full year after I wrote that, is genuinely astounding to me on so many levels. I don't even have anything left to say about it at this point. At least fewer of them are now purposefully designing their experiments to conclude that AI is awesome, and more are coming to the obvious conclusion that ChatGPT cannot actually replace doctors, because of course it can't.

This is my favorite of these ChatGPT-as-doctor studies to date. It concluded that "GPT-4 ranked higher than the majority of physicians" on their exams. In reality, GPT-4 can't take the exam at all, so the researchers made a special, ChatGPT-friendly version of it for the sole purpose of concluding that ChatGPT is better than humans.

Because GPT models cannot interpret images, questions including imaging analysis, such as those related to ultrasound, electrocardiography, x-ray, magnetic resonance, computed tomography, and positron emission tomography/computed tomography imaging, were excluded.

Just a bunch of serious doctors at serious hospitals showing their whole ass.

[–] theluddite@lemmy.ml 31 points 3 months ago* (last edited 3 months ago) (2 children)

Not directly to your question, but I dislike this NPR article very much.

Mwandjalulu dreamed of becoming a carpenter or electrician as a child. And now he's fulfilling that dream. But that also makes him an exception to the rule. While Gen Z — often described as people born between 1997 and 2012 — is on track to become the most educated generation, fewer young folks are opting for traditionally hands-on jobs in the skilled trade and technical industries.

The entire article rests on a buried classist assumption. Carpenters have just as much reason to study theater, literature, or philosophy as, say, project managers at tech companies (those three examples come from PMs I've worked with). Being educated and being a carpenter are only in tension because of decisions that we've made as a society: having read Plato has as much to do with being a carpenter as it does with being a PM. Conversely, it would be fucking lit if our society had the most educated plumbers and carpenters in the world.

NPR here is treating school as job training, which is, in my opinion, the root problem. Job training is certainly part of school, but school and society writ large have a much deeper relationship: An educated public is necessary for a functioning democracy, and one in five Americans is illiterate. If we want a functioning democracy, then we need to invest in everyone's education for its own sake, rather than treat it as a feature distinguishing the lower classes from the upper ones, and we need to treat blue-collar workers as people who might also wish to be intellectually fulfilled, rather than as a monolithic class with some innate desire to work with their hands and avoid book learning (though people like that must be welcomed too).

Occupations such as auto technician with aging workforces have the U.S. Chamber of Commerce warning of a "massive" shortage of skilled workers in 2023.

This is your regular reminder that the Chamber of Commerce is a private entity that represents capital. Everything it says should be taken with a grain of salt. There's a massive shortage of skilled workers at the rates that businesses are willing to pay, rates which have stagnated for decades while corporate profits have gone up. If you open literally any business and offer candidates enough money, you'll have a line of applicants out the door.

[–] theluddite@lemmy.ml 6 points 4 months ago* (last edited 4 months ago)

This is a frustrating piece. Anyone with even a passing knowledge of history knows that you can't just report what fascist movements say and then fact-check it (which is what WaPo is doing here). JD Vance doesn't give a single shit about workers, and the facts don't matter. It's about aesthetics. The American fascist movement, like all such movements, is interested in appropriating the very real grievances of workers into a spectacle that serves power rather than challenges it. Walter Benjamin calls this the aestheticization of politics.

Fascism attempts to organize the newly proletarianized masses without affecting the property structure which the masses strive to eliminate. Fascism sees its salvation in giving these masses not their right, but instead a chance to express themselves. The masses have a right to change property relations; Fascism seeks to give them an expression while preserving property. The logical result of Fascism is the introduction of aesthetics into political life.

[–] theluddite@lemmy.ml 6 points 4 months ago

This article is a mess. Brief summary of the argument:

  • AI relies on our collective data; therefore, it should be collectively owned.
  • AI is going to transform our lives.
  • AI has meant a lot of things over the years; today it mostly means LLMs.
  • The problems with AI are actually problems with capitalism.
  • Socialist AI could be democratically accountable, compensate the people whose data it uses, etc.
  • Socialists have always held that technology should be liberatory, and we should view AI the same way.
  • Some ideas for how to govern AI.

I think that this argument is sloppily made, but I'm going to read it generously for the purposes of this comment and focus on my single biggest disagreement: It misunderstands why LLMs are such a big deal under capitalism, because it misunderstands the interplay between technology and power. There is no such thing as a technological revolution. Revolutions happen within human institutions, and technologies change what is possible in the ongoing renegotiation of power within them. LLMs appear useful because we live under capitalism and think about technology within a capitalist framework. Their primary use case is to allow capitalists to exert more power over labor.

The author compares LLMs to machines in a factory, but machines produce things, and LLMs produce language. Most jobs involve producing language as a necessary byproduct of human collaboration. As a result, LLMs allow capitalists to discipline labor because they can "do" some enormous percentage of most jobs, if you think about human collaboration in the same way that you think about factories. The problem is that human language is not a modular widget that you can make with a machine. You can't automate away the communication within human collaboration.

So I think that the author makes a dangerous category error when they compare LLMs to factory machines. That is how capitalists want us to think of LLMs, because it allows them to wield LLMs as a threat to push wages down. That is their primary use case. Once you remove the capitalist/labor power dynamic, LLMs lose much of their appeal and become just another example of for-profit companies mining public goods for private profit. They're not a particularly special case, so I don't think they require the special treatment that the author lays out, though I agree that companies shouldn't be allowed to do that.

I have a lot of other problems with this article, which can be found in my previous writing, if that interests you.

[–] theluddite@lemmy.ml 118 points 4 months ago* (last edited 4 months ago) (6 children)

Investment giant Goldman Sachs published a research paper

Goldman Sachs researchers also say that

It's not a research paper; it's a report. They're not researchers; they're analysts at a bank. This may seem like a nit-pick, but journalists need to (re-)learn to carefully distinguish between the thing that scientists do and corporate R&D, even though we sometimes use the word "research" for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI "research" that's just them poking at their own product, dressed up in a science-lookin' paper, leads to an avalanche of free press from lazy, credulous morons gorging themselves on the hype. I've written about this problem a lot, for example in this post, which is about how Google wrote a so-called paper about how their LLM compares to doctors, only for the press to uncritically repeat (and embellish) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would've noticed that it's junk science.

[–] theluddite@lemmy.ml 5 points 4 months ago

Happy to be of service!

I don’t know enough about their past to comment on that.

I highly recommend Herman and Chomsky's book, Manufacturing Consent. It's about exactly this.

[–] theluddite@lemmy.ml 5 points 4 months ago (1 children)

But at least the way I read it, Bennet is saying that the NYT has a duty to help both sides understand each other, and the way to do that would be by giving a voice to the right and centrists without necessarily endorsing any faction

I think that this is a superficially pleasing argument but actually quite a dangerous one. It ignores that the NYT is itself quite powerful. Anything printed in the NYT is instantly given credibility, so it's actually impossible for them to stay objective and not take sides. Sending an army out to quash protesters gets normalized when it appears in the NYT, which is a point for that side of the argument, and the NYT can't publish every side of every issue; there's not enough space on the whole internet for that. This is why we have that saying I mentioned in the other comment, that journalists should afflict the comfortable and comfort the afflicted, or that journalists ought to speak truth to power. Since it's simply impractical to be truly neutral, in the sense of publishing every side of every issue, a responsible journalist considers the power dynamics to decide which sides need airing.

The author of the OP argues that, because Cotton is already a very influential person, he ought to be published in the NYT, but I think that the exact opposite is true. Because Cotton is already an influential person, he has plenty of places that he can speak, and when the NYT platforms his view that powerful people like him should oppress those beneath them, they do a disservice to their society by implicitly endorsing that as something more worthy of publishing than the infinite other things that they could publish. For literally all of history, it's been easy to hear the opinions of those who wield violence to suppress dissent. Journalism is special only when it goes against power.
