OpenAI's Mira Murati: "some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place." And yet you stole everything from creative people who provided free text, images, forum answers, etc. To date, your company has refused to give any of them credit. Rich people truly live in their bubble and have zero sympathy for their fellow humans or their livelihoods.

[–] MudMan@fedia.io 4 points 4 months ago* (last edited 4 months ago) (1 children)

OK, so one caveat and one outright disagreement there.

The caveat is that she herself points out that nobody knows whether the jobs created will outnumber the jobs destroyed, or perhaps just be even and result in higher quality jobs. She points out there is no rigorous research on this, and she's not wrong. There's mostly either panic or giddy, greedy excitement.

The disagreement is that no, AI won't destroy the jobs it's learning from. Absolutely no way. It's nowhere near good enough for that. Weirdly, Murati is way more realistic about this than the average critic, who seems to have bought into the average techbro's hype almost completely.

Murati's point is that you can only replace jobs that are entirely repetitive. You can perhaps retopologize a mesh, code a loop, or marginally improve on the current customer service bots.

The moment there is a decision to be made, an aesthetic choice, or a bit of nuance, you need a human. We have no proof that you will not need a human or that AI will get better and fill that blank. Technology doesn't scale linearly.

Now, I concede that only applies if you want the quality of the product to stay consistent. We've all seen places where they don't give a crap about that, so listicle peddlers now have one guy proofreading reams of AI-generated garbage. And we've all noticed how bad that output is. And you're not wrong that the poor guy churning those out before AI did need that paycheck and will need a new job. But if anything, that's a good argument for consuming media that is... you know, good? From that perspective I almost see the "that job shouldn't have existed" point, honestly.

[–] octopus_ink@lemmy.ml 5 points 4 months ago (1 children)

The caveat is that she herself points out that nobody knows whether the jobs created will outnumber the jobs destroyed, or perhaps just be even and result in higher quality jobs. She points out there is no rigorous research on this, and she’s not wrong. There’s mostly either panic or giddy, greedy excitement.

Even if we take as settled the idea that more jobs will exist in aggregate, I'm doubtful there's a likely path for most of the first wave (at least) of people whose jobs are destroyed into one of those jobs "created" by AI. I have nothing to back this up but my gut, but in this case I feel pretty good about that assertion. My point is that their personal tragedy at losing their job is, in most cases, not going to be alleviated by the new jobs created by this advancement.

We have no proof that you will not need a human or that AI will get better and fill that blank. Technology doesn’t scale linearly.

I've seen recent AI porn images, and I saw what DeepDream was doing a few years ago. Based on that, I don't see a reason to think we can't expect it to get better. 🙂 I also acknowledge that these may be apples and oranges even more than I suspect they are.

As someone who works in IT (though, as I'm sure you can tell, I have no expertise whatsoever in machine learning), I still tend to strongly agree with this statement from @maegul@lemmy.ml:

On the other hand, the tech industry’s overriding presumption that disruption by tech is a natural good and that they’re correctly placed as the actuators of that “good” really needs a lot more mainstream push back.

[–] MudMan@fedia.io 1 points 4 months ago (2 children)

Every industrial transition generates that, though. Forget the Industrial Revolution; these people love to be compared to that. Think of the first transition to data-driven businesses or the gig economy. Yeah, there's a chunk of people caught in the middle who struggle to shift to the new model in time. That's why you need strong safety nets to help people transition to new industries or at least to give them a dignified retirement out of the workforce. That's neither here nor there, if it's not AI it'll be the next thing.

About the linear increase path: that reasoning is the same old Moore's law trap. Every line going up keeps going up if you keep drawing it with the same slope forever. In nature and in economics, lines going up tend to flatten out again at some point. The uncertainty is whether this line flattens out at "passable chatbots you can't really trust" or goes on to the next step after that. Given what is out there about the pace of improvement and so on, I'd say we're probably close to progress becoming incremental, but I don't think anybody knows for sure yet.

And to be perfectly clear, this is not the same as saying that all tech disruption is good. Honestly, I don't think tech disruption has any morality at all. Tech is tech. It defines a framework for enterprise, labor, and economics. Every framework needs regulation and support to make it work acceptably, because every framework has inequalities and misbehaviors. You can't regulate data capitalism the way you did commodities capitalism, and that needed a different framework than agrarian societies did, and so on. Genies don't get put back in bottles, you just learn to regulate and manage the world they leave behind when they come out. And, if you catch it soon enough, maybe you get to it in time to ask for one wish that isn't just some rich guy's wet dream.

[–] nickwitha_k@lemmy.sdf.org 4 points 4 months ago (1 children)

Think of the first transition to data-driven businesses or the gig economy.

Just a clarification: the "gig economy" was not "new" in any way; it just used new technology to skirt labor laws and find loopholes in regulations in order to claw back profits that had been "lost" to things like pensions and health coverage.

[–] MudMan@fedia.io 2 points 4 months ago (1 children)

Well, yeah, that's what I'm talking about here, specifically. There was an application of technology that bypassed regulations put in place to manage a previous iteration of that technology, and there was a period of lawlessness that then needed new regulation. The solutions were different in different places: some banned the practice, some treated gig workers as employees, some as contractors, and some made custom legislation.

But ultimately the new framework needed regulation just like the old framework did. The idea that the old version was inherently more protected is an illusion created by the fact that we were born after common-sense guardrails were built for that version of things.

AI is the same. It changes some things, and we're gonna need new tools to deal with the things it changes. Not because it's worse, but because it's the same thing in a new wrapper.

[–] nickwitha_k@lemmy.sdf.org 2 points 4 months ago

Thank you for clarifying. Definitely agree on this. Especially with regards to the perceived guardrails.

[–] octopus_ink@lemmy.ml 2 points 4 months ago (1 children)

That’s why you need strong safety nets to help people transition to new industries or at least to give them a dignified retirement out of the workforce. That’s neither here nor there, if it’s not AI it’ll be the next thing.

I agree with most of what you wrote in this paragraph, but we have no such strong safety nets. I don't think the fact that it has happened previously is justification for creating those circumstances again now (or in the future) without concern for how it impacts people. We're supposed to be getting better as time goes by. (Not that we are by many other metrics I can see on a daily basis, but as you say, that's another conversation.)

Genies don’t get put back in bottles, you just learn to regulate and manage the world they leave behind when they come out. And, if you catch it soon enough, maybe you get to it in time to ask for one wish that isn’t just some rich guy’s wet dream.

I also agree with this.

But I find there is plenty of justification to push back and try to slow the proliferation of AI in certain areas while our laws and morality try to catch up.

[–] MudMan@fedia.io 2 points 4 months ago (1 children)

Your "we" and my "we" are probably not the same, I'm afraid. I'm not shocked that the difference in context would result in a difference of perception, but I'd argue that you guys would need an overhaul on the regulations and safety nets thing regardless.

[–] octopus_ink@lemmy.ml 3 points 4 months ago* (last edited 4 months ago)

Fair point, this ol' nation needs a new set of spark plugs and a valve job, at a minimum. :)

Edit: DAMMIT how am I a moderator again @VerbFlow@lemmy.world? Removing myself again now.