this post was submitted on 25 Jul 2024
1007 points (97.5% liked)

Technology


The new global study, in partnership with The Upwork Research Institute, interviewed 2,500 global C-suite executives, full-time employees and freelancers. Results show that the optimistic expectations about AI's impact are not aligning with the reality faced by many employees. The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it's hampering productivity and contributing to employee burnout.

top 50 comments
[–] barsquid@lemmy.world 169 points 4 months ago (6 children)

Wow shockingly employing a virtual dumbass who is confidently wrong all the time doesn't help people finish their tasks.

[–] Etterra@lemmy.world 37 points 4 months ago (1 children)

It's like employing a perpetually high idiot, but more productive while also being less useful. Instead of slow medicine you get fast garbage!

[–] FartsWithAnAccent@fedia.io 106 points 4 months ago* (last edited 4 months ago) (13 children)

They tried implementing AI in a few of our systems, and the results were always fucking useless. What we call "AI" can be helpful in some ways, but I'd bet the vast majority of it is bullshit half-assed implementations so companies can claim they're using "AI".

[–] DragonTypeWyvern@midwest.social 33 points 4 months ago (1 children)

The one thing "AI" has improved in my life has been a banking app search function being slightly better.

Oh, and a porn game did okay with it as an art generator, but the creator was still strangely lazy about it. You're telling me you can make infinite free pictures of big tittied goth girls and you only included a few?

[–] MindTraveller@lemmy.ca 30 points 4 months ago (2 children)

Generating multiple pictures of the same character is actually pretty hard. For example, let's say you're making a visual novel with a bunch of anime girls. You spin up your generative AI, and it gives you a great picture of a girl with a good design in a neutral pose. We'll call her Alice. Well, now you need a happy Alice, a sad Alice, a horny Alice, an Alice with her face covered with cum, a nude Alice, and a hyper breast expansion Alice. Getting the AI to recreate Alice, who does not exist in the training data, is going to be very difficult even once.

And all of this is multiplied ten times over if you want granular changes to a character. Let's say you're making a fat fetish game and Alice is supposed to gain weight as the player feeds her. Now you need everything I described, at 10 different weights. You're going to need to be extremely specific with the AI and it's probably going to produce dozens of incorrect pictures for every time it gets it right. Getting it right might just plain be impossible if the AI doesn't understand the assignment well enough.

[–] lvxferre@mander.xyz 84 points 4 months ago (1 children)

Large "language" models decreased my workload for translation. There's a catch though: I choose when to use it, instead of being required to use it even when it doesn't make sense and/or where I know that the output will be shitty.

And, if my guess is correct, those 77% are caused by overexcited decision makers in corporations trying to shove AI into every single step of production.

[–] bitfucker@programming.dev 11 points 4 months ago (5 children)

I've said this in many forums, yet people can't accept that translation is one of the best use cases for LLMs. Even for a language like Japanese. There's a limit for sure, but so is there with human translation, unless you add a lot of extra text to explain the nuance. At that point you need an essay dissecting the entire meaning of something, not just a translation.

[–] GreatAlbatross@feddit.uk 77 points 4 months ago (8 children)

The workload that's starting now, is spotting bad code written by colleagues using AI, and persuading them to re-write it.

"But it works!"

'It pulls in 15 libraries, 2 of which you need to manually install beforehand, to achieve something you can do in 5 lines using this default library'
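A made-up illustration of that pattern (the task and tools here are hypothetical stand-ins, not taken from the thread): a generated answer might have you install a JSON library plus a wrapper script to pretty-print some output, when a stock system can already do it in one line:

```shell
# Pretty-print JSON using only tools preinstalled on most systems,
# nothing extra to install:
echo '{"a":1}' | python3 -m json.tool
```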

[–] JackbyDev@programming.dev 35 points 4 months ago (3 children)

I was trying to find out how to get human-readable timestamps from my shell history. The AI gave me this crazy script. It worked, but it was super slow. Later I learned you could just run `history -i`.
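For reference, the built-in approaches look like this (the zsh flag is the one the commenter found; the bash lines are the rough equivalent, shown as a config sketch):

```shell
# zsh: print history with ISO-8601 timestamps via a built-in flag
#   history -i

# bash has no -i flag; set a timestamp format (e.g. in ~/.bashrc)
# and plain `history` will then show dates:
export HISTTIMEFORMAT='%F %T  '
# example output line:
#   1042  2024-07-25 09:13:05  git status
```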

[–] GreatAlbatross@feddit.uk 20 points 4 months ago (8 children)

Turns out, a lot of the problems in *nix land were solved three decades ago with a single flag on a built-in utility.

[–] andallthat@lemmy.world 14 points 4 months ago (2 children)

TBH those same colleagues were probably just copy/pasting code from the first google result or stackoverflow answer, so arguably AI did make them more productive at what they do

[–] skillissuer@discuss.tchncs.de 15 points 4 months ago

yay!! do more stupid shit faster and with more baseless confidence!

[–] Nobody@lemmy.world 63 points 4 months ago (9 children)

You mean the multi-billion dollar, souped-up autocorrect might not actually be able to replace the human workforce? I am shocked, shocked I say!

Do you think Sam Altman might have… gasp lied to his investors about its capabilities?

[–] cheddar@programming.dev 56 points 4 months ago* (last edited 4 months ago) (2 children)

Me: no way, AI is very helpful, and if it isn't then don't use it

created challenges in achieving the expected productivity gains

achieving the expected productivity gains

Me: oh, that explains the issue.

[–] Bakkoda@sh.itjust.works 23 points 4 months ago (4 children)

It's hilarious to watch it used well and then see human nature just kick in.

We started using some "smart tools" for scheduling manufacturing and it's honestly been really really great and highlighted some shortcomings that we could easily attack and get easy high reward/low risk CAPAs out of.

Company decided to continue using the scheduling setup but not invest in a single opportunity we discovered which includes simple people processes. Took exactly 0 wins. Fuckin amazing.

[–] PanArab@lemm.ee 49 points 4 months ago* (last edited 4 months ago) (2 children)

The trick is to be the one scamming your management with AI.

“The model is still training…”

“We will solve this with Machine Learning”

“The performance is great on my machine but we still need to optimize it for mobile devices”

Ever since my fortune 200 employer did a push for AI, I haven’t worked a day in a week.

[–] ICastFist@programming.dev 11 points 4 months ago

Not working and getting paid? Sounds like you just became a high level manager

[–] Sk1ll_Issue@feddit.nl 31 points 4 months ago (1 children)

The study identifies a disconnect between the high expectations of managers and the actual experiences of employees

Did we really need a study for that?

[–] MonkderVierte@lemmy.ml 26 points 4 months ago (1 children)

The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

[–] TrickDacy@lemmy.world 25 points 4 months ago (13 children)

AI gets used stupidly a lot, but this seems odd. For me, GitHub Copilot has sped up writing code. Hard to say by how much, but it definitely saves me seconds several times per day. It certainly hasn't made my workload heavier...

[–] Cryophilia@lemmy.world 34 points 4 months ago (1 children)

Probably because the vast majority of the workforce does not work in tech but has had these clunky, failure-prone tools foisted on them by tech. Companies are inserting AI into everything, so what used to be a problem that could be solved in 5 steps now takes 6 steps, with the new step being "figure out how to bypass the AI to get to the actual human who can fix my problem".

I've thought for a long time that there are a ton of legitimate business problems out there that could be solved with software. Not with AI. AI isn't necessary, or even helpful, in most of these situations. The problem is that creating meaningful solutions requires the people who write the checks to actually understand some of these problems. I can count on one hand the number of business executives I've met who were actually capable of that.

[–] HakFoo@lemmy.sdf.org 15 points 4 months ago (3 children)

They've got a guy at work whose job title is basically AI Evangelist. This is terrifying in that it's a financial tech firm handling twelve figures a year of business, the last place where people will put up with "plausible bullshit" in their products.

I grudgingly installed the Copilot plugin, but I'm not sure what it can do for me better than a snippet library.

I asked it to generate a test suite for a function as a rudimentary exercise. It was able to identify "yes, there are n return values, so write n test cases" and "you're going to actually have to CALL the function under test", but it was unable to figure out how to build the object being fed in to trigger any of those cases; doing so would require grokking much of the code base. I didn't need to burn half a barrel of oil for that.

I'd be hesitant to trust it with "summarize this obtuse spec document" when half the time said documents are self-contradictory or downright wrong. Again, plausible bullshit isn't suitable.

Maybe the problem is that I'm too close to the specific problem. AI tooling might be better for open-ended or free-association "why not try glue on pizza" type discussions, but when you already know "send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW" having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.

I can see the marketing and sales people love it, maybe customer service too, click one button and take one coherent "here's why it's broken" sentence and turn it into 500 words of flowery says-nothing prose, but I demand better from my machine overlords.

Tell me when Stable Diffusion figures out that "Carrying battleaxe" doesn't mean "katana randomly jutting out from forearms", maybe at that point AI will be good enough for code.

[–] Cosmicomical@lemmy.world 15 points 4 months ago

For anything more that basic autocomplete, copilot has only given me broken code. Not even subtly broken, just stupidly wrong stuff.

[–] iAvicenna@lemmy.world 25 points 4 months ago

because on top of your duties you now have to check whatever the AI is doing in place of the employee it has replaced

[–] alienanimals@lemmy.world 23 points 4 months ago

The billionaire owner class continues to treat everyone like shit. They blame AI and the idiots eat it up.

[–] _sideffect@lemmy.world 22 points 4 months ago (1 children)

Lmao, so instead of ai taking our jobs, it made us MORE jobs.

Thanks, "ai"!

[–] kent_eh@lemmy.ca 18 points 4 months ago

Except it didn't make more jobs, it just made more work for the remaining employees who weren't laid off (because the boss thought the AI could let them have a smaller payroll)

[–] Hackworth@lemmy.world 20 points 4 months ago (34 children)

I have the opposite problem. Gen A.I. has tripled my productivity, but the C-suite here is barely catching up to 2005.

[–] cheese_greater@lemmy.world 25 points 4 months ago (2 children)

Have you tripled your billing/salary? Stop being a scab lol

[–] pineapplelover@lemm.ee 17 points 4 months ago (4 children)

If used correctly, AI can be helpful and can assist in easy and menial tasks

[–] jjjalljs@ttrpg.network 22 points 4 months ago (5 children)

I mean if it's easy you can probably script it with some other tool.

"I have a list of IDs and need to make them links to our internal tool's pages" is easy and doesn't need AI. That's something a product guy was struggling with and I solved in like 30 seconds with a Google sheet and concatenation
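That kind of job is a one-liner in a shell, too. A sketch, with a hypothetical internal-tool URL standing in for the real one:

```shell
# Turn a file of IDs (one per line) into links by simple concatenation.
while read -r id; do
  printf 'https://tool.example.com/items/%s\n' "$id"
done < ids.txt
```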

[–] hswolf@lemmy.world 14 points 4 months ago

It also helps you get a starting point when you don't know how to ask a search engine the right question.

But people misinterpret its usefulness and think it can handle complex, context-heavy problems, which most of the time results in hallucinated crap.

[–] tvbusy@lemmy.dbzer0.com 16 points 4 months ago

This study failed to take into consideration the need to feed information to AI. Companies now prioritize feeding information to AI over actually making it usable for humans. Who cares about analyzing the data? Just give it to AI to figure out. Now data cannot be analyzed by humans? Just ask AI. It can't figure out? Give it more so it can figure it out. Rinse, repeat. This is a race to the bottom where information is useless to humans.

[–] JohnnyH842@lemmy.world 14 points 4 months ago (1 children)

Admittedly I only skimmed the article, but I think one of the major problems with a study like this is how broad "AI" really is. MS Copilot is just Bing search in a different form unless you have it hooked up to your organization's data stores, collaboration platforms, productivity applications, etc., and is not really helpful at all. Lots of companies I speak with are in a pilot phase of Copilot that doesn't show much value, because giving it access to the organization's data is a big security challenge. On the other hand, a chat bot inside a specific product, trained on that product specifically and with access to the data it needs to return valuable answers, can be pretty powerful.

[–] superkret@feddit.org 12 points 3 months ago* (last edited 3 months ago)

The other 23% were replaced by AI (actually, their workload was added to that of the 77%)
