this post was submitted on 20 Jul 2023
357 points (97.6% liked)

Programmer Humor


Post funny things about programming here! (Or just rant about your favourite programming language.)

[–] tiny_electron@sh.itjust.works 48 points 1 year ago (2 children)

It is like compression... but backwards ^^

[–] unagi@feddit.nl 36 points 1 year ago
[–] yogthos@lemmy.ml 15 points 1 year ago (2 children)

now that you mention it...

the real question is how long before we just have automated agents sending corporate emails to one another without any human in the loop 🤣

[–] hglman@lemmy.ml 14 points 1 year ago

It's 100% already happening.

[–] Anticorp@lemmy.ml 6 points 1 year ago

Minus 10 years.

[–] tourist@community.destinovate.com 25 points 1 year ago (2 children)

Most corporate communication is unnecessarily fluffy to begin with, because the fluff makes it look like more work was done. Most of the time I don't even understand why I'm explaining something; it feels like the only requirement is to have words on a page.

[–] xantoxis@lemmy.one 12 points 1 year ago (1 children)

Sometimes the only requirement IS to have words on a page. Think about a disaster recovery plan, for example. Now, you probably don't want an LLM to write your disaster recovery plan, but it's a perfect example of something where the main value is that you wrote it down, and now you can be certified as having one.

[–] tourist@community.destinovate.com 7 points 1 year ago (1 children)

I just asked GPT to create a disaster recovery plan for a ransomware attack, and the information it gave actually wasn't wrong or bad. But it's also very generic, and it will rarely, if ever, correctly cover the specifics of your applications or where to click.

[–] xantoxis@lemmy.one 3 points 1 year ago

Right. Again, though, I don't recommend having an LLM do that particular chore for you.

[–] yogthos@lemmy.ml 1 points 1 year ago

there's a whole book on the subject of bullshit jobs, incidentally: https://en.wikipedia.org/wiki/Bullshit_Jobs

[–] adroidBalloon@lemmy.ml 10 points 1 year ago (1 children)

beware! soon, it will be able to turn that long email into a meeting!

[–] Llewellyn@lemmy.ml 3 points 1 year ago (1 children)

And another GPT will participate in it for me. Good.

[–] NakariLexfortaine@lemm.ee 5 points 1 year ago

"Didja hear, Jeff had a heart attack."

"Wait... Jeff was a real person this entire time?"

[–] redcalcium@c.calciumlabs.com 6 points 1 year ago

Something is wrong, why do AIs get to spend all their time writing and painting while we have to go to work every day?

[–] xantoxis@lemmy.one 6 points 1 year ago (1 children)

This is a legitimate use case for LLMs, though.

Not everyone can communicate clearly. Not everyone can summarize well. So the panel on the right is great for the people on the other end, who must read your poorly-communicated thoughts.

At the same time, some things must look like you put careful thought and time into your words. Hence, the panel on the left.

And if people on both sides are using the tool to do this, who's really hurt by that?
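
A minimal sketch of that round trip, assuming the official OpenAI Python client (the model name and both prompts are just placeholders):

```python
# Sketch of the two panels: one side inflates a bullet point into a
# "professional" email, the other side deflates it back down.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(system: str, text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

point = "need the Q3 numbers by Friday"
email = ask("Expand this into a polite, professional corporate email.", point)
summary = ask("Summarize this email as one short bullet point.", email)
print(summary)  # ideally close to the original point; nothing guarantees it
```

Nothing in that loop guarantees the summary matches the original point, which is the catch raised below.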

[–] hglman@lemmy.ml 9 points 1 year ago (1 children)

Yes, but there's a real risk here that either the expansion adds false details or the summary gets it wrong, especially the summary.

[–] xantoxis@lemmy.one 1 points 1 year ago (1 children)

I don't disagree, but most business emails aren't quite that strict.

[–] hglman@lemmy.ml 2 points 1 year ago

It's not about formality; it's about the introduction of error. Less strict communication is more likely to have such errors introduced.

[–] klangcola@reddthat.com 3 points 1 year ago (1 children)

The AI arms race has begun!

Isn't this kinda thing happening already in the recruitment industry?

[–] yogthos@lemmy.ml 1 points 1 year ago

pretty sure stuff like resume screening is done using machine learning nowadays
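
at its simplest it's just text classification; a toy sketch with scikit-learn (all the resumes and labels here are made up):

```python
# Toy resume screener: TF-IDF features + logistic regression.
# Everything in this example (data, labels) is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

resumes = [
    "10 years python, built data pipelines at scale",
    "managed a coffee shop, strong customer service",
    "backend engineer, go and kubernetes experience",
    "recent graduate, coursework in marketing",
]
labels = [1, 0, 1, 0]  # 1 = advance to interview, 0 = reject

screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(resumes, labels)

print(screener.predict(["python developer with kubernetes experience"]))
# -> [1], and every applicant learns to paste the job ad's keywords in
```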