this post was submitted on 22 Dec 2024
521 points (96.1% liked)

Technology

[–] NigelFrobisher@aussie.zone 41 points 3 days ago (6 children)

At a beach restaurant the other night I kept hearing a loud American voice cut across all conversation, going on and on about “AI” and how it would get into all human “workflows” (new buzzword?). His confidence and loudness were only matched by his obvious lack of understanding of how LLMs actually work.

[–] ikidd@lemmy.world 34 points 3 days ago (3 children)

"Confidently incorrect" I think describes a lot of AI aficionados.

[–] wewbull@feddit.uk 15 points 3 days ago

And LLMs themselves.

[–] ameancow@lemmy.world 9 points 3 days ago* (last edited 3 days ago)

I would also add "hopeful delusionals" and "unhinged cultists" to that list of labels.

Seriously, we have people right now making plans for what they're going to do with their lives once Artificial Super Intelligence emerges and changes the entire world into some kind of post-scarcity, Star Trek world where literally everyone is wealthy and nobody has to work. They think this is only a few years away. It's not a tiny number of people either, and they exist on a broad spectrum.

Our species is so desperate for help from beyond, a savior that will change the current status quo. We've been making fantasies and stories to indulge this desire for millennia, and this is just the latest incarnation.

No company on Earth is going to develop any kind of machine or tool that will destabilize the economic markets of our capitalist world. A LOT has to change before anyone will even dream of upending centuries of wealth-building.

[–] ChaoticEntropy@feddit.uk 16 points 3 days ago (3 children)

Some people can only hear "AI means I can pay people less/get rid of them entirely" and stop listening.

[–] Blackmist@feddit.uk 9 points 3 days ago (3 children)

I've noticed that the people most vocal about wanting to use AI get very coy when you ask them what it should actually do.

[–] ameancow@lemmy.world 5 points 3 days ago* (last edited 3 days ago) (2 children)

I also notice the ONLY people who can offer firsthand reports of how it's actually useful in any way are in a very, very narrow niche.

Basically, if you're not a programmer, and even then a very select set of programmers, your life is largely unimpacted by generative AI. (Not counting the millions of students who used it to write papers for them.)

AI is currently a solution in search of a problem. In its current state, it can't really do anything broadly useful. It can make your written work sound more professional and, at the same time, more mediocre. It can generate very convincing pictures if you invest enough time into decoding the best sequence of prompts and literally just get lucky, but it's far too inaccurate and inconsistent to generate, say, a fully illustrated comic book or cartoon, unless you already have a lot of talent in that field. I have tried many times to use AI in my current job to analyze PDF documents and spreadsheets, and it's still completely unable to do work that requires mathematics as well as contextual understanding of what that math represents.

You can have really fun or cool conversations with it, but it's not exactly captivating. It is also wildly inaccurate for daily use. I ask it for help finding songs by describing the lyrics and other clues, and it confidently points me to non-existent albums by hallucinated artists.

I have no doubt it's going to radically change our world in time, but it's going to require a LOT more baking before it's done. Despite how excited a few select people are, nothing is changing overnight. We're going to have a century-long "singularity" and won't realize we've been through it until it's done. As history tends to go.

[–] LenielJerron@lemmy.world 136 points 4 days ago* (last edited 4 days ago) (3 children)

A big issue that a lot of these tech companies seem to have is that they don't understand what people want; they come up with an idea and then shove it into everything. There are services that I have actively stopped using because they started cramming AI into things; for example I stopped dual-booting with Windows and became Linux-only.

AI is legitimately interesting technology which definitely has specialized use-cases, e.g. sorting large amounts of data, or optimizing strategies within highly constrained domains (like chess or Go). However, as a member of the general public, 99% of what people are pushing as AI these days just seems like garbage: bad art, bad translations, and incorrect answers to questions.

I do not understand all the hype around AI. I can understand the danger; people who don't see that it's bad are using it in place of people who know how to do things. But in my teaching, for example, I've never had any issues with students cheating using ChatGPT; I semi-regularly run the problems I assign through ChatGPT, and it gets enough of them wrong that I can't imagine any student would be inclined to use it to cheat multiple times after their first grade comes in. (In this sense, it's actually impressive technology - we've had computers that can do advanced math highly accurately for a while, but we've finally developed one that's worse at math than the average undergrad in a gen-ed class!)

[–] Voroxpete@sh.itjust.works 60 points 4 days ago (25 children)

The answer is that it's all about "growth". The fetishization of shareholders has reached its logical conclusion, and now the only value companies have is in growth. Not profit, not stability, not a reliable customer base or a product people will want. The only thing that matters is if you can make your share price increase faster than the interest on a bond (which is pretty high right now).

To make share price go up like that, you have to do one of two things: show that you're bringing in new customers, or show that you can make your existing customers pay more.

For the big tech companies, there are no new customers left. The whole planet is online. Everyone who wants to use their services is using their services. So they have to find new things to sell instead.

And that's what "AI" looked like it was going to be. LLMs burst onto the scene promising to replace entire industries, entire workforces. Huge new opportunities for growth. Lacking anything else, big tech went in HARD on this, throwing untold billions at partnerships, acquisitions, and infrastructure.

And now they have to show investors that it was worth it. Which means they have to produce metrics that show people are paying for, or might pay for, AI flavoured products. That's why they're shoving it into everything they can. If they put AI in Notepad then they can claim that every time you open Notepad you're "engaging" with one of their AI products. If they put Recall on your PC, every Windows user becomes an AI user. Google can now claim that every search is an AI interaction because of the bad summary that no one reads. The point is to show "engagement", "interest", which they can then use to promise that down the line huge piles of money will fall out of this piñata.

The hype is all artificial. They need to hype these products so that people will pay attention to them, because they need to keep pretending that their massive investments got them in on the ground floor of a trillion dollar industry, and weren't just them setting huge piles of money on fire.

[–] 2pt_perversion@lemmy.world 76 points 4 days ago* (last edited 4 days ago) (24 children)

There is this seeming need from some people to discredit AI that goes overboard. Some friends and family who have never really used LLMs outside of Google search feel compelled to tell me how bad it is.

But generative AIs are really good at tasks I wouldn't have imagined a computer doing just a few years ago. Even if they plateaued right where they are now, it would lead to major shakeups in humanity's current workflow. It's not just hype.

The part that is overhyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies who try to shove AI where it doesn't really fit, like AI-enabled fridges and toasters.

[–] buddascrayon@lemmy.world 70 points 4 days ago (3 children)

The part that is overhyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies who try to shove AI where it doesn't really fit, like AI-enabled fridges and toasters.

This is literally the hype. This is the hype that is dying and needs to die. Because generative AI is a tool with fairly specific uses. But it is being marketed by literally everyone who has it as General AI that can "DO ALL THE THINGS!" which it's not and never will be.

[–] sudneo@lemm.ee 39 points 4 days ago (37 children)

Even if they plateaued in place where they are right now it would lead to major shakeups in humanity's current workflow

Like which one? We've had ChatGPT for two years now, and already quite a lot of (good?) models. Which shakeup do you think is happening or going to happen?

[–] andallthat@lemmy.world 10 points 3 days ago* (last edited 3 days ago)

Goldman Sachs, quote from the article:

“AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.”

Generative AI can indeed do impressive things from a technical standpoint, but not enough revenue has been generated so far to offset the enormous costs. Like other technologies, it might just take time (remember how many billions Amazon burned before turning into a cash-generating machine? And Uber has also just started turning some profit) + a great deal of enshittification once more people and companies are dependent. Or it might just be a bubble.

As humans we're not great at predicting these things, me of course included. My personal prediction? A few companies will make money, especially the ones that start selling AI as a service at increasingly high costs; many others will fail, and both AI enthusiasts and detractors will claim they were right all along.

[–] Eldritch@lemmy.world 23 points 4 days ago (37 children)

Computers have always been good at pattern recognition. This isn't new. LLMs are not a type of actual AI. They are programs capable of recognizing patterns and loosely reproducing them in semi-randomized ways. The reason these so-called generative AI solutions have trouble generating the right number of fingers is not only that they have no idea how many fingers a person is supposed to have; they have no idea what a finger is.

The same goes for code completion. They will just generate something that fills the pattern they're told to look for, whether it's right or wrong, because they have no concept of right or wrong beyond fitting the pattern. Not to mention that we've had code completion software for over a decade at this point. LLMs do it less efficiently and less reliably. The only upside is that they can sometimes recognize and suggest a pattern that the authors of other coding helpers might have missed. Beyond that, such as generating whole blocks of code or even entire programs, you can't even get an LLM to reliably spit out a hello world program.

[–] ssfckdt@lemmy.blahaj.zone 17 points 4 days ago

This is easy to say about the output of AIs... if you don't check their work.

Alas, checking for accuracy these days seems to be considered old fogey stuff.

[–] nroth@lemmy.world 58 points 4 days ago (8 children)

"Built to do my art and writing so I can do my laundry and dishes" -- Embodied agents is where the real value is. The chatbots are just fancy tech demos that folks started selling because people were buying.

[–] bradd@lemmy.world 18 points 4 days ago (7 children)

Eh, my best coworker is an LLM. Full of shit, like the rest of them, but always available and willing to help out.

[–] ssfckdt@lemmy.blahaj.zone 29 points 4 days ago (5 children)

So you're saying we won't have any crowdsourced blockchain Web 2.0 AIs?

[–] razm@sh.itjust.works 12 points 3 days ago

Quantum! Don't forget quantum, you filthy peasant.

[–] osugi_sakae@midwest.social 5 points 3 days ago

Education is one area where GenAI is having a huge impact. Teachers work with text and language all day long. They have too much to do and not enough time to do it. Ideally, for example, they should "differentiate" for EACH and EVERY student. Of course that almost never happens, but second best is to differentiate for specific groups - students with IEPs (special ed), English Learners, maybe advanced / gifted.

More tech aware teachers are now using ChatGPT and friends to help them do this. They are (usually) subject area experts, so they can quickly read through a generated or modified text and fix or remove errors - hallucinations are less (ime) of an issue in this situation. Now, instead of one reading that only a few students can actually understand, they have three at different levels, each with their own DOK questions.

People have started saying "AI won't replace teachers. Teachers who use AI will replace teachers who don't."

Of course, it will be interesting to see what happens when VC funding dries up and the AI companies can't afford to lose money on every single interaction. Like with everything else in US education, better-off districts may be able to afford AI, and less-well-off (aka black/brown/poor) districts may not.
