this post was submitted on 26 Aug 2024
17 points (100.0% liked)

TechTakes

[–] froztbyte@awful.systems 7 points 3 months ago (1 children)

have you ever run into the term “learned helplessness”? it may provide some interesting reading material for you

(just because samai and friends all pinky promise that this is totally 170% the future doesn’t actually mean they’re right. this is trivially argued too: their shit has consistently failed to deliver on promises for years, and has demonstrated no viable path to reaching that delivery. thus: their promises are as worthless as the flashy demos)

[–] ovid@fosstodon.org -4 points 3 months ago (3 children)

@froztbyte Given that I am currently working with GenAI every day and have been for a while, I'm going to have to disagree with you about "failed to deliver on promises" and "worthless."

There are definitely serious problems with GenAI, but actually being useful isn't one of them.

[–] dgerard@awful.systems 8 points 3 months ago (1 children)

for those who can't be bothered tracing down the thread, Curtis' slam dunk example of GenAI usefulness turns out to be a searchish engine

[–] froztbyte@awful.systems 5 points 3 months ago

god I just read that comment (been busy with other stuff this morning after my last post)

I .... I think I sprained my eyes

[–] zogwarg@awful.systems 5 points 3 months ago* (last edited 3 months ago) (1 children)

> There are definitely serious problems with GenAI, but actually being useful isn’t one of them.

You know what? I'd have to agree, actually being useful isn't one of the problems of GenAI. Not being useful very well might be.

[–] ovid@fosstodon.org -4 points 3 months ago (1 children)

@zogwarg OK, my grammar may have been awkward, but you know what I meant.

Meanwhile, those of us working with AI and providing real value will continue to do so.

I wish people would start focusing on the REAL problems with AI and not keep pretending it's just a Markov Chain on steroids.

[–] zogwarg@awful.systems 8 points 3 months ago* (last edited 3 months ago) (1 children)

On a less sneerious note, I would draw distinctions between:

  • Being able to extract value from LLM/GenAI
  • LLM/GenAI being able to sustainably produce value (without simple theft, and without cheaper alternatives being available)

And so far I've really not been convinced of the latter.

[–] ovid@fosstodon.org -2 points 3 months ago (1 children)

@zogwarg

Consider traditional databases, which let you search for strings. Vector databases let you search by meaning.

For one client, someone could search for "videos about cats". With stemming and stop words, that becomes "cat" and the results might be lists of videos about house cats and maybe the unix "cat" command. Tigers, lions, cheetahs? Nope.

A vector database will return tigers/lions/cheetahs because it "knows" they are cats. A much smarter search. I've built that for a client.
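The semantic-search idea described in that comment can be sketched in miniature: store an embedding vector per item and rank items by cosine similarity to the query's vector. Everything below is invented for illustration — the item names, the 3-dimensional "embeddings", and the query vector are made up; a real system would use a trained embedding model producing vectors with hundreds of dimensions, and an actual vector database rather than a Python dict.

```python
import math

# Toy 3-dimensional "embeddings". Real systems use learned vectors with
# hundreds of dimensions; these values are made up for illustration.
EMBEDDINGS = {
    "house cat video":    [0.9, 0.8, 0.1],
    "tiger documentary":  [0.8, 0.9, 0.2],
    "cheetah clip":       [0.7, 0.9, 0.2],
    "unix cat tutorial":  [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def search(query_vec, top_k=3):
    """Rank stored items by similarity to the query vector."""
    ranked = sorted(
        EMBEDDINGS,
        key=lambda name: cosine_similarity(query_vec, EMBEDDINGS[name]),
        reverse=True,
    )
    return ranked[:top_k]

# Hypothetical query vector for "videos about cats": close in direction
# to the feline items, far from the unix command.
results = search([0.85, 0.85, 0.15])
print(results)  # the big-cat items rank in; the unix tutorial does not
```

The point of the sketch is the contrast with string matching: nothing here compares the literal word "cat" against anything — proximity in the vector space is what makes tigers and cheetahs "cats" for search purposes.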

[–] froztbyte@awful.systems 4 points 3 months ago (1 children)

(sub: apologies for non-sneer but I’m curious)

tbh I suspect I know exactly what you reference[0] and there is an extended conversation to be had about that

it doesn’t in any manner eliminate the foundational problems in specificity that many of these have; they still have the massive externalities problem in operation (cost/environmental transfer); and their foundational function still relies on having stripmined the commons and making their operation from that act without attribution

I don’t believe that one can make use of these without acknowledging this. do you agree? and in either case whether you do or don’t, what is the reason for your position?

(separately from this, the promises I handwaved to are the varieties of misrepresentation and lies from openai/google/anthropic/etc. they’re plural, and there’s no reasonable basis to deny any of them, nor to discount their impact)

[0] - as in I think I’ve seen the toots, and have wanted to have that conversation with $person. hard to do out of left field without being a replyguy fuckwit

[–] ovid@fosstodon.org -2 points 3 months ago (1 children)

@froztbyte Yeah, having in-depth discussions is hard on Mastodon. I keep wanting to write a long post about this topic. For me, the big issues are environmental, bias, and ethics.

Transparency is different. I see it in two categories: how it made its decisions and where it got its data. Both are hard problems and I don't want to deny them. I just like to push back on the idea that AI is not providing value. 😃