this post was submitted on 31 Jul 2024
317 points (100.0% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] CarbonIceDragon@pawb.social 4 points 3 months ago (5 children)

I mean, while this idea is obviously a stupid one, I have seen some suggestion that an AI could be used to help interpret the brain activity of patients who are capable of thought but not communication, and thus help them communicate with doctors, rather than having doctors try to figure out what they might have said from prior history.

[–] dgerard@awful.systems 25 points 3 months ago

"could" is a word meaning "doesn't"

[–] pyrex@awful.systems 15 points 3 months ago* (last edited 3 months ago) (1 children)

I do not recommend using the word "AI" as if it refers to a single thing that encompasses all possible systems incorporating AI techniques. LLM guys don't distinguish between things that could actually be built and "throwing an LLM at the problem" -- you're treating their lack-of-differentiation as valid and feeding them hype.

[–] CarbonIceDragon@pawb.social 3 points 3 months ago (1 children)

I used a term I've seen used before; I'm not familiar enough with the details of the tech to know what more technical term applies to this kind of device but not to other types, and especially not what term would be generally recognized as referring to it. The hype guys are going to hype themselves up regardless, seeing as that type tends to exist in an echo chamber as far as I can see.

[–] dgerard@awful.systems 4 points 3 months ago

maybe with blockchain,

[–] V0ldek@awful.systems 7 points 3 months ago

🦀 THEY DID NEUROIMAGING ON A DEAD SALMON 🦀

[–] pavnilschanda@lemmy.world 0 points 3 months ago (1 children)

As an autistic person who struggles with communication and organizing my thoughts, I've found LLMs helpful for processing emotions and articulating things. Not perfectly in the way that you'd describe (hence I mostly don't use LLM outputs themselves as replies), but my situation is much better than pre-November 2022.

[–] weirdwriter@tweesecake.social 7 points 3 months ago (1 children)

It is a shame LLMs weren't designed to be a common good for Disabled people, though. We're just a happy use case accident for these companies and AI manufacturers. It's tricky because this could be done just as well, I figure, with specifically designed LLMs instead of generic ones. @pavnilschanda @CarbonIceDragon

[–] pavnilschanda@lemmy.world 5 points 3 months ago* (last edited 3 months ago) (1 children)

There are some efforts toward LLM use for disabled people, such as GoblinTools. And you're very right that disabled people benefiting from LLMs is a happy use case accident. With that being the reality, it's frustrating how so many people who blindly defend AI use disabled people as a shield against ethical concerns. Tech companies themselves like to use us to make themselves look good; see the "disability dongle" concept as a prime example.

[–] weirdwriter@tweesecake.social 5 points 3 months ago

Yep! Very familiar! I actually wrote about LLMs and blindness, as an example, here: https://robertkingett.com/posts/6593/ @pavnilschanda