this post was submitted on 13 Sep 2024
21 points (64.8% liked)

Technology

all 19 comments
[–] SkybreakerEngineer@lemmy.world 20 points 2 months ago (1 children)

Given that Alex Jones has "interviewed" ChatGPT on air twice now, I'm going to say no.

[–] Telorand@reddthat.com 1 points 2 months ago

I mean, Alex Jones has more skin in the grift than most conspiracy theorists, so he's not likely to do a 180 quickly, if at all. Also, it seems like he's been drunk more often on the latest episodes, so maybe he's having an existential crisis triggered by being fact-checked in real time by a robot.

We can't know what his internal state is, but I do agree that it does not seem to have slowed his pace at all on the surface.

[–] nullboi@lemmy.world 6 points 2 months ago

So many of the conspiracy theories I've heard in the past year or so involve AI in some way.

Yesterday a friend and I were talking and he said the government was using AI to hack his brain.

I don't think a chat bot is going to help that situation.

[–] Fizz@lemmy.nz 6 points 2 months ago

No, AI can't, because no one believes a word they say. There are so many guardrails in place that speaking to AI chatbots feels like talking to corporate HR.

[–] CondensedPossum@lemmy.world 5 points 2 months ago* (last edited 2 months ago) (1 children)

Pretty funny to posit that an LLM chatbot ought to talk us out of conspiratorial thinking while running on a corporate GPU farm absolutely BLASTING through electricity and copyright and IP violations because it's legally convenient for the powerful. Please post more thought-provoking unreasonable propaganda.

[–] Deceptichum@quokk.au 19 points 2 months ago* (last edited 2 months ago) (2 children)

Huh that’s funny, because I run a local LLM even on my laptop.
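For anyone wondering what "running a local LLM" actually looks like, here's a minimal sketch using the llama-cpp-python bindings; the model path is a placeholder for whatever GGUF file you've downloaded, and the prompt is just an example. Everything runs on your own hardware, with no account and no API calls.

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

# Placeholder path: point this at any GGUF model file you have on disk.
llm = Llama(
    model_path="./models/example-7b.gguf",
    n_ctx=2048,      # context window size
    verbose=False,
)

# Generation happens entirely locally; nothing is sent to a remote service.
result = llm(
    "Q: Is the moon landing a hoax? Give a short, evidence-based answer.\nA:",
    max_tokens=128,
    stop=["Q:"],     # stop before the model invents a new question
)

print(result["choices"][0]["text"].strip())
```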

And fuck yes, I love IP violations. Makes me want to go pirate some media and draw fan art.

Please post some more ignorant rage.

[–] Womble@lemmy.world 7 points 2 months ago* (last edited 2 months ago)

It's wild how some people's blind hate of gen AI has got them thinking "corporate control of culture is good, actually".

[–] WereCat@lemmy.world 1 points 2 months ago (1 children)

Have you trained that LLM?

[–] Deceptichum@quokk.au 1 points 2 months ago (1 children)
[–] WereCat@lemmy.world 0 points 2 months ago (1 children)

Because if you did not, then it doesn't matter if you run it locally.

[–] Deceptichum@quokk.au 4 points 2 months ago (1 children)

Uh yes it does.

I’ve let the corporations spend the time, money, and resources to train a model.

They get zero benefit when I run it locally. I get all the benefit.

[–] WereCat@lemmy.world 0 points 2 months ago

The point I'm trying to make, going back to your first response to CondensedPossum, is that you're still running a corporate LLM, biases and all.

[–] yesman@lemmy.world 4 points 2 months ago

If the AI wanted to talk me out of conspiracy theories, why don't they just use the brain signals to control us into thinking that way? Did the microwaves from the circuits behind the walls all go out of service all of a sudden?

This is just classic Silicon Valley trying to "innovate", when their real plan is to muscle CIA and FBI work out to non-union contractors.

[–] tal@lemmy.today 3 points 2 months ago* (last edited 2 months ago)

I guess this is all part of the social-science side of chatbots and something to keep an eye on, and folks have to start somewhere... but I kind of feel that the technology isn't really at the point where teaching the general public with a chatbot is an ideal solution.

[–] nyan@lemmy.cafe 2 points 2 months ago* (last edited 2 months ago)

AI is a conspiracy theory—companies are just hiring people in lower-income countries to impersonate machines!

(/s, of course, but with just enough truth to it that there's probably someone somewhere out there who thinks the above statement is plausible.)

[–] dsilverz@thelemmy.club 1 points 2 months ago* (last edited 2 months ago)

Interestingly enough, there's an AI experiment focused on debunking (or trying to debunk) conspiracy theories. The article was posted here on !technology@lemmy.world

Edit: the cover image of the "Can AI talk us out of conspiracy theory rabbit holes?" article misleadingly tries to relate conspiracy theories to occult, pagan, and esoteric concepts, using symbols you find in the esoteric field (such as the eyed hand, alchemical symbols for planets and stars, etc.). I'm a pagan myself. Religious intolerance is a thing that harms minority religions, and the article sadly helps to spread this intolerance.

The occult, pagan, and esoteric have nothing to do with conspiracy theories: they're belief systems, religions, spiritual practices and views. Religions such as Luciferianism and Wicca are often attacked by Christians (with moralistic speech such as "you worship Satan, you worship demons, you're evil, repent"; let's not forget what the church did to "witches" some centuries ago). I'm not attacking Christianity here (I was a Christian once), but it's a reality: pagan beliefs, such as mine (I'm somewhat Luciferian and Thelemite in a syncretic way), are often attacked, and such a scientific article does harm to pagan beliefs. Pagans don't spread conspiracy theories.

[–] captainlezbian@lemmy.world 1 points 2 months ago

Probably not, given our loved ones often can't.

[–] Ilovethebomb@lemm.ee 0 points 2 months ago

This is the first time in a long time I've heard of a use case for AI that is genuinely useful.

It's a job very few people will want to do, and it can do it as well as, if not better than, a human.

I wish them luck.