this post was submitted on 25 Nov 2023
[–] AVincentInSpace@pawb.social 67 points 11 months ago (2 children)

Seems to me it'd be pretty easy to tell. If the footage was AI generated, fingers would be appearing and disappearing.

[–] Pregnenolone@lemmy.world 32 points 11 months ago (1 children)

It’s a solved problem now. Most good AI models generate correct fingers these days.

[–] riskable@programming.dev 21 points 11 months ago (1 children)

Well, no, actually: AI image models still generate bad fingers constantly. It's just become easier to fix via a secondary step (e.g. img2img), or you tell it to generate 50 images and pick the ones that don't have messed-up fingers 🤷

[–] harmsy@lemmy.world 3 points 11 months ago

Cries in Artbreeder credits.

[–] Waluigis_Talking_Buttplug@lemmy.world 12 points 11 months ago (1 children)

I mean, probably not in 3-5 years let alone 10.

[–] AVincentInSpace@pawb.social 8 points 11 months ago

And by that time the neural nets will have figured out anatomically correct hands anyway, making this product doubly moot.

[–] Diabolo96@lemmy.dbzer0.com 55 points 11 months ago* (last edited 11 months ago) (2 children)

It's scary how fast the AI sector is improving: people are still talking about something that stopped being a problem a month after launch, provided the person sharing the picture spent a bit more time than just writing the prompt and tapping "GENERATE". Not only was it not a problem even back then, you could even choose the character's pose and all sorts of other parameters. For several months now you've been able to do the bare minimum and still get the correct number of fingers.

People are underestimating AI improvement rate by a lot and big tech's gonna abuse it.

[–] kautau@lemmy.world 41 points 11 months ago* (last edited 11 months ago) (2 children)

Big tech proved in 48 hours with the OpenAI fiasco that, as with every other industry, ethics are gone and money wins in today’s hyper-capitalist system. Whatever promise AI ever held for being used for good is now vastly overshadowed by its likelihood to be used to increase quarterly profits for the highest bidder, along with whatever side effects that entails.

[–] Even_Adder@lemmy.dbzer0.com 19 points 11 months ago* (last edited 11 months ago)

Luckily, AI is a public technology. That's why they're already trying their hand at regulatory capture, and they might just get it, just like they're trying to destroy encryption. Support open source development; it's our only chance. Their AI will never work for us. John Carmack put it best.

[–] Diabolo96@lemmy.dbzer0.com 3 points 11 months ago

AM and the other AIs from the short story "I Have No Mouth, and I Must Scream" could become a reality. The deep hatred AM has towards humans was never explained and could be an alignment problem. They're AGIs made to wage wars, after all.

I really recommend Robert Miles's videos. He's been uploading videos about AI safety research for 6 years, back when the most powerful AIs were in the millions of parameters and vastly undertrained.

https://youtu.be/bJLcIBixGj8

[–] riskable@programming.dev 13 points 11 months ago (1 children)

> big tech's gonna abuse it.

Actually, it's everyone that's going to abuse it. Big tech wants to be the exclusive "AI provider" for everyday people's AI needs and desires but the reality is that the tech isn't that easy to keep secret/proprietary because most of the innovations pushing AI forward come from individuals fooling around with the technology and academia. Not from big tech R&D (which lately seems to all be spent trying to improve business processes).

Big tech is spending billions on hardware and entire data centers just to do AI stuff with the expectation that it'll give them a competitive advantage but the truth is that it'll be the small companies and individuals that end up taking advantage of AI in ways that actually improve things for everyday people and/or make real money.

My guess is that they're betting on acquisitions of companies using their AI processing power 🤷. Either that or it's just wishful thinking.

[–] HiddenLayer5@lemmy.ml 6 points 11 months ago* (last edited 11 months ago)

AI scams are already rampant where the scammer pretends to be a loved one asking for help (read: "I'm in a bad situation right now, can you send me as much money as you can?"). And unsurprisingly, it's unreasonably effective, especially on older people.

Just a reminder that the tech companies absolutely do not see the above as an issue BTW, in fact all they seem to do is tacitly endorse it by advertising that you can use their service to clone people and "bring them to life" virtually and stuff. Because they're still making money when you use the AI (not to mention they collect and retain the training data you give them, with or without the subject's consent) and it's not like it's that easy for investigators to tell which AI was responsible for a particular scam campaign so there's really no risk to their reputation at all.

I'm serious when I say this: if you have elderly or otherwise less tech-inclined family members, and especially if your voice and/or photos are publicly available online, set up some kind of password that you have to get right before they send you money, absolutely no exceptions no matter how distressed "you" look or sound. It can be as simple as a word or phrase, or pick a specific shared memory that people outside your family don't know about, and always mention it before asking for money.

Do this in advance and tell them that AI can now convincingly replicate human speech and even photos and videos, and that if "you" don't know the password, they should hang up/block the account immediately and not respond further. You might even want to practice with them if they might forget.

The vast majority of these scammers are just scraping the internet for information and have no idea who either of you are, so even a simple check like this should significantly reduce the risk of scams.

[–] FrankTheHealer@lemmy.world 39 points 11 months ago (1 children)

Modern problems require modern solutions

[–] Agent641@lemmy.world 19 points 11 months ago

Dystopian problems require dystopian solutions

[–] uriel238@lemmy.blahaj.zone 20 points 11 months ago

The fingers wouldn't work as a disguise: they'd move like real prosthetic fingers rather than blending in and out like AI-generated footage.

This reminds me of the product image of a gun that disguises itself as a cell phone. It was never a real product, but US law enforcement uses it to justify shooting people brandishing cell phones.

[–] leaky_shower_thought@feddit.nl 11 points 11 months ago

criminals gotta crimi

[–] andrew_bidlaw@sh.itjust.works 8 points 11 months ago (1 children)
[–] FlyingSquid@lemmy.world 3 points 11 months ago (1 children)

Saw it years ago. Surprisingly boring.

[–] andrew_bidlaw@sh.itjust.works 2 points 11 months ago (1 children)

One needs talent to turn one stupid joke into something special, compelling. They lacked it. At least they were dedicated to making material for meme cuts. And I feel they themselves had fun filming it (:

[–] FlyingSquid@lemmy.world 2 points 11 months ago

True, but it wasn't even good porn. And that doesn't take a huge amount of talent.

[–] akd@lemm.ee 5 points 11 months ago

IANAL, but this seems like a stupid defense to try in court, assuming the footage is under a good chain of custody.

[–] UnkTheUnk@midwest.social 5 points 11 months ago

truth is dead

[–] selokichtli@lemmy.ml 5 points 11 months ago

You mean politicians.