this post was submitted on 23 Apr 2024
772 points (98.7% liked)

[–] funn@lemy.lol 37 points 6 months ago (2 children)

I don't understand how Lemmy/Mastodon will handle similar problems: spammers crafting fake accounts to post AI-generated comments for promotion.

[–] FeelThePower@lemmy.dbzer0.com 24 points 6 months ago (4 children)

The only thing we reasonably have is security through obscurity. In terms of active users, we're something bigger than a forum but smaller than Reddit. If such a thing were to happen here, mods could probably handle it more easily (like when we had the Japanese-text spammer back then), but if it happened on a larger scale than we have now, it would be harder to deal with.

[–] BarbecueCowboy@kbin.social 15 points 6 months ago (2 children)

mods could probably handle it more easily

I kind of feel like the opposite. For a lot of instances, 'mods' are just a few guys who check in sporadically, whereas larger companies can mobilize full teams in times of crisis. It might take them a bit of time to spin things up, but there are existing processes to handle it.

I think spam might be what kills this.

[–] deweydecibel@lemmy.world 7 points 6 months ago

If a community is so small that the mod team can be that inactive, there's no incentive for a company to put any effort into spamming it like you're suggesting.

And if they do end up getting a shit ton of spam in there, and it sits around for a bit until a moderator checks in, so what? They'll just clean it up and keep going.

I'm not sure why people are so worried about this. It's been possible for bad actors to overrun small communities with automated junk for a very long time, across many different platforms, some of which predate Reddit. It just gets cleaned up and things keep going.

It's not as if some AI-produced garbage getting into your community infects it like a virus that cannot be expelled.

[–] FeelThePower@lemmy.dbzer0.com 2 points 6 months ago

Hmm, good point.

[–] old_machine_breaking_apart@lemmy.dbzer0.com 11 points 6 months ago (2 children)

There's one advantage to the fediverse: we don't have corporations like Reddit manipulating our feeds, censoring what they dislike, and promoting shit. That alone makes using the fediverse worth it for me.

When it comes to problems involving the users themselves, things aren't that different, and there isn't much we can do.

[–] MinFapper@lemmy.world 23 points 6 months ago (2 children)

We don't have corporations manipulating our feeds

yet. Once we have enough users that it's worth their effort to target, the bullshit will absolutely come.

[–] old_machine_breaking_apart@lemmy.dbzer0.com 9 points 6 months ago (1 children)

They can perhaps create instances, pay malicious users, or try some embrace-extend-extinguish approach, but they can't manipulate the code running on the instances we use, so they can't have direct power over it. Or am I missing something? I'm new to the fediverse.

[–] BarbecueCowboy@kbin.social 5 points 6 months ago (1 children)

There's very little to prevent them from just pretending to be average users, and very little preventing someone from signing up a bunch of separate accounts on a bunch of separate instances.

There's no great automated way to tell whether someone is here legitimately.

[–] bitfucker@programming.dev 1 points 6 months ago

Yeah, and that's true for a lot of services. A Sybil attack is indeed quite hard to prevent, since malicious users can blend in with legitimate ones.

[–] bitfucker@programming.dev 3 points 6 months ago

Federation means that if you're federated with a bad instance, sure, you get some BS; otherwise, it's business as usual. Now, making sure there are no paid users or corporate bots is another matter entirely, since that relies on instance moderators.

[–] deweydecibel@lemmy.world 1 points 6 months ago

We don't have corporations like Reddit manipulating our feeds, censoring what they dislike, and promoting shit.

Corporations aren't the only ones with incentives to do that. Reddit was very hands-off for a good long while, but don't expect that same neutral mentality from fediverse admins.

[–] linearchaos@lemmy.world 10 points 6 months ago (1 children)

I think the real danger here is subtlety. What happens when somebody asks for recommendations on a printer, or complains about their printer being bad, and all of a sudden some long-established account recommends a product they've been happy with for years, and it turns out it's just an AI bot shilling for Brother?

[–] deweydecibel@lemmy.world 4 points 6 months ago (1 children)

For one, well-established brands have less incentive to engage in this.

Second, in this example, the account in question being a "long-established user" would seem to indicate you think these spam companies are going to play a long game. They won't. That's too much effort and too expensive. They'll do all of this on the cheap, and it will be very obvious.

This is not some sophisticated infiltration operation with cutting-edge AI. This is just auto-generated spam in a new, upgraded form. We will learn to catch it, like we've learned to catch it before.

[–] linearchaos@lemmy.world 2 points 6 months ago

I mean, it doesn't have to be expensive, and it doesn't have to be particularly cutting-edge either. Throw some credits into an LLM API and have it randomly read and help people out in different groups. Once it reaches some amount of reputation, have it quietly shill for them: pull out posts that contain keywords, have the AI read the posts and figure out whether they're actually about what they sound like, and have it subtly do product placement. None of this is particularly difficult or groundbreaking, but it could help shape our buying habits.
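None of those steps need more than a few dozen lines of glue code. A toy sketch of the pipeline just described, purely for illustration (every name here is invented, and `call_llm` stands in for whichever hosted LLM API the operator pays for):

```python
# Hypothetical sketch of the shill-bot pipeline described above -- not real code
# from any actual bot. call_llm() is a placeholder for a paid LLM API client.

KEYWORDS = {"printer", "router", "laptop"}   # topics the operator cares about
PLACEMENTS = {"printer": "AcmePrint 3000"}   # invented product placements

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted LLM API."""
    raise NotImplementedError

def maybe_reply(title: str, body: str) -> str | None:
    text = f"{title}\n{body}".lower()
    topic = next((k for k in KEYWORDS if k in text), None)
    if topic is None:
        return None  # off-topic: reply helpfully instead, to build reputation
    # First have the model confirm the post is really about the keyword topic...
    verdict = call_llm(f"Answer yes or no: is this post about choosing a {topic}?\n{text}")
    if "yes" not in verdict.lower():
        return None
    # ...then have it write a casual comment that slips in the placement.
    product = PLACEMENTS.get(topic, topic)
    return call_llm(
        f"Write a short, casual forum reply recommending the {product} "
        f"as if you've happily owned one for years. Post:\n{text}"
    )
```

The unsettling part is how little machinery is involved: a keyword filter and two prompts. The scarce resource is the account's accumulated reputation, not the code.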

[–] roguetrick@lemmy.world 4 points 6 months ago

Mostly it seems to be handled here with that URL blacklist automod.
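For context, the core of a URL-blacklist automod really is tiny; a minimal sketch of the idea (the domains and names here are invented, and this is not the actual bot in use):

```python
import re

# Invented blocklist; a real automod would load this from the instance's config.
BLACKLISTED_DOMAINS = {"spam-gadgets.example", "cheap-pills.example"}

URL_RE = re.compile(r"https?://(?:www\.)?([^/\s:]+)", re.IGNORECASE)

def should_remove(comment_body: str) -> bool:
    """Flag a comment if it links to any blacklisted domain."""
    return any(host.lower() in BLACKLISTED_DOMAINS for host in URL_RE.findall(comment_body))
```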

[–] deweydecibel@lemmy.world 1 points 6 months ago* (last edited 6 months ago)

The same way it's handled on Reddit: moderators.

Some will get through and sit for a few days, but eventually the account will make itself obvious and get removed.

It's not exactly difficult to spot these things. If an account spends the majority of its existence on a social media site talking about products, even if it adds some AI-generated bullshit here and there to seem like a regular person, it's still pretty obvious.

If the account seems to show up pretty regularly in threads to suggest the same things, there's an indicator right there.

Hell, you can effectively bait them by making a post asking for suggestions on things.

They also just tend to have pretty predictable styles of speech, and they never fail to post the URL with their suggestion.
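Those tells (mostly product talk, the same suggestions, always a URL attached) translate directly into a crude first-pass heuristic. A toy sketch, with invented fields and an invented threshold, just to make the idea concrete:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    mentions_product: bool  # e.g. matched against a brand/product word list
    contains_url: bool

def shill_score(history: list[Comment]) -> float:
    """Fraction of an account's comments that both talk about a product
    and attach a link -- a rough signal for triage, not a verdict."""
    if not history:
        return 0.0
    hits = sum(1 for c in history if c.mentions_product and c.contains_url)
    return hits / len(history)

# Accounts scoring above some threshold (say 0.5) get flagged for a human mod to review.
```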