this post was submitted on 25 Aug 2023
38 points (100.0% liked)

cybersecurity

During the last two days it seems we have been "bombarded" with advertisement bots.

I found it curious that the advertisements are correctly targeted at sysadmins and security professionals. They also have somewhat believable biographies (even if they are a little on the nose), suggesting hand-crafted accounts.

Something they all have in common is their instance (discuss.tchncs.de) and that they have a "bachelor's degree in computer science".

This is not the first time I've seen adbots on Lemmy, but it's the first time I've seen them on infosec.

Does anyone have any insight into the world of adbots they could share? I find myself increasingly curious about what goes on behind the curtain.

[–] jet@hackertalks.com 16 points 1 year ago (3 children)

I can't speak specifically to the infosec bots, but I suspect it has something to do with all of the Lemmy instances mirroring every post. That could add a lot of SEO weight for various websites, so if they can get a post that doesn't get deleted, it's SEO fodder.

[–] Deebster@lemmyrs.org 9 points 1 year ago* (last edited 1 year ago) (2 children)

Seems like Lemmy should add a rel=canonical link when browsing federated communities - this would "solve" the issue (and would be the correct thing to do anyway).
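For anyone unfamiliar, a canonical link is a single tag in the page head that tells search engines which URL is the authoritative copy. A sketch of what a mirrored post page could emit (the URLs here are invented for illustration, not real Lemmy routes):

```html
<!-- Hypothetical: a post mirrored on a federated instance points
     search engines back at the origin instance's copy. -->
<head>
  <link rel="canonical" href="https://origin.example/post/12345">
</head>
```

With that in place, a crawler indexing the same post on ten instances would consolidate ranking signals onto the one origin URL instead of treating each mirror as duplicate content.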

[–] jonne@infosec.pub 2 points 1 year ago (2 children)

I believe Lemmy instances disallow crawling by default, so SEO is probably not the motive. It would be nice to find Lemmy results in Google if they can sort out the canonical URL problem. Reddit was a great resource for random questions, and if people move here it should stay easy to find answers.

[–] ptz@dubvee.org 5 points 1 year ago

Nope, it's allowed.

The default robots.txt disallows access to a few paths but not /post or /comment.
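As a sketch, a robots.txt in that style blocks only specific paths and leaves everything else crawlable. The /create_community path is mentioned further down this thread; the other paths here are illustrative examples, not Lemmy's exact defaults:

```text
# Example robots.txt: only the listed paths are disallowed.
# Anything not listed (e.g. /post, /comment) remains crawlable.
User-Agent: *
Disallow: /login
Disallow: /settings
Disallow: /create_community
```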

There are lots of crawler bots hitting my instance (ByteSpider being the most aggressive). I just have a list of User-Agent regexes I use to block them via Nginx. Some, like Semrush, publish IP ranges I can block completely at the firewall (in addition to the UA filters).
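A minimal sketch of that kind of User-Agent filter in Nginx, using the `map` module; the regex list and backend details are illustrative, not a complete or recommended set:

```nginx
# Map known crawler User-Agents (case-insensitive regex) to a flag.
map $http_user_agent $blocked_ua {
    default        0;
    ~*bytespider   1;
    ~*semrushbot   1;
}

server {
    # ... listen / server_name / TLS config ...
    location / {
        if ($blocked_ua) {
            return 403;  # refuse known crawlers outright
        }
        # ... proxy_pass to the Lemmy backend ...
    }
}
```

The `map` approach keeps the regex list in one place and avoids chaining multiple `if` blocks, which Nginx handles poorly.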

[–] Deebster@lemmyrs.org 3 points 1 year ago

What makes you say that? robots.txt just disallows things like /create_community, there are no robots, googlebot, etc. meta tags in the source that I can see, and no nofollow apart from on a few things like feeds.

Also, I'm sure I've seen Lemmy appearing in search results already.

[–] StudioLE@programming.dev 0 points 1 year ago (1 children)
[–] Deebster@lemmyrs.org 4 points 1 year ago* (last edited 1 year ago)

No, I was referring to the bit about having lots of copies of the same content on each different instance. If example.com/c/comm@* had a meta tag giving the origin community as the rel=canonical link target, then only the origin copy would appear in search engines.

rel=nofollow is a good idea too, but less interesting to this semantic HTML nerd.

[–] Zeth0s@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

Also, one could create a personal Lemmy instance without users, create a bot to subscribe to many communities, and end up with a whole database with which to build personalized recommenders targeted at every single user.

I don't know if anyone is doing it now, but it should be pretty easy. One would have everything: subscriptions, upvotes, all comments, all nicely served in a convenient relational DB format.
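As a sketch of how trivially that data could be mined once federated: a single query builds an interest profile per user. The table and column names below are hypothetical, not Lemmy's actual schema:

```sql
-- Hypothetical schema: rank each user's favorite communities by upvotes.
-- A federating instance would receive this activity for every remote
-- user whose communities it subscribes to.
SELECT
    pl.person_id AS user_id,
    c.name       AS community,
    COUNT(*)     AS upvotes
FROM post_like pl
JOIN post p      ON p.id = pl.post_id
JOIN community c ON c.id = p.community_id
GROUP BY pl.person_id, c.name
ORDER BY user_id, upvotes DESC;
```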

[–] bulwark@infosec.pub 2 points 1 year ago

The SEO angle is interesting, thank you for the insight!