this post was submitted on 14 Jun 2023
77 points (100.0% liked)
Lemmy.World Announcements
That's the thing though - what system? Reddit, YouTube, Twitter, Facebook, you name it: nobody has managed to prevent bots. How would Lemmy be more successful at this? It's an extremely challenging battle, unfortunately.
Do those for-profit social media companies actually want to drive down the traffic that makes them look more valuable to advertisers? I get that it's still insanely difficult, and we can't realistically put a captcha on every upvote, but there seems to be a conflict of interest between moderators and site owners when it comes to bot activity.
Arguably, some of the platforms I mentioned have even more of an interest in preventing bots. If I want to place ads on your website, but you can't tell me whether 10 out of 100 impressions are bots or 90 are... I'm not wasting my money, or at the very least, I'll expect rates significantly lower than your competitors'.
I don't know. Wouldn't their motivation be to know exactly how many bots there are (so they could disclose the number if/when asked) but continue to let them proliferate?
Social media companies generally benefit from high traffic for advertiser appeal, but combating bots is crucial for maintaining user trust and engagement. Implementing CAPTCHAs for every upvote may not be feasible, but addressing bot activity is generally in the long-term interest of social media companies.
This message was generated by ChatGPT.
Not sure if you bought that, but if I were applying for an account on Beehaw using an LLM assistant, I bet the odds of passing a human review are better than 50%.
Oh god. Could you imagine doing a captcha every time you upvoted? Please DO NOT do this, Ernest.
Well, what about the system I mentioned? Just have the up and down arrows act as little bot-detection boxes. My understanding is that those "I am not a robot" checkboxes measure mouse speed, precise click locations, hesitation times, etc., and quickly estimate the odds that the clicking behavior came from a human or a bot. I'm probably underestimating what it takes to implement that, but on the user side it's just a click like any other click.
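The idea above can be sketched as a simple heuristic scorer. To be clear, this is a hypothetical illustration, not how any real CAPTCHA vendor actually works: the feature names (`hesitation_ms`, `mouse_speed_px_s`, `offset_px`), the thresholds, and the weights are all made-up assumptions; real systems use far richer signals and trained models.

```python
# Hypothetical sketch of click-behavior bot scoring.
# All features, thresholds, and weights are illustrative assumptions,
# not any real CAPTCHA implementation.
from dataclasses import dataclass


@dataclass
class ClickEvent:
    hesitation_ms: float     # time between hovering the button and clicking
    mouse_speed_px_s: float  # average cursor speed while approaching the target
    offset_px: float         # distance of the click from the target's center


def human_likelihood(click: ClickEvent) -> float:
    """Return a 0..1 score; higher means more likely human."""
    score = 0.0
    # Humans hesitate a little before clicking; bots often fire instantly.
    if 100 <= click.hesitation_ms <= 3000:
        score += 0.4
    # Human cursor movement has moderate speed, not instant teleports.
    if 50 <= click.mouse_speed_px_s <= 2000:
        score += 0.3
    # Humans rarely hit the exact pixel center of a button.
    if click.offset_px > 1.0:
        score += 0.3
    return score


# A plausible human click vs. a robotic "perfect" click:
human = ClickEvent(hesitation_ms=450, mouse_speed_px_s=600, offset_px=4.2)
bot = ClickEvent(hesitation_ms=2, mouse_speed_px_s=10000, offset_px=0.0)
```

The appeal of this approach is exactly what the comment says: from the user's side it's still just one click, with the scoring happening invisibly in the background.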