this post was submitted on 19 Jun 2024
79 points (93.4% liked)

Privacy
I am working on a simple static website that gives visitors basic information about myself and the work I do. I want it as a way to introduce myself to potential clients, collaborators, etc., rather than relying solely on LinkedIn as my visiting card.

This may sound rather oxymoronic, given that I am literally going to be placing (some relevant) details about myself and my work on the internet, but I want to limit the website's exposure to bots, web scrapers, and content collection for LLMs.

Is this a realistic expectation?

Also, any suggestions for privacy-respecting yet inexpensive domain registrars available in Europe would be a great help.

[–] corroded@lemmy.world 31 points 5 months ago (2 children)

Speaking from experience, be careful you don't become overzealous in your anti-scraping efforts.

I often buy parts and equipment from a particular online supplier. I also use custom inventory software to catalog my parts. In the past, I could use cURL to pull from their website, and my software would parse the website and save part specifications to my local database.
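That pull-and-parse workflow might look something like this in Python's standard library (the markup, field names, and supplier URL here are purely hypothetical, not the actual site):

```python
import urllib.request
from html.parser import HTMLParser


class SpecParser(HTMLParser):
    """Collect the text of <td> cells from a hypothetical spec table."""

    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.cells.append(data.strip())


def parse_specs(html):
    """Pair alternating name/value cells into a spec dictionary."""
    p = SpecParser()
    p.feed(html)
    return dict(zip(p.cells[0::2], p.cells[1::2]))


# The fetch itself would be a plain HTTP GET (not run here):
# html = urllib.request.urlopen("https://supplier.example/part/123").read().decode()
# specs = parse_specs(html)
```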

They have since enacted intense anti-scraping measures, to the point that cURL no longer works. I've had to resort to having the software launch Firefox to load the web page, then the software extracts the HTML from Firefox.

I doubt that their goal was to block customers from accessing data for items they purchased, but that's exactly what they did in my case. I've bought thousands of dollars of product from them in the past, but this was enough of a pain in the ass to make me consider switching to a supplier with a decent API or at least a less restrictive website.

Simple rate limiting may have been a better choice.

[–] IphtashuFitz@lemmy.world 9 points 5 months ago (1 children)

Try using “curl -A” to specify a User-Agent string that matches Chrome or Firefox.

[–] corroded@lemmy.world 2 points 5 months ago

I probably should have specified I'm using libcurl, but I did try the equivalent of what you suggested. I even tried setting a list of user agents and having it cycle through them. None of them worked. A lot of anti-scraping methods use much more complex schemes than just validating the user agent. In some cases, even a headless browser will be blocked.
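The rotation described here looks roughly like this in Python's stdlib (the user-agent strings are illustrative and go stale with every browser release; with libcurl the equivalent knob is `CURLOPT_USERAGENT`):

```python
import itertools
import urllib.request

# Illustrative desktop-browser strings, not guaranteed to match real releases.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/125.0 Safari/537.36",
]
_cycle = itertools.cycle(USER_AGENTS)


def make_request(url):
    """Build a request whose User-Agent header rotates on each call."""
    return urllib.request.Request(url, headers={"User-Agent": next(_cycle)})


# urllib.request.urlopen(make_request("https://supplier.example/part/123"))
```

As the parent comment notes, this alone rarely beats modern bot detection, which also fingerprints TLS handshakes, JavaScript execution, and request timing.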
