Write a scraper using Python and Selenium or something. You may have to manually log in as part of it.
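Something like this is what I mean -- a rough sketch, not tested against that blog; the URLs and the CSS selector are placeholders you'd swap for the real ones:

```python
# Rough sketch: open a browser, let you log in by hand, then save each post.
# blog.example.com and the "article a" selector are placeholders.
import pathlib

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://blog.example.com/login")

# Manual login step: authenticate in the browser window that opens,
# then come back here and press Enter.
input("Log in in the browser window, then press Enter to continue... ")

driver.get("https://blog.example.com/archive")
# Grab all the post URLs up front so navigating doesn't stale the elements.
urls = [a.get_attribute("href")
        for a in driver.find_elements(By.CSS_SELECTOR, "article a")]

out = pathlib.Path("posts")
out.mkdir(exist_ok=True)
for i, url in enumerate(urls):
    driver.get(url)
    (out / f"post_{i:04d}.html").write_text(driver.page_source, encoding="utf-8")

driver.quit()
```

Collecting the hrefs before navigating matters: once you call `driver.get()` again, the old element handles go stale.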
Which of the posts didn't work on the archive.org Wayback Machine? I tried your post “How Can You Clean the Air” and it worked, though it took a bit of time due to a couple of redirects.
Oh weird, whenever I tried using it on one of his posts it wouldn't archive. It's not my blog.
Good find on the solution; there are some good alternatives listed on the Monolith GitHub as well. However, MoW looks great too, too bad it's Chrome-only from what I can see :(
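If anyone wants to try monolith, basic usage is just pointing it at a URL. A minimal sketch driving it from Python (the URL is a placeholder, and monolith needs to be installed and on your PATH):

```python
# Minimal sketch: call the monolith CLI from Python to snapshot one page.
# The URL is a placeholder; monolith must be installed and on PATH.
import subprocess

subprocess.run(
    ["monolith", "https://blog.example.com/some-post", "-o", "some-post.html"],
    check=True,
)
```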
Yeah, I had to get on Chrome just for this :/ hopefully someone forks it into a Firefox add-on.
Does this actually modify the files when monolith embeds everything into one file?
Maybe try an alternative frontend and then the regular methods like wget?
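If you go the wget route, the usual mirroring recipe looks something like this -- a sketch only, with a placeholder URL, wrapped in Python so you can build on it:

```python
# Sketch: mirror a whole blog with wget, driven from Python.
# The flags are standard wget options; the URL is a placeholder.
import subprocess

subprocess.run([
    "wget",
    "--mirror",            # recurse and keep timestamps
    "--convert-links",     # rewrite links so the copy browses offline
    "--adjust-extension",  # save pages with .html extensions
    "--page-requisites",   # pull in the CSS/JS/images each page needs
    "--no-parent",         # don't wander above the starting directory
    "https://blog.example.com/",
], check=True)
```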
Pose this question to ChatGPT (3.5 or 4). Ask it to help you make a (Python?) script to do this. Feed it the errors, and you can get there pretty quickly and learn along the way.
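To give an idea of the kind of starting point you'd iterate on -- a bare-bones sketch assuming the posts are publicly reachable; the index URL and the selector are placeholders:

```python
# Bare-bones starting point to iterate on: fetch an index page, follow the
# post links, save each post. URL and "article a" selector are placeholders.
import pathlib
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

INDEX = "https://blog.example.com/archive"  # placeholder index page

resp = requests.get(INDEX, timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

out = pathlib.Path("posts")
out.mkdir(exist_ok=True)
for i, a in enumerate(soup.select("article a[href]")):
    url = urljoin(INDEX, a["href"])  # resolve relative links
    page = requests.get(url, timeout=30)
    page.raise_for_status()
    (out / f"post_{i:04d}.html").write_text(page.text, encoding="utf-8")
```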
Lol, downvoted for...?