freedomPusher

joined 3 years ago
[–] freedomPusher@sopuli.xyz 1 points 5 months ago (1 children)

!isitdown@infosec.pub is a better place for this info.

[–] freedomPusher@sopuli.xyz 1 points 5 months ago* (last edited 5 months ago)

And speaking of replacement links, we need a way to add them and have the alternative links voted on; ultimately a replacement link could outrank the original link and take the spotlight. I will add this to the OP.

[–] freedomPusher@sopuli.xyz 1 points 5 months ago (1 children)

Perhaps. Simplicity is important. But what if I circumvent the exclusivity of an article by finding a mirrored copy on archive.org? If the content is quite insightful, I would then want to say it is both exclusive and insightful. Those metrics together could then be used to work out whether it’s worthwhile to find a replacement source for the same content.

[–] freedomPusher@sopuli.xyz 1 points 5 months ago* (last edited 5 months ago)

Which is almost as ridiculous as acting like an admin shouldn’t be able to ban someone from their instance.

The admin did not know why they were suppressing the messages. Apparently they did not keep notes. So they reversed the action. But no one here said admins should not have the power to ban. Quite the contrary: they should. But because they hold that power, its use should not be disproportionate. One million people should not lose access to civil posts from Bob because Mallory the admin did not like one of Bob’s ideas. This is why decentralisation is important.

[–] freedomPusher@sopuli.xyz 0 points 5 months ago* (last edited 5 months ago) (2 children)

Your account is 3 years old (strangely almost no posts or comments though)

Then your view is being restricted¹. I don’t know how much lemmyworld admins shelter their users in general, but the Mastodon analogue to #lemmyworld would be mastodon.social, and mastodon.social is quite loose with the censor button. There have been conversations where people only saw part of the thread and were confused because they knew someone else was engaged but could not see the whole exchange. After investigation it turned out not to be a federation issue; the mastodon.social admin had simply decided to block a particular person. I would not be surprised if lemmyworld were blocking me in some way, because anyone who outspokenly advocates for decentralisation directly undermines lemmyworld’s position (lemmyworld is centralised both behind Cloudflare and by its disproportionate userbase).

and I’ve noticed the people that were here before everyone else seems to hate the new wave of people that showed up a year ago

The fediverse was created out of love for a #decentralised free world. Centralised nodes like #LemmyWorld, #FBThreads, #shItjustWorks, #LemmyCA, #LemmyOne, etc. work against the philosophy of decentralisation. Users on those nodes are either not well informed about the problems of concentrated imbalances of power, or they simply do not care and do not value digital rights; they only care about their personal reach. Fixing the bug reported here addresses the former (uninformed users).

The bug report herein is specifically designed to be an inclusive alternative to what you suggest, which is:

Why don’t you just convince your admin to defederate? Or go make you own instance for just people you agree with?

Locking people out on the crude basis of which node they come from is somewhat comparable to what Cloudflare (and lemmyworld) does by discriminating against people on the crude basis of IP reputation. It over-selects and under-selects at the same time if the goal is to separate good links from bad links. Anyone can post a shitty link. A majority of shitty links would come from the lemmyworld crowd, but that is not a good criterion for spam/ham separation. The fedi needs to improve by tagging bad links appropriately, and that tagging should not be influenced by the host the author uses.

¹(edit) you should see over 400 posts and comments. Visit https://sopuli.xyz/u/freedomPusher to see the real figures.

[–] freedomPusher@sopuli.xyz 2 points 5 months ago* (last edited 5 months ago)

Ungoogled Chromium with uMatrix running over a Tor circuit gave Yahoo’s typical blurred out paywall yesterday. Today (likely on a different Tor circuit) it gives “Too many requests -- error 999.”

[–] freedomPusher@sopuli.xyz 5 points 5 months ago* (last edited 5 months ago) (2 children)

Engadget/Yahoo is fully enshittified. If you can read that article, it probably means your browser is insufficiently defensive. A tl;dr bot would be useful here.

[–] freedomPusher@sopuli.xyz 0 points 5 months ago* (last edited 5 months ago)

It’s not a bug if it works as designed.

What you claim here is that software cannot have a defective design. Of course software can have design defects, and they are the hardest to correct.

I’d also accept “it used to do this and it doesn’t any more and not on purpose”.

This is conventional wisdom. Past behavior is no more an indication of correctness than of defectiveness. GREP’s purpose was to process natural language. A line feed is not a sensible terminator in that application. For 50 years people just lived with the limitation, worked around it, or adapted to single-token searches. It does not cease to be a defect because workarounds were available.
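
To illustrate the kind of workaround people lived with, here is a minimal sketch (the file name and the phrase are placeholders): flatten the line breaks first so a phrase that happens to be split across two lines still matches.

# newlines become spaces, so the search is no longer line-bound
$ tr '\n' ' ' < federalist.txt | grep -o 'general welfare' | wc -l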

that doesn’t make it a bug if it was never designed in to the program.

The original design was implemented on an extremely resource-poor system by today’s standards, where 64k was a HUGE amount of space. It was built to function under limitations that no longer exist. I would say the design is not defective so long as your target platform is a PDP-11 from the 1970s. Otherwise the design should evolve along with the tasks and machines.

[–] freedomPusher@sopuli.xyz 1 points 5 months ago* (last edited 5 months ago) (2 children)

If all you’re advocating for is allowing grep to use some other character as a delimeter, I might be able to get behind something like bash’s $IFS or awk’s $FS variable (maybe). But I couldn’t get behind anything backwards-incompatible.

Of course. GREP has an immeasurable number of scripts dependent on it worldwide going back 50 years, and it’s among Debian’s 23 essential packages:

dpkg-query -Wf '${Package;-40}${Essential}\n' | grep yes   # list the packages flagged Essential: yes

Changing grep’s default behavior now would bring the world down. Dams would shatter. Nuclear power plants would melt down. Traffic lights would go berserk. It would be like a Die Hard-style “fire sale”. Planes would fall out of the sky. Skynet would come online and wipe us all out. It would have to be a separate option.
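
GNU grep arguably already has a precedent for such a separate option: -z / --null-data changes the record terminator from newline to NUL without touching the default behavior, and combined with -P it lets a pattern cross line breaks. A rough sketch (GNU grep assumed; the sample text is made up):

# -z: records end at NUL rather than newline, so this whole input is one record
# -P: PCRE, where \s also matches the embedded newline
$ printf 'cruel and\nunusual punishment\n' | grep -zoP 'cruel\s+and\s+unusual'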

TIL there are people who (try to) use grep for natural language.

The very first task grep was created for was specifically natural language input. Search “Federalist Papers grep”. There’s also a short documentary about this out in the wild somewhere, but I don’t have a link handy.

Oh, and this is 100% feature/enhancement request territory. Not a bug report in any sense.

This is conventional wisdom coming from a viewpoint that misses grep’s intended purpose.

But now that the defect has been entrenched for ~50 years, it’s perhaps fair enough to leave grep alone. For me it depends on how lean the improvement could be. Bloating grep too much would not be favorable, but substantial replication of code between two different tools is also unfavorable. Small is good, but Swiss-army-knife tools also bring great value if they can stay lean and internally simple.

I don’t know if you’re saying “because PDFGREP is good at handling natural language, grep should be too”

Not at all. They both have the same problem. But this same limitation in pdfgrep is a nuisance in more situations, because PDFs are proportionally more likely to contain natural language input.

Either way, I don’t follow how PDFGREP is relevant to discussions about grep

They have the same expression language and roughly the same options. PDFGREP is most likely not much more than a grep-style wrapper that extracts the text from the PDF first.
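
Conceptually the equivalent pipeline with stock tools would be something like this sketch (document.pdf is a placeholder; pdftotext is from poppler-utils and “-” sends the extracted text to stdout):

$ pdftotext document.pdf - | grep -n 'some phrase'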

[–] freedomPusher@sopuli.xyz 1 points 5 months ago (1 children)

grep isn’t really designed as a natural language search tool

My understanding of GREP’s history is that Ken Thompson created grep to do some textual analysis on The Federalist Papers, which to me sounds like it was designed for processing natural language. But it was on a PDP-11, which had resource constraints. Lines of text would be more uniform to manage than sentences, given the limited resources of the 1970s.

Thanks for the Perl code. Though I might favor sed or awk for that job. Of course that also means complicating Emacs’ grep-mode facility. And for PDFs, I guess I’d opt for pdfgrep’s limitations over doing a text extraction on every PDF.
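
For the record, the sort of awk approach I have in mind is a sketch along these lines (GNU awk assumed, since a regex record separator is a gawk extension; the file and the search terms are placeholders): treat sentence-ending punctuation as the record separator, then match within sentences rather than lines.

# each record is one sentence; print the sentences containing both words
$ awk -v RS='[.!?]+' '/privacy/ && /control/ {print NR ": " $0}' notes.txt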

[–] freedomPusher@sopuli.xyz 2 points 5 months ago

I think it’s too hard for many to grasp the full consequence of surveillance via forced banking. They link cash to privacy, which they then mentally reduce to “confidentiality”. It’s a lossy reduction, but in the naïve brain privacy=confidentiality. They don’t realise privacy is about /control/, not just the purely infosec concept of confidentiality, which then leads to the mental short-sightedness of thinking they’re dealing with “paranoia” (which is hinted at in vzq’s next reply).

From there, I don’t have the answer as to how to convey the full depth of the concept of privacy within the span of a post or comment short enough not to be automatically ignored.

[–] freedomPusher@sopuli.xyz 2 points 5 months ago* (last edited 5 months ago)

UPDATE: it just now happened again, this time not with the admin account (@QuentinCallaghan@sopuli.xyz) but with another user account. I was refreshing my profile, and the user @baltakatei@sopuli.xyz appeared in the profile pulldown position on my profile page. This time I had time to take a screenshot before it changed:

It’s interesting that it shows my profile page, but not as I see it. That is, when I visit my own profile page I normally have a “subscribed” sidebar. This shows what someone else would see if they visited my profile while logged in, which still differs from the logged-out view (since send-message options were shown). So I wonder if I could have sent myself a message.

 

Many political parties are allowing Cloudflare to block some demographics of voters from seeing election info on their own candidates. These political parties are running exclusive websites:

  • PS/Vooruit (Socialist / Parti Socialiste [fr/nl])
  • DéFI (previously part of the MR, now more at the center [fr])
  • CD&V (center / Christen Democratisch en Vlaams [nl])
  • Groen (Green Party [nl])
  • Open VLD (liberal [nl])

Effectively they are operating in an anti-democratic fashion. Open and inclusive access to election info is paramount to democracy.

The political parties who are running inclusive websites are (quite ironically) the right-wing parties. And funnily enough, some of the right-wing parties even have an English version of their website, which defies their historic reputation for being relatively xenophobic. Judging purely by the digital rights and digital inclusion fostered by their website implementations, the right-wingers are the clear winners here.

Voting left entails supporting parties that suppress election info from some demographics of people. Voting right is a non-starter on general principle (e.g. climate denial). Voting is mandatory but there is said to be a “none of the above” option.

(edit) OTOH, the French green party (ecolo.be) has an open website. Perhaps that’s a decent way to vote.

 

The 112.be website drops all Tor traffic, which in itself is a shit show. No one should be excluded from access to emergency app info.

So this drives pro-privacy folks to visit http://web.archive.org/web/112.be/ but that just gets trapped in an endless loop of redirection.

Workaround: appending “en” breaks the loop. But that only works in this particular case. There are many redirection loops on archive.org and 112.be is just one example.
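
The loop is easy to reproduce without a browser; a sketch with curl (exit code 47 means curl gave up after hitting the redirect cap):

$ curl -sSIL --max-redirs 5 'http://web.archive.org/web/112.be/'
# the workaround described above
$ curl -sSIL --max-redirs 5 'http://web.archive.org/web/112.be/en'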

Why posted here: archive.org has their own bug tracker, but if you create an account on archive.org they will arbitrarily delete the account without notice or reason. I am not going to create a new account every time there is a new archive.org bug to report.

 

I’ve posted twice to !law@links.esq.social. The first post was 6 months ago and the other today. There is no interaction. I just thought it was a dead community, but when I directly visit this page:

https://links.esq.social/c/law?dataType=Post&page=1&sort=New

neither of my posts appears. This behavior is like being shadow banned. Does Lemmy support that? I thought not. But I also didn’t post anything really controversial… nothing that I would imagine warranting a shadow ban.

Can other people see the sopuli copy of my posts? They are:

(edit) If I visit the sopuli version of the timeline:

https://sopuli.xyz/c/law@links.esq.social

from a logged-out browser, the posts are there. So they are visible, but only to those who visit sopuli’s version of the timeline. Not sure what’s going on here, because it’s not a ghost node (links.esq.social is still online).
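
For anyone who wants to compare what the two instances actually serve without a login, a rough sketch against the Lemmy HTTP API (assuming the usual /api/v3/post/list endpoint and its community_name parameter; jq is only there for readability):

# what the community’s home instance returns
$ curl -s 'https://links.esq.social/api/v3/post/list?community_name=law&sort=New' | jq '.posts[].post.name'
# what sopuli’s federated copy returns
$ curl -s 'https://sopuli.xyz/api/v3/post/list?community_name=law@links.esq.social&sort=New' | jq '.posts[].post.name'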

/cc @andrew@links.esq.social

 

Tedious to use. There is no way to import a list of URLs to download; files must be entered one by one by hand.

No control over when it downloads. It starts immediately when there is an internet connection. This can be costly for people on metered internet connections. Stop and Go buttons are needed, and it should start in a stopped state.

When entering a new file into the list, the previous file shows a bogus “error” status.

Error messages are printed simply as “Error”, with no further information.

There is an embedded browser. What for?

Files that are already present in the download directory because another app put them there are listed by GigaGet at “100%”. How does GigaGet know those files are complete when it does not even have a URL for them (and thus no way to check the content-length)?
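
For contrast, here is roughly how completeness could be verified when the URL is actually known; this is just a sketch (GNU userland assumed; $URL and downloaded.bin are placeholders), not anything GigaGet does:

# the size the server advertises (the last Content-Length wins when redirects add extra header blocks)
$ want=$(curl -sIL "$URL" | awk 'tolower($1)=="content-length:" {n=$2} END {print n+0}')
# the size of the file on disk
$ have=$(stat -c %s downloaded.bin)
$ [ "$want" -eq "$have" ] && echo complete || echo 'incomplete or unknown'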

 

Navi is an app in F-Droid to manage downloads. It’s really tedious to use because there is no way to import a list of URLs: you either have to tap out each URL one at a time, or do a lot of copy-paste from a text file. Then it forces you to choose a filename for each download; it does not default to the name of the source file.
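
For contrast, feeding a plain list of URLs to a stock downloader and keeping the source filenames is a one-liner; a sketch where urls.txt is a hypothetical file with one URL per line:

# -i: read URLs from a file, -c: resume partial downloads, -P: destination directory
$ wget -c -P ~/Downloads -i urls.txt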

bug 1


For a lot of files it gives:

Error: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.

The /details/ page for the broken download neglects to show the error message, much less explain what the error means.

bug 2


Broken downloads are listed under a tab named “completed”.

bug 3


Every failed fetch generates notification clutter that cannot be cleaned up. I have a dozen or so notifications of failed downloads. Tapping a notification does nothing, and the notification is never cleared.

bug 4


With autostart and auto-connect both disabled, Navi takes the liberty of making download attempts as soon as there is an internet connection.

bug 5?


A web browser is apparently built-in. Does it make sense to embed a web browser inside a download manager?

 

cross-posted from: https://sopuli.xyz/post/10725880

I simply wanted to submit a bug report. This is so fucked up. The process so far:

① solved a CAPTCHA just to reach a registration form (I have image loading disabled but the graphical CAPTCHA puzzle displayed anyway; wtf Firefox?)
② disposable email address rejected (so Bitbucket can protect themselves from spam but other people cannot? #hypocrisy)
③ tried a forwarding acct instead of a disposable one (accepted)
④ another CAPTCHA, this time Google reCAPTCHA. I never solve these because they violate so many digital rights principles and I boycott Google, but I made an exception for this experiment. The puzzle was empty because I disable images (can’t afford the bandwidth). Exceptionally, I enabled images and solved the piece of shit. Could not work out whether a furry cylindrical blob sitting on a sofa was a “hat”, but managed to solve enough puzzles.
⑤ got the green checkmark ✓
⑥ clicked “sign up”
⑦ “We are having trouble verifying reCAPTCHA for this request. Please try again. If the problem persists, try another browser/device or reach out to Atlassian Support.”

Are you fucking kidding me?! Google probably profited from my CAPTCHA work before showing me the door. That should be illegal. Really folks, some kind of backlash is needed. I have my vision and still couldn’t get registered (from Tor). Imagine a blind Tor user… or even a blind clearnet user going through this shit. I don’t think the first CAPTCHA to reach the form even had an audio option.

Shame on #Bitbucket!

⑧ attempted to e-mail the code author:

status=bounced (host $authors_own_mx_svr said: 550-host $my_ip is listed at combined.mail.abusix.zone (127.0.0.11); 550 see https://lookup.abusix.com/search?q=$my_ip (in reply to RCPT TO command))

#A11y #enshitification

 

The linked doc is a PDF which looks very different in Adobe Acrobat than it does in evince and okular, which I believe are both based on the same rendering library (Poppler).

So the question is: is there an alternative free PDF viewer that does not rely on that library for rendering?
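
One way to sanity-check whether the difference really comes from the rendering library is to rasterise the same page with a Poppler tool and with a non-Poppler engine (MuPDF has its own) and compare the output; a sketch where doc.pdf is a placeholder:

# Poppler’s renderer (poppler-utils)
$ pdftoppm -r 150 -f 1 -l 1 -png doc.pdf poppler_page
# MuPDF’s renderer
$ mutool draw -r 150 -o mupdf_page.png doc.pdf 1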

#AskFedi

 

If you’re logged out and reading a thread, you should be able to log in in another tab and then do a forced refresh (control-shift-R), and it should show the thread with logged-in controls. For some reason the cookie isn’t being passed, or (perhaps more likely) the cookie is insufficient because Lemmy is using some mechanism other than cookies.

Scenario 2:

You’re logged in and reading threads in multiple tabs. Then one tab spontaneously becomes logged out after you take some action. Sometimes a hard refresh (control-shift-R) recovers it, sometimes not. It’s unpredictable. But note that the logged-in state is preserved in other tabs. So if several hard refreshes fail, I have to close the tab and use another tab to navigate back to where I was. And it seems the navigation matters… if I just copy the URL for where I was (same as opening a new tab), it’s more likely to fail.

In any case, there are no absolutes… the behavior is chaotic and could be related to this security bug.

 

Some people think Cloudflare is not a “walled garden”. This article goes to great lengths to show not only that Cloudflare is a #walledGarden, but that it’s actually more of a walled garden than the well-known ones (Facebook & Google).

 

People on a tight budget are limited to capped internet connections, so we disable images in our browser settings. Some environmentalists do the same to avoid energy waste. If we need to download a web-served file (image, PDF, or anything potentially large), we run this command:

$ curl -LI "$URL"

The HTTP headers should contain a content-length field. This lets us know before we fetch something whether we can afford it (like seeing a price tag before buying something).
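
Building on that, curl can even enforce the budget for us; a sketch where the 10 MB cap is arbitrary (and note that if the server withholds the size, as Cloudflare-fronted sites often do, the option cannot protect you):

# refuses to start the transfer if the advertised size exceeds the cap (curl exits with code 63)
$ curl -L --max-filesize 10000000 -O "$URL"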

#Cloudflare has taken over at least ~20% of the web. It fucks us over in terms of digital rights in so many ways. And apparently it also makes the web less usable for poor people in two ways:

  • Cloudflare withholds content length information
  • Cloudflare blocks people behind CGNAT, which is commonly used in impoverished communities due to the limited number of IPv4 addresses.