this post was submitted on 26 Jul 2023
323 points (100.0% liked)

Hey everyone, thank you for your patience, and thank you to everyone who engaged constructively. It is clear based on the feedback we’ve received that a bigger discussion needs to take place, and I’m not sure my personal repository is the best place to do that - we are looking for a better forum and will update when we have found one. We want to continue the discussion and collaborate to address your core concerns in an improved explainer.

I want to be transparent about the perceived silence from my end. In the W3C process it is common for individuals to put forth early proposals for new web standards, and host them in a team member's personal repository while pursuing adoption within a standards body. My first impulse was to jump in with more information as soon as possible - but our team wanted to take in all the feedback, and be thorough in our response.

That being said, I did want to take a moment to clarify the problems our team is trying to solve that exist on the web today and point out key details of this early stage proposal that may have been missed.

WEI’s goal is to make the web more private and safe

The WEI experiment is part of a larger goal to keep the web safe and open while discouraging cross-site tracking and lessening the reliance on fingerprinting for combating fraud and abuse. Fraud detection and mitigation techniques often rely heavily on analyzing unique client behavior over time for anomalies, which involves large-scale collection of client data from both human users and suspected automated clients.

Privacy features like user-agent reduction, IP reduction, preventing cross-site storage, and fingerprint randomization make it more difficult to distinguish or re-identify individual clients, which is great for privacy but makes fighting fraud harder. This matters to users because making the web more private without providing new APIs to developers could lead to websites adding more:

- sign-in gates to access basic content
- invasive user fingerprinting, which is less transparent to users and more difficult to control
- excessive challenges (SMS verification, captchas)

All of these options are detrimental to a user’s web browsing experience, either by increasing browsing friction or significantly reducing privacy.

We believe this is a tough problem to solve, but a very important one that we will continue to work on. We will continue to design, discuss, and debate in public.

WEI is not designed to single out browsers or extensions

Our intention for web environment integrity is to provide browsers with an alternative to the above checks and make it easier for users to block invasive fingerprinting without breaking safety mechanisms. The objective of WEI is to provide a signal that a device can be trusted, not to share data or signals about the browser on the device.

Maintaining users' access to an open web on all platforms is a critical aspect of the proposal. It is an explicit goal that user agents can browse the web without this proposal, which means we want the user to remain free to modify their browser, install extensions, use developer tools, and, importantly, continue to use accessibility features.
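To make the shape of that signal concrete, here is a minimal sketch of how a page might request an attestation token, assuming the navigator.getEnvironmentIntegrity() entry point sketched in the early explainer; the names, the encode() method, and the return shape are illustrative and may change.

```typescript
// Minimal sketch of requesting an integrity attestation from a page.
// The getEnvironmentIntegrity() entry point and encode() method follow the
// early explainer's example and are assumptions here, not a finalized API.
async function requestIntegrityToken(sessionNonce: string): Promise<string | null> {
  try {
    // The token is bound to server-provided data (here, a session nonce)
    // so it cannot be replayed in a different context.
    const attestation = await (navigator as any).getEnvironmentIntegrity(sessionNonce);
    return attestation.encode(); // encoded payload signed by the attester
  } catch {
    // No token: the user opted out, the device is unsupported, or the
    // request fell into the hold-back group. All three look identical.
    return null;
  }
}
```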

WEI prevents ecosystem lock-in through hold-backs

We had proposed a hold-back to prevent lock-in at the platform level. Essentially, some percentage of the time, say 5% or 10%, the WEI attestation would intentionally be omitted, and would look the same as if the user had opted out of WEI or the device were not supported.

This is designed to prevent WEI from becoming “DRM for the web”. Any site that attempted to restrict browser access based on WEI signals alone would also restrict access to a large enough proportion of attestable devices to disincentivize this behavior.
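As a rough sketch of that mechanism, assuming the decision is made on the attester/browser side and using the 10% figure above purely as an illustration:

```typescript
// Hold-back sketch: even on fully attestable devices, a fixed fraction of
// requests deliberately gets no token, so sites cannot hard-require WEI
// without also locking out real, trustworthy users. The rate and the
// decision point are illustrative assumptions, not values from the proposal.
const HOLD_BACK_RATE = 0.10;

function maybeIssueToken(issueToken: () => string): string | null {
  if (Math.random() < HOLD_BACK_RATE) {
    // Indistinguishable from an opt-out or an unsupported device.
    return null;
  }
  return issueToken();
}
```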

Additionally, and this could be made clearer in the explainer, WEI is an opportunity for developers to use hardware-backed attestation as an alternative to captchas and other privacy-invasive integrity checks.
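A minimal sketch of how a site could consume the token in that spirit, falling back to a captcha rather than a hard block when no token is present (verifyWithAttester() is a hypothetical placeholder, not part of the proposal):

```typescript
// Server-side sketch: verify the token when present, soft-fallback when absent.
interface IntegrityVerdict {
  trusted: boolean;
}

// Hypothetical helper that validates the attester's signature and content binding.
declare function verifyWithAttester(token: string): Promise<IntegrityVerdict>;

async function assessRequest(token: string | null): Promise<"allow" | "challenge"> {
  if (token === null) {
    // Opt-out, unsupported device, or hold-back; since these cases are
    // indistinguishable, the sensible response is a soft fallback
    // (e.g. a captcha), never a hard block.
    return "challenge";
  }
  const verdict = await verifyWithAttester(token);
  return verdict.trusted ? "allow" : "challenge";
}
```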

WEI does not disadvantage browsers that spoof their identity

The hold-back and the lack of browser identification in the response provide cover to browsers that spoof their user agents and might otherwise be treated differently by sites. This also includes custom forks of Chromium that web developers create.

Let’s work together on finding the right path

We acknowledge that facilitating an ecosystem that is open, private, and safe at the same time is a difficult problem, especially when working at the scale and complexity of the web. We welcome collaboration on a solution for scaled anti-abuse that respects user privacy, while maintaining the open nature of the web.

[–] HaiZhung@feddit.de 2 points 1 year ago (1 children)

As a counterpoint, IMHO Google has the best track record regarding privacy of all the big tech firms. Google's data was never sold, leaked, or abused by employees as far as I can tell.

This is in stark contrast to companies like meta and twitter.

Maybe Google isn’t as good at communicating that fact, but what is your reason for the distrust in this particular case?

[–] mrmanager@lemmy.today 8 points 1 year ago* (last edited 1 year ago) (1 children)

Meta and Twitter are social media companies. They have access to people's tweets. It's similar to having access to these messages you and I are typing, except many people use their own names there.

It's not too bad privacy-wise, just social messages.

Google on the other hand has the private searches of billions of people. Everything you put into a search engine because you are worried, afraid, sick, or curious about something.

Google records all this private activity and saves it under your personal profile, and then uses cookies to track every website you visit (not only through Google Search but through Google Analytics cookies that exist on almost every website).

They also combine this data with whatever you are doing on your Android phone, what places you go to using Google Maps, what video meetings you are having with Google Meet, what emails you have in Google Mail, what videos you watch on YouTube, what calendar events you have in Google Calendar... And so on.

Then they feed all this data into algorithms designed to figure out what you are likely to do next. They sell this data to advertisers so they can target you with ads. They also send this data to American agencies like the NSA to be stored and analyzed.

There is a giant difference here between Google and the other companies you mentioned. Google is literally watching moments from people's entire lives, while the others only see your social media messages.

This is why Google is in a completely absurd class of its own when it comes to anti-privacy. No other company has this amount of data about people's every waking moment.

Now they use their dominant position to try to take over the entire web, so it's no longer possible to escape them by using a different browser, blocking cookies and tracking, or using another search engine.

If everyone is forced to use their browser, we have lost everything good about the web.

They should be treated like the cancer on the free web that they really are.

[–] HaiZhung@feddit.de 2 points 1 year ago* (last edited 1 year ago) (1 children)

Google does not sell data to advertisers; that is incorrect.

You are correct that Google cross-correlates some data for integrating features, but as I said, you can just go and delete your data, and it will continue to work just fine.

Maybe it’s also useful to remind oneself that you do get lots of services from Google for free - and considering they are free (!), imho, Google is taking about the most ethical approach it economically can. (I.e., they will use your data to tune full integration of their products and serve ads to you, BUT you can always opt out and delete it.)

I fail to see how Meta and Twitter are so much different in the range of products they offer. Meta, e.g., operates the largest private messaging app on the planet, and they DID sell (or accidentally leak, however you want to put it, see Cambridge Analytica) their data.

[–] valveman@lemmy.eco.br 2 points 1 year ago* (last edited 1 year ago)

None of these corporations can be trusted at all IMO, simply because they're corporations in the first place, and WILL always choose what's better for them rather than what's better for the community. That's why I advocate for open source every time I can.

And OK, everything you said is true and valid, but go ahead and try to convince non-tech people to delete their accounts, while explaining that all the little comforts they have will be taken away with it. They'll simply laugh at you and carry on. That's how Google and other corporations that follow this "free services" model got so big and influential, and now they're using their size to do what corporations do: increase profits.

Another problem with this model is that you can't really tell what Google is doing with the data they collect. Can you/anybody tell whether Google didn't feed their Bard AI the data they collected from you? Can you/anybody tell whether Google ain't using your/their data for anything except showing targeted ads? AFAIK, you can't. Even if they update their ToS regularly, tell you they've changed it, and say "if you continue using the service it means you agree to the new Terms of Service", do you really think people will actually take the time to read the same 20-page ToS every time it changes? Most people I know don't even read it the first time!

In the end, you may say they're being as ethical as possible, and the users are simply too lazy and everything bad that happens to them is entirely their own fault. You wouldn't be wrong at all, but that's not how the world works.

Also, sorry for the wall of text.