this post was submitted on 21 Jul 2023
929 points (100.0% liked)

The much maligned "Trusted Computing" idea requires that the party you are supposed to trust actually deserves that trust, and Google is DEFINITELY NOT worthy of being trusted. This is a naked power grab to destroy the open web for Google's ad profits, no matter the consequences: it would put heavy surveillance in Google's hands, eliminate ad-blocking, break any and all accessibility features, and obliterate any competing platform. This is very much opposed to what the web is.

[–] heliodorh@beehaw.org 33 points 1 year ago (12 children)

I'm a non-techie and don't understand half of this, but from what I do understand, this is a goddamn nightmare. The world is seriously going to shit.

[–] JVT038@feddit.nl 54 points 1 year ago (10 children)

My ELI5 version:

Basically, the 'Web Environment Integrity' proposal is a new mechanism that lets a website verify whether a visitor is actually a human or a bot.

Currently, there are captchas where you need to select all the crosswalks, cars, bicycles, etc., which check whether you're a bot, but these can sometimes be bypassed by the bots themselves.

This new 'Web Environment Integrity' thing goes as follows:

  1. You visit a website
  2. Website wants to know whether you're a human or a bot.
  3. Your browser (or the 'client') will request an 'environment attestation' from an 'attester'. This means that your browser (such as Firefox or Chrome) will request approval from some third party (like Google or something), and that third party (referred to as the 'attester') will send your browser a message which basically says 'This user is a bot' or 'This user is a human being'.
  4. Your browser receives this message and then sends it to the website, together with the 'attester public key'. The website can use the 'attester public key' to verify that the attestation really came from an attester it trusts, and then checks whether that attester says you're a human or not (a rough sketch of this check follows below).

I hope this clears things up and if I misinterpreted the GitHub explainer, please correct me.

The reason people (rightfully) worry about this is that it gives attesters A LOT of power. If Google decides they don't like you, they won't tell the website that you're a human. Or maybe, if Google doesn't like the website you're trying to visit, they won't even cooperate with attesting at all. Lots of things can go wrong here.

[–] HarkMahlberg@kbin.social 19 points 1 year ago (2 children)

Your final paragraph is the real kicker. Google would love nothing more than to be the ONLY trusted Attester and for Chrome to be the ONLY browser that receives the "Human" flag.

[–] will6789@feddit.uk 11 points 1 year ago

And I'm sure Google definitely wouldn't require your copy of Chrome to be free of any Ad-Blocking or Anti-Tracking extensions to get that "Human" flag /s
