yeah like. this is just a byproduct of how federation works currently. i don't even know how you'd begin to design a federated system where some of these critiques can't be levied
Anything that is visible to another party can be captured - even a 1:1 communication doesn't guarantee that the other party won't save the data and then spread it. The only things that are truly private are thoughts you never share with anyone in any fashion. As soon as information is shared, it is no longer private.
Past this point, it's a matter of what you consider reasonably private. You could design a system where users control their own data through a set of public and private keys, so that a key must be active to view content, but as stated above, even in such a case, a user revoking keys does not stop other people from making copies of the data. This is akin to screenshotting an NFT: for all intents and purposes, a copy of the data as it existed at the time of copying is now publicly available.
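To make the revocation point concrete, here's a minimal toy sketch (plain Python, no real cryptography - the `KeyGatedPost` class and the key names are hypothetical, purely for illustration): revoking a key locks future reads of the original, but it can't touch the copy a reader already made while their key was active.

```python
# Toy model (not real crypto): a hypothetical "key-gated" post,
# showing why revoking a key can't claw back copies already made.

class KeyGatedPost:
    """A post readable only while an access key is still active."""

    def __init__(self, content: str):
        self._content = content
        self._active_keys: set[str] = set()

    def grant(self, key: str) -> None:
        self._active_keys.add(key)

    def revoke(self, key: str) -> None:
        self._active_keys.discard(key)

    def read(self, key: str) -> str:
        if key not in self._active_keys:
            raise PermissionError("key revoked or never granted")
        return self._content


post = KeyGatedPost("my federated hot take")
post.grant("friend-key")

# While the key is active, the reader can simply copy the plaintext.
local_copy = post.read("friend-key")

post.revoke("friend-key")

# The original is now locked to that reader...
try:
    post.read("friend-key")
except PermissionError:
    print("original no longer readable with revoked key")

# ...but the copy they already made is untouched by the revocation.
print(local_copy)  # -> "my federated hot take"
```

However elaborate the key scheme, the same holds: once the content has been rendered for someone, it's theirs to copy.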
Quibbling over the fact that you're the one who "truly owns" the data when it comes to something like social media feels like a mostly pointless endeavor, because the outcome (data is available for others to view/consume/read/etc.) is the same regardless of who "owns" it. Copyright law will apply to anything you produce if it comes to legal problems (someone copies your artwork and sells it, for example), and having a system to prove ownership is mostly a formality that makes that proof easier.

Generally, though, people aren't arguing through this lens; they're arguing through the privacy/security lens - they don't want people stealing or selling their data. Which, lol, good luck: AI models are proof that nobody actually cares about this ownership if they think they can get away with using your data and face no real incentive not to. Interestingly, copyright law and models being trained on corporate works such as movies are a vector by which the legality of all this might actually stop or slow AI development and end up protecting end users' data.
Yeah, but dick-pics…safe?