drosophila

joined 3 months ago
[–] drosophila@lemmy.blahaj.zone 5 points 3 days ago* (last edited 3 days ago) (2 children)

Some ARM CPUs that are advertised as microcontrollers have 32-bit address spaces and roughly the same power as an i486.

[–] drosophila@lemmy.blahaj.zone 7 points 4 days ago* (last edited 4 days ago) (1 children)

I agree to some extent, as there are plenty of distros that don't do anything significantly different from each other and don't need to exist. I also see what you mean about desktop environments. While I think there's space for all the small exotic window managers that exist, I would say we probably don't need as many big, fully integrated desktop environments as there are now. (Maybe we should have only one aimed at modern hardware and one designed to be lightweight.)

That being said, there is plenty of duplication of effort within commercial software too. I would argue that if commercial desktop GUIs currently offer a better user experience than Linux desktop environments, it's more in spite of their development model than because of it, and their advantage mostly comes down to companies being able to pay developers to work full time (instead of relying on donations and volunteers).

There are a couple reasons I think this:

  • In a "healthy" market economy there need to be many firms offering the same product / service. If only a small number (or, worse, just one) perform the same function, the firm(s) can begin to develop monopolistic power. For closed source software development this necessitates a great deal of duplicated effort.
  • The above point is not hypothetical. Before the rise of libre software there were a ton of commercial Unices and mainframe operating systems, all developed mostly independently of one another. Now, at least when it comes to running servers and supercomputers, almost everyone runs the same kernel (or very nearly the same) and some combination of the same handful of userspace services and utilities.
  • Even as there is duplication of effort between commercial firms, there is also duplication and wasted effort within them. For an extreme example, look at how many chat applications Google has produced. The same sort of duplication happens any time a UI or a whole application is remade for no reason other than that the people employed somewhere will be fired if they don't look like they're working on something new.
  • Speaking of changing applications, how many times has a commercial closed source application gone to shit, been abandoned by the company that maintains it, or had its owning company shut down, necessitating that a different firm rebuild the software from scratch? This wastes not only the developers' time but also that of the users, who have to migrate.

Generally I think open source software has a really nice combination of cooperation and competition. The competition encourages experimentation and innovation while the cooperation eliminates duplicated effort (by letting competitors copy each other if they so choose).

[–] drosophila@lemmy.blahaj.zone 14 points 1 week ago

I vibe with this a lot. I don't think the movie needed to exist in the first place, and if it did it would probably be better if it were fully animated, but nothing about the trailer provoked any strong emotions in me.

I'm not going to watch it but I also didn't go "wow this is an insult and a tragedy".

I guess I'm happy for all the tiny children that are gonna watch it and probably love it though.

[–] drosophila@lemmy.blahaj.zone 17 points 1 week ago* (last edited 1 week ago)

Big Bang Theory is less like nerd humor and more like autism blackface.

[–] drosophila@lemmy.blahaj.zone 5 points 1 week ago* (last edited 1 week ago)

This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

I do think the complexity of artificial neural networks is overstated. A real neuron is a lot more complex than an artificial one, and real neurons are not simply feed-forward like ANNs (which more or less have to be, since they're trained using back-propagation). Instead they have their own spontaneous activity (which kinda implies that real neural networks don't learn via stochastic gradient descent with back-propagation). But to say that there's nothing at all comparable between the way humans learn and the way ANNs learn is wrong IMO.
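(To make the feed-forward / back-propagation point concrete, here's a minimal numpy sketch; the XOR task, layer sizes, and learning rate are all just illustrative. Activations flow strictly forward and exact error gradients flow strictly backward, and that rigid structure is what biological neurons don't appear to share.)

```python
import numpy as np

# Toy 2-4-1 feed-forward network trained on XOR with plain
# back-propagation + gradient descent. Illustrative values only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: information only ever moves input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the error gradient is propagated back layer by layer,
    # which is only well-defined because the forward pass is feed-forward.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]] (init-dependent)
```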

If you read books such as V.S. Ramachandran and Sandra Blakeslee's Phantoms in the Brain or Oliver Sacks' The Man Who Mistook His Wife for a Hat you will see lots of descriptions of patients with anosognosia brought on by brain injury. These are people who, for example, are unable to see but also incapable of recognizing that inability. If you ask them to describe what they see in front of them they will make something up on the spot (in a process called confabulation) and not realize they've done it. They'll tell you what they've made up while believing that they're telling the truth. (Vision is just one example; anosognosia can manifest in many different cognitive domains.)

It is V.S. Ramachandran's belief that there are two processes at work in the brain: a confabulator (a "yes-man", so to speak) and an anomaly detector (a "critic"). The yes-man's job is to offer up explanations for sensory input that fit within the existing mental model of the world, whereas the critic's job is to advocate for changing the world-model to fit the sensory input. In patients with anosognosia something has gone wrong in the connection between the critic and the yes-man in a particular cognitive domain, and as a result the yes-man is the only one doing any work. Even in a healthy brain you can see the effects of the interplay between these two processes, such as in the placebo effect and in hallucinations brought on by sensory deprivation.

I think ANNs in general, and LLMs in particular, are similar to the yes-man process but lack a corresponding critic.
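(To make that analogy concrete, here's a loose computational caricature, entirely my own framing rather than anything from Ramachandran: a predictor that always answers from its current world-model, plus an anomaly detector that decides when the model itself has to change. Every name, number, and threshold below is made up for illustration.)

```python
import numpy as np

# Caricature of the yes-man / critic loop. Illustrative values only.
rng = np.random.default_rng(1)
world_model = np.zeros(3)   # crude internal estimate of the environment
THRESHOLD = 0.5             # how much mismatch the critic tolerates

def yes_man(model):
    """Always produces a confident answer from the existing world-model."""
    return model

def critic(prediction, observation):
    """Scores the anomaly: a large error means the model needs revising."""
    return np.linalg.norm(prediction - observation)

for t in range(20):
    observation = np.array([1.0, 2.0, 3.0]) + rng.normal(0, 0.1, 3)
    prediction = yes_man(world_model)
    if critic(prediction, observation) > THRESHOLD:
        # The critic wins: nudge the world-model toward the evidence.
        world_model += 0.3 * (observation - prediction)
    # With the critic lesioned (anosognosia), this branch never fires and
    # the yes-man keeps confabulating from a stale model forever.

print(world_model.round(2))  # converges near [1, 2, 3] only if the critic works
```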

What implications does that have for copyright law? I don't know. Real neurons in a petri dish have already been trained to play games like DOOM and to control the yoke of a simulated airplane. If they were instead somehow trained to draw pictures, what would the legal implications of that be?

There's a belief that laws and political systems are derived from some sort of deep philosophical insight, but I think most of the time they're really just whatever works in practice. So, what I'm trying to say is that we can just agree that what OpenAI does is bad and should be illegal without having to come up with a moral imperative that forces us to ban it.

[–] drosophila@lemmy.blahaj.zone 3 points 1 week ago (1 children)

It's going to get harder and harder to do that as cellphones get better though.

iPhones already have a satellite SOS feature that works worldwide, and satellite texting for non-emergency use is starting to roll out. A few Android models are slated to do the same, and it's only a matter of time before most phones can do this.

There are plenty of phones that are waterproof (or rated for submersion in 5 meters of water for 30 minutes or whatever) and that's only going to become more common too.

My phone lasts about 2 days on a charge with how much I use it, and I charge it every night. That's only going to get better as battery technology improves (the trend of spending battery improvements on thinner phones rather than longer battery life has actually somewhat reversed in recent years).

So, in a classic horror movie scenario with 5 or so people, you'd need a reason why every single person's phone is dead or broken. And even if the protagonists can't use their phones to get themselves out of the situation they're in (because they're broken or whatever), you still need to explain how they got into that situation in the first place when they have offline maps and GPS navigation. That's a smaller problem, but it eliminates "they got lost" as a premise for why they're in some spooky woods or wherever.

It seems to me that you'd either need to set the story in an abandoned mine or make the antagonist explicitly supernatural.

[–] drosophila@lemmy.blahaj.zone 11 points 1 week ago

Make the page 15x more bloated with JavaScript popups and it'll be "modern".

[–] drosophila@lemmy.blahaj.zone 10 points 1 week ago

While I agree that it's somewhat bad that there is no distinction between lossless and lossy jxl in the file extension, I think it's really not a big deal compared to the present situation with jpg/png.

The reason is that if you download a png file you have no idea whether it's been converted from jpg, whether it's a screenshot of a jpg, or whether it's been subjected to lossy reencoding by a tool or a website's upload process.

The only thing you can really do to check whether the file you've downloaded has suffered encoding loss is to run an image search on it and see if there are any better quality versions out there. You'd do the exact same thing with a jxl file.

[–] drosophila@lemmy.blahaj.zone 16 points 1 week ago

There are plenty of applications for machine learning, logic engines, etc. They've been used in many industries since the 1970s.

[–] drosophila@lemmy.blahaj.zone 11 points 2 weeks ago* (last edited 2 weeks ago)

The ads are subliminally manipulating the sort function of my spreadsheet that calculates the unit cost of every product in a category.

[–] drosophila@lemmy.blahaj.zone 2 points 2 weeks ago

You're right that I've never read the 2e and 3e sourcebooks; I've only read 5e and some OSR stuff, nothing in between.

Most of my DnD experience comes from playing in homebrew settings. Maybe the real problem in that case comes from trying to use a roleplaying system that has a bunch of cosmology and mysticism baked into it in a setting that either lacks those things or has metaphysics that actively clash with them.

But if so I think that's probably a pretty common experience with how 5e is played.
