this post was submitted on 14 Feb 2024
20 points (95.5% liked)

Permacomputing

655 readers

Computing to support life on Earth

Computing in the age of climate crisis is often wasteful and adds nothing useful to our real life communities. Here we try to find out how to change that.

Definition and purpose of permacomputing: http://viznut.fi/files/texts-en/permacomputing.html

XMPP chat: https://movim.slrpnk.net/chat/lowtech%40chat.disroot.org/room

Sister community over at lemmy.sdf.org: !permacomputing@lemmy.sdf.org

There's also a wiki: https://permacomputing.net/

Website: http://wiki.xxiivv.com/site/permacomputing.html

founded 1 year ago
top 10 comments
[–] agent_flounder@lemmy.world 10 points 9 months ago

Is it just me, or does this article seem a bit oversimplified, slightly inaccurate, and rant-y? The author makes a lot of claims that I feel need verifying, which is why I haven't answered my question yet.

For example:

Computers aren't much faster now than they were a decade ago, and they will probably never again return to the rate of performance improvement they had for 60 years up to the mid-noughties.

The author goes on to discuss Moore's law being dead and Amdahl's law which is sort of a law of diminishing returns on core counts.
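For reference, here's Amdahl's law in its usual form (my own sketch, not taken from the article): with a parallel fraction p, n cores give a speedup of 1 / ((1 - p) + p/n), which caps out at 1/(1 - p) no matter how many cores you add.

```python
# Amdahl's law (standard formulation): speedup from n cores when only
# a fraction p of the work can be parallelized.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, the ceiling is 1/(1-p) = 20x:
for n in (2, 8, 64, 1024):
    print(f"{n:>5} cores: {amdahl_speedup(0.95, n):.1f}x")
```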

While we may not be getting performance improvements like we used to, saying that computers now aren't much faster than they were a decade ago is sort of sensationalized.

What does "much" mean in numbers? Take a look at CPU benchmarks and you can see just how much processor speed has improved. I find it noticeable.
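As a crude illustration (my sketch; a Python loop is hardly a rigorous benchmark), running the identical single-threaded script on a decade-old machine and a current one gives a rough feel for the single-core gap:

```python
# Crude single-thread micro-benchmark: run the same script on an old
# machine and a new one and compare the wall times.
import time

def work(n: int = 10_000_000) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
work()
print(f"{time.perf_counter() - start:.2f} s")
```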

And the author didn't mention the microcode optimisation work that's been done since the mid-2010s, which seems an odd oversight.

More to the point of the article, another example is the comparison of Project Oberon vs modern Ubuntu meant to demonstrate how bloated code is. The facts cited may well be perfectly accurate and the claim that modern software is bloated is quite likely true, but the comparison is disingenuous for several reasons.

First, the project was specifically dedicated to making a very lean, understandable system--in the mid 80s when system memory was measured in kilobytes or maybe megabytes.

Multitasking operating systems of that era couldn't do anywhere near as much as they can now. And computers were almost all text-based, or had very low-resolution graphics. Many modern protocols didn't exist then: no HTTP, so no web servers; no SSH clients and daemons; etc. So comparing modern Ubuntu to an ancient project intended to be lean for its day isn't useful.

There's also no mention of size in terms of machine instructions, just source code lines... between two different languages, at that. So that's also an unfair comparison.
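For what it's worth, here's roughly how one could put the two measures side by side (my sketch; it assumes a Linux box with binutils installed, and the file paths are placeholders):

```python
# Rough sketch: source-line count vs. machine-instruction count.
# Assumes Linux with binutils installed; file paths are placeholders.
import subprocess

def source_lines(path: str) -> int:
    """Count non-blank lines in one source file."""
    with open(path) as f:
        return sum(1 for line in f if line.strip())

def instruction_count(binary: str) -> int:
    """Count instruction lines in objdump's disassembly output."""
    out = subprocess.run(["objdump", "-d", binary],
                         capture_output=True, text=True, check=True).stdout
    # Instruction lines look like "  401000:\t55\tpush %rbp".
    return sum(1 for line in out.splitlines() if ":\t" in line)

print("source lines:", source_lines("main.c"))
print("instructions:", instruction_count("./main"))
```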

And there's no mention of what was or wasn't included in the line counts for Ubuntu. Maybe it is the whole distro, maybe all packages, who knows.

Then there's the main claim, that systems are too big to understand. I mean, that's the whole raison d'être of systems engineering and software engineering approaches: break complex problems and solutions down into manageable chunks and distribute the work.

If we limited ourselves to solutions that a single person could understand in their entirety, we would be missing a lot of modern technologies. Like cars, satellites, airplanes, televisions, smartphones, etc.

[–] mp3@lemmy.ca 5 points 9 months ago (1 children)

Is the Unix philosophy a good solution to this problem?

[–] poVoq 7 points 9 months ago

Depends. On servers a similar concept is called microservices, and it usually increases complexity, since now you also need to code all the interaction between the nominally independent services.
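A sketch of the kind of overhead I mean (the service endpoint and names are made up):

```python
# In a monolith, composing two steps is just a function call:
def tokenize(text: str) -> list[str]:
    return text.split()

words = tokenize("hello world")

# Split out as a microservice, the same call now needs transport,
# serialization, and failure handling (hypothetical endpoint below):
import json, urllib.request, urllib.error

def tokenize_remote(text: str) -> list[str]:
    req = urllib.request.Request(
        "http://tokenizer.internal/v1/tokenize",  # made-up service URL
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return json.loads(resp.read())["tokens"]
    except urllib.error.URLError:
        # A failure mode the in-process version never had.
        return []
```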

[–] perestroika 4 points 9 months ago* (last edited 9 months ago) (2 children)

These projects are so incomprehensibly vast that no human mind can comprehend even one small isolated subset of the entire thing.

Which means - no human mind can trust them either, and no human programmer alone can conduct a security review.

Which means they should not be trusted, and should be considered insecure - unless they can be carefully isolated from the environment so that only a trusted surface is exposed.

My ideal project size: something that an average coder can read in a week or two, and come back to their (possibly anarchist) colleagues saying: "this code looks reliable and won't be leaking buckets".

[–] demesisx@infosec.pub 8 points 9 months ago* (last edited 9 months ago) (1 children)
[–] perestroika 2 points 9 months ago (1 children)

Some quite interesting articles, thank you - I read two and quick-scanned some more.

Formal verification seems like the way to go if you do build a system top-down.
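To give a feel for what that looks like, here's a toy Lean 4 proof (my example, nothing to do with the linked articles): the checker refuses to accept the file until the stated property is actually proven.

```lean
-- A toy machine-checked proof (Lean 4): the checker rejects the file
-- unless the proof term actually establishes the stated property.
theorem add_comm_toy (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```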

What tends to ruin the day is when you need to use components of somewhat random origin. In addition to the OS, you have device drivers, often supplied by vendors who cannot be bothered to go beyond the minimum; you have applications, some of which are legacy and some poorly maintained, written in 12 different programming languages and using 12 different network protocols, some having great traction (thus being necessary) but designed the way they were in those days...

...in a chaotic environment, formal verification seems like a big burden to carry. It therefore seems like a method which a rich organization can afford, but not something a small or poor organization can use.

For smaller players, I think minimizing and simplifying their tools may be more accessible. If one can throw out most components, limit the contact surface to something small, use the simplest tools on that surface and make their review easier - I think one can get reasonably high assurance without all-encompassing verification...

...but then again, if you build a manned aircraft, spacecraft or submersible, or a medical apparatus or life support system, then I think formal verification would be a really good idea. :)
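As an aside, one concrete shape the lighter-weight option can take (my example, not from the articles) is property-based testing, e.g. with Python's hypothesis library:

```python
# Not formal verification, but an assurance step a small team can afford:
# property-based testing with `hypothesis` (pip install hypothesis).
from hypothesis import given, strategies as st

def run_length_encode(s: str) -> list[tuple[str, int]]:
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)

@given(st.text())
def test_roundtrip(s: str) -> None:
    # The stated property: decode(encode(s)) == s for every generated s.
    assert run_length_decode(run_length_encode(s)) == s

if __name__ == "__main__":
    test_roundtrip()  # hypothesis runs ~100 generated cases
```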

[–] demesisx@infosec.pub 6 points 9 months ago* (last edited 9 months ago)

Thanks for taking a look. I agree that it is REALLY challenging (and currently impossible) to maintain that kind of formality in most codebases. Where it matters, though, it really helps.

I got interested in formal verification in the context of a future (currently vaporware 😅), formally verified version of the Cardano node running on a formally verified flavor of the RISC-V architecture. I'm just trying to put it on the white beards' radar over there at IOG (it's WAY out of my wheelhouse, and perhaps even out of many of theirs!) because this kind of thing, if fully open source (as that whole codebase is), could make this level of sophistication much more achievable for mere mortals.

Another amazing company that got me into this level of sophistication was Runtime Verification. If you like looking at projects that are absolutely world-changing, look into their K framework. They've created a whole framework for formally specifying entire programming languages (and even LLVM!) and can pretty much use it as a Babel Fish for converting code from one programming language to another, which I always thought was impossible. But if there's any outfit that could accomplish it, it might be those geniuses, who regularly work with organizations like NASA and aerospace behemoths.

[–] toaster 2 points 9 months ago

Modularity and microservices are great for this. That way, you can inspect an individual component with its own responsibility (true to the single-responsibility principle) without trudging through a monolithic codebase.
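A minimal sketch of the idea (all names invented; each function would live in its own small module):

```python
# Single-responsibility sketch: each function below would sit in its own
# small module, so a reviewer can audit one concern in isolation.

def parse_order(line: str) -> tuple[str, int]:
    """Parsing only: turn 'widget,3' into ('widget', 3)."""
    name, qty = line.split(",")
    return name, int(qty)

def price_order(name: str, qty: int, unit_prices: dict[str, float]) -> float:
    """Pricing only: no parsing, no I/O."""
    return unit_prices[name] * qty

print(price_order(*parse_order("widget,3"), {"widget": 2.50}))
```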

[–] ProdigalFrog 4 points 9 months ago (1 children)

Excellent article. I hope the pushback against software complexity gains more momentum. I can only think of a couple of other people who rail on this topic, like Drew DeVault in his blog posts and Jonathan Blow in his talk.

[–] poVoq 7 points 9 months ago

There is the entire "suckless" software movement that has also been railing against software complexity, but they have other issues.