this post was submitted on 21 Sep 2023
20 points (81.2% liked)

all 23 comments
[–] Vent@lemm.ee 36 points 1 year ago (1 children)

This is a ridiculous definition of "real-time". To accomplish this you'd need to subvert the kernel's scheduler, otherwise you'll always end up with "unbounded" response times, since a single program can't control what else is running or which clock cycles are allocated to it. What you end up with is an OS that only runs one process per thread.

I’m tempted to abandon using Windows, macOS and Linux as the main platforms with which I interact.

Yeah, okay buddy. And I'm tempted to stop eating and sleeping because I'd like the extra free time.

[–] lysdexic@programming.dev 0 points 1 year ago* (last edited 1 year ago) (1 children)

This is a ridiculous definition of “real-time”. To accomplish this you’d need to subvert the kernel’s scheduler (...)

You missed the whole point of the article.

It makes no sense to read the article and arrive at the conclusion that "I need to subvert the kernel's scheduler". The whole point of the real-time analogy is that handlers have a hard constraint on the time budget allocated to execute each handler. If your handler is within budget then it's perfectly reasonable to run on the UI thread. If your handler exceeds the budget then user experience starts to suffer, and you need to rework your implementation to run stuff async.

Keep in mind that each mouse click/hover/move/sneeze triggers a handler in GUI applications. Clicking a button can trigger small, instant changes like updating the UI, or it can kick off an expensive long-running operation. Some handlers start off doing small UI updates but end up doing more and more stuff that ultimately starts to become noticeable.
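
A minimal sketch of that split, in browser TypeScript (the element IDs and helper functions here are invented, not from the article):

```typescript
const checkbox = document.querySelector<HTMLInputElement>("#notifications")!;
const label = document.querySelector<HTMLSpanElement>("#notifications-label")!;
const exportButton = document.querySelector<HTMLButtonElement>("#export")!;

// Cheap, bounded work: perfectly fine on the UI thread.
checkbox.addEventListener("change", () => {
  label.textContent = checkbox.checked ? "On" : "Off";
});

// Work with no obvious upper bound: by the budget argument, it must not
// run synchronously inside the handler.
exportButton.addEventListener("click", async () => {
  exportButton.disabled = true;                // instant UI feedback
  const report = await buildReportOffThread(); // hypothetical async helper
  download(report);
  exportButton.disabled = false;
});

// Placeholder stand-ins so the sketch is self-contained.
async function buildReportOffThread(): Promise<Blob> {
  return new Blob(["report"]); // real code would do the heavy lifting elsewhere
}
function download(blob: Blob): void { /* trigger a save dialog */ }
```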

[–] Vent@lemm.ee 5 points 1 year ago* (last edited 1 year ago) (1 children)

The article is not talking about async processing. It's talking about the process scheduler and thread blocking. It even has a section titled "Real-time Scheduling" that talks specifically about the process scheduler.

It's simply not possible to fit the author's definition of real-time without using something like an RTOS, and the author seems to understand that. The main feature of an RTOS is a different scheduler implementation that can guarantee CPU time to events. The catch is that an RTOS isn't going to handle general-purpose use cases like a personal computer very well, since it requires purpose-built programs and won't be great at juggling a lot of different processes at the same time.

[–] lysdexic@programming.dev -2 points 1 year ago

The article is not talking about async processing. It’s talking about the process scheduler and thread blocking.

No, not really.

The article doesn't even cover process scheduling at all. The whole point of the article, which is immediately obvious to anyone who has ever worked on a GUI, is what code runs in event handlers and how doing too much in them has a noticeable detrimental impact on user experience (i.e., blocks the main thread).

It's also obvious to anyone who has ever worked on a GUI that you free the main thread of these problems by refactoring the application to run some or all of the code in a problematic handler asynchronously.
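
As a hypothetical sketch of that refactoring (the endpoint, element, and helpers are stand-ins for whatever slow work the handler accumulated):

```typescript
const openButton = document.querySelector<HTMLButtonElement>("#open-report")!;

// Before: doing the slow work inline freezes input handling and painting
// until it finishes.
// openButton.addEventListener("click", () => render(loadReportSync()));

// After: the handler returns immediately and the main thread stays free.
openButton.addEventListener("click", async () => {
  showSpinner();
  try {
    const response = await fetch("/api/report"); // I/O happens off-thread
    render(await response.json());               // cheap paint when it's done
  } finally {
    hideSpinner();
  }
});

function showSpinner(): void { /* reveal a progress indicator */ }
function hideSpinner(): void { /* hide it again */ }
function render(report: unknown): void { /* update the DOM */ }
```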

[–] harrim4n@feddit.de 10 points 1 year ago (2 children)

Interesting points, I've definitely run into memory constraints which resulted in completely unresponsive hosts a few times over the years. But as the author said, I don't see this changing any time soon due to the large-scale architectural changes required... The author also mentioned "I'm tempted to abandon using Windows, macOS and Linux as the main platforms with which I interact." Does anyone know which "daily driver"-compatible operating system the author could be referring to?

[–] Sanctus@lemmy.world 2 points 1 year ago (1 children)

FreeBSD? Gentoo? There aren't really that many maintained options, depending on their definition of Linux. Might as well not use any computers at that point, because you'd be on TempleOS with no internet and barely any colors.

[–] odium@programming.dev 1 points 1 year ago (2 children)
[–] 0x0@programming.dev 2 points 1 year ago

Gentoo's Linux, yeah. In their wiki they mention support for real-time kernels.

[–] Sanctus@lemmy.world 1 points 1 year ago (1 children)

I had never even looked at it before, so I just did. It is. It looks more complicated than Arch. But everything compiles locally, which for some reason makes me swoon. I only got into Linux 2 years ago, and just got on Arch. So that may be my next target.

[–] 0x0@programming.dev 1 points 1 year ago

USE flags are addictive.

[–] ICastFist@programming.dev 1 points 1 year ago

It's anyone's guess, really. I can think of a few "exotic" OSes, like Solaris, Haiku (BeOS-like), AROS (Amiga-like and compatible), KolibriOS and RISC OS, but I doubt the author would use any of those in any capacity.

[–] Solemarc@lemmy.world 5 points 1 year ago (1 children)

Maybe I'm dumb because I'm a backend dev, but if we can't offload these tasks to async tasks and we need to block the main thread, why can't we just put up a loading screen? Games have been showing "Don't turn off the application, we are saving" for a decade, and you can't convince me that your enterprise application is heavier than a AAA game.

[–] lysdexic@programming.dev 0 points 1 year ago* (last edited 1 year ago) (1 children)

Maybe I’m dumb because I’m a backend dev, but if we can’t offload these tasks to async tasks and we need to block the main thread, why can’t we just put up a loading screen?

That's not the problem. These tasks can be offloaded to async. The underlying issue, and the reason why I think this is an outstanding article, is that running code on the UI thread straight from handlers is easy and more often than not goes perfectly unnoticed. Only when the execution time of those handlers grows do these blocking calls become an issue.

There's a gray area between "obviously we need to make these calls async" and "obviously we can run this on the main thread", and that's where the real-time mental model and its techniques pay off.
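
For the CPU-bound side of that gray area, a sketch of the usual escape hatch (the worker file name and message shape are invented):

```typescript
// Once a handler's compute overspends its budget, move it to a worker.
const worker = new Worker("thumbnailer.js");
const fileInput = document.querySelector<HTMLInputElement>("#photo")!;

fileInput.addEventListener("change", () => {
  const file = fileInput.files?.[0];
  if (file) worker.postMessage(file); // heavy decode/resize runs off-thread
});

worker.onmessage = (e: MessageEvent<ImageBitmap>) => {
  drawThumbnail(e.data); // back on the main thread: cheap paint only
};

function drawThumbnail(bitmap: ImageBitmap): void { /* blit to a canvas */ }
```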

“Don’t turn off the application we are saving” games have been doing this for a decade and you can’t convince me that your enterprise application is heavier than a AAA game.

You're missing the whole point.

The point is that running handlers on the main thread leads to far simpler code and, depending on the use case, works well for 99.9% of conceivable scenarios. But then the software starts to be modified and gain features, and some of these code paths start to do more things and take more time to run. When this happens, the 99.9% starts to shrink and some main-thread blockages start to become more and more noticeable.

The article does a very good job of laying out the mental model that needs to be in place to keep this slippery slope from becoming a problem.

[–] atheken@programming.dev 8 points 1 year ago* (last edited 1 year ago) (1 children)

The problem with the article is that it confuses hard real-time and low-latency requirements. Most UIs do not require hard real-time; even soft real-time is a nice-to-have, and users will tolerate some latency.

I also think the author handwaves “too many blocking calls end up on the main thread.”

Hardly. This is like rule zero for building GUI apps. Put any non-trivial or blocking work on a background thread. It was harder to do before mainstream languages got good green thread/async support, but it's almost trivial now.
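
A sketch of how cheap this has become with async/await; even work that has to touch the DOM can yield to the event loop between slices (`Row` and `renderRow` are placeholders):

```typescript
type Row = { id: number; text: string };

async function renderAllRows(rows: Row[]): Promise<void> {
  for (let i = 0; i < rows.length; i++) {
    renderRow(rows[i]);
    // Hand control back every 100 rows so input and paints stay responsive.
    if (i % 100 === 99) await new Promise((r) => setTimeout(r, 0));
  }
}

function renderRow(row: Row): void { /* append one row to the list */ }
```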

I agree that there are still calls that could have variable response times (such as virtual memory being paged in or out), but even low-end machines are RAM-rich and SSDs are damn fast. The kernel is likely also doing some optimization to page stuff in from disk for the foreground app.

It’s nice to think through the issue, but I don’t think it’s quite as dire as the author claims.

[–] lysdexic@programming.dev -1 points 1 year ago* (last edited 1 year ago) (1 children)

The problem with the article is that it confuses hard real-time and low-latency requirements. Most UIs do not require hard real-time; even soft real-time is a nice-to-have, and users will tolerate some latency.

I don't think that's a valid take from the article.

The whole point of the article is that if a handler from a GUI application runs for too long then the application will noticeably block and degrade the user experience.

The real-time mindset is critical to being mindful of this failure mode: handlers should have a time budget (compute, waiting for IO, etc.), beyond which the user experience degrades.

The whole point is that GUI applications, just like real-time applications, must be designed with these execution budgets in mind, and once they are not met, the application needs to be redesigned to avoid these issues.
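
One way to actually measure that rather than guess: Chromium-based browsers expose "long tasks" (main-thread blockages over 50 ms) through PerformanceObserver. Platform support for the "longtask" entry type is an assumption here:

```typescript
// Log every main-thread blockage the browser considers a long task.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.warn(`main thread blocked for ${entry.duration.toFixed(0)} ms`);
  }
});
observer.observe({ entryTypes: ["longtask"] });
```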

[–] atheken@programming.dev 4 points 1 year ago (1 children)

Which is what putting most of this stuff on the background accomplishes. It necessitates designing the UX with appropriate feedback. Sometimes you can't make things go faster than they go. For example, a web request, or pulling data from an ancient disk that a user is using: you as the author don't have control over these; the OS doesn't even have control over them.

Should software that depends on external resources refuse to run?

The author is talking about switching to some RTOS due to this, which is extreme. OS vendors have spent decades trying to sort out the "Beachball of Death" issue, which is exceedingly rare on modern systems due to better multi-tasking support and dramatically faster hardware.

Most GUI apps are not hard RT and trying to make them so would be incredibly costly and severely limit other aspects of systems that users regularly prefer (like keeping 100 apps and browser tabs open).

[–] lysdexic@programming.dev 1 points 1 year ago* (last edited 1 year ago) (1 children)

Which is what putting most of this stuff on the background accomplishes.

The part you're missing entirely is the complexity that's hidden behind the weasel word "most".

The majority of event handlers in a GUI app do not do anything complex, computationally expensive, or blocking. They do things like setting flags, triggering changes in the UI state (i.e., show/hide/update widgets), bumping counters, etc.

No one in their right mind would ever consider going through the trouble of doing this stuff in separate threads/processes. "Most" handlers run perfectly fine on the main thread.

Nevertheless, software changes, and today's onClick handler that sets a flag to true/false tomorrow is required to emit a metric event, or to switch treatments depending on the state of a feature flag or A/B test, or to write a setting to disk, or something like that.

How do you draw the line in the sand that tells whether this handler should run on the main thread, should trigger a fire-and-forget background task, or should be covered by a dedicated user flow with a complete storyboard?

That's the stuff that's hand-waved away with weasel words like "most".

This blog post delivers a crisp mental model to tell which approach is suitable: follow the real-time computing rulebook, acknowledge that each and every handler has a time budget, and if a handler overspends its budget then it needs to be refactored.
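
One way to make that rulebook concrete, sketched in TypeScript (the wrapper and the per-handler budgets are design choices of mine, not something the post prescribes):

```typescript
// Wrap handlers so the budget is explicit and overspending gets logged.
function withBudget<E extends Event>(
  budgetMs: number,
  handler: (e: E) => void,
): (e: E) => void {
  return (e) => {
    const start = performance.now();
    handler(e);
    const spent = performance.now() - start;
    if (spent > budgetMs) {
      // Overspent: this handler is now a candidate for async refactoring.
      console.warn(`handler spent ${spent.toFixed(1)} ms (budget: ${budgetMs} ms)`);
    }
  };
}

// Today's cheap flag flip stays on the main thread; the wrapper flags it
// the day it grows an expensive side effect.
const toggle = document.querySelector<HTMLInputElement>("#dark-mode")!;
toggle.addEventListener("change", withBudget(4, () => {
  document.body.classList.toggle("dark", toggle.checked);
}));
```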

[–] Jummit@lemmy.one 2 points 1 year ago (1 children)

Interesting viewpoint, but I think the applications aren't at fault: The operating system should ensure that the user has control of the computer at all times. I think you need to do three things to achieve that:

  1. Limit process RAM usage, so the system never has to swap
  2. Limit process CPU usage, so the system never stalls
  3. When drivers / the operating system itself crash, revert to a usable state (this one is probably the most complex one)
[–] lysdexic@programming.dev 5 points 1 year ago

Interesting viewpoint, but I think the applications aren’t at fault: The operating system should ensure that the user has control of the computer at all times.

The whole point is that the OS does ensure that the user has control of the computer, at least as far as a time-sharing system goes. The problem is that the user (or the software they run) often runs code on the main thread that blocks it.

The real-time mentality towards constraints on how much can be executed by a handler is critical to avoid these issues, and it should drive the decision on whether to keep running a handler on the main thread or get it to trigger an async call.

[–] verdare@beehaw.org -2 points 1 year ago* (last edited 1 year ago) (1 children)

It is somewhat baffling that most interactive, consumer-facing operating systems are not real-time. I suppose that it’s a product of legacy and technical debt.

Apple did announce that they’re using an RTOS in the Vision Pro. Maybe the VR/AR space will make this more common, since the latency requirements are more stringent.

[–] TerrorBite@meow.social 3 points 1 year ago (1 children)

@verdare @lysdexic they are, but you have to be an enterprise customer.

https://ubuntu.com/blog/real-time-ubuntu-is-now-generally-available

https://learn.microsoft.com/en-us/windows/iot/iot-enterprise/soft-real-time/soft-real-time

RTOSes are not going to become consumer operating systems, because there's too much value in selling real-time capability to enterprise customers (who are largely the ones who REQUIRE an RTOS, rather than it merely being a convenience).

[–] 0x0@programming.dev 6 points 1 year ago

How can it be convenient for a desktop user to severely limit the number of running processes? Desktop usage scenarios are the opposite of RT usage.