Yes, I've seen that pattern before, but:
If they're meant to run on the same machine and are bundled together in the same container image, I would call that a questionable design choice.
Well, I have only my own experience to go on, but I am not usually bothered by compile times. I used to compile my own Linux kernels, for goodness' sake. I would just leave the build to do its thing and go do something else while I waited. Not a big deal.
Again, there are exceptions like Chromium, which take an obscenely long time to compile, but I assume we're talking about something that takes minutes to compile, not hours or days.
No, I'm not. If you're not using JIT compilation, the overhead of dynamic linking is severe, not because of how long it takes to call a dynamically-linked function (you're right, that part is reasonably fast), but because inlining across a dynamic link is impossible, and inlining is, as matklad once put it, the mother of all other optimizations. Dynamic linking leaves potentially a lot of performance on the table.
This wasn't the case before link-time optimization was a thing, mind you, but it is now.
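To make the inlining point concrete, here's a minimal sketch (assuming GCC or Clang on Linux; the file and library names are hypothetical):

```c
/* lib.c -- a trivial library function */
int add_one(int x) { return x + 1; }

/* main.c */
int add_one(int x);  /* would normally come from the library's header */

int main(void) {
    long sum = 0;
    for (int i = 0; i < 100000000; i++)
        sum += add_one(i);  /* the call we'd like inlined */
    return (int)(sum & 0xff);
}

/* Built as a shared library, the call site only sees an opaque jump
 * through the PLT, so it can never be inlined:
 *   cc -O2 -fPIC -shared lib.c -o libtiny.so
 *   cc -O2 main.c -L. -ltiny -o dynamic
 * Statically linked with LTO, the optimizer sees the definition at
 * link time, inlines it, and can optimize the loop as a whole:
 *   cc -O2 -flto lib.c main.c -o static
 */
```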
Okay, but I'm much more concerned with execution speed and memory usage than with how long it takes to download or compile an executable.
At the time, I was thinking about some kind of toolkit installed through Distrobox. Distrobox basically lets you use anything from a container as if it weren't in one. It uses Podman, so I guess it might not be possible to use Docker for GUI apps, although I can't really tell.
Yes, but static linking means you'll get security and performance patches with some delay, while dynamic linking means you'll get patches ASAP.
Some claim this doesn't work in practice because of the ABI issues I mentioned earlier. You brought up Semver as a solution, but that too doesn't seem to work in practice; see for example OpenSSL, which follows Semver and still has ABI issues that can result in undefined behavior. Ironically this can create security vulnerabilities.
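To illustrate, here's a minimal sketch of how a source-compatible change, a legitimate minor bump under Semver, can still break the ABI (libfoo and all the names here are hypothetical, not the actual OpenSSL case):

```c
/* foo.h as shipped with libfoo 1.0; applications are compiled
 * against this layout: */
struct foo {
    int  id;
    char name[16];
};
void foo_init(struct foo *f);

/* foo.h in libfoo 1.1: a field is inserted. All callers still
 * compile unchanged, so Semver calls this a minor release... */
struct foo_as_of_1_1 {
    int  id;
    int  flags;        /* new field shifts name to a new offset */
    char name[16];
};

/* ...but a binary built against 1.0 that loads the 1.1 .so now
 * disagrees with the library about sizeof(struct foo) and the
 * offset of name. foo_init() in the new library writes past the
 * caller's old, smaller struct: undefined behavior, typically
 * silent memory corruption rather than a clean error. */
```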
Yeah, but the security improvement from being able to apply a fix for a severe vulnerability ASAP outweighs the weakening from possible incompatibilities. Also, I don't know why I never brought this up before: shared libs are shared, so you can use them across many programming languages. So no, static linking is not the way to replace containers with dynamic linking, but yes, they do share some use cases.
Um, we're talking about undefined behavior here. That creates potential RCE vulnerabilities—the most severe kind of vulnerability. So no, a botched dynamically-linked library update can easily create a vulnerability worse than the one it's meant to fix.
Shared libraries are shared among processes, not programming languages.
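One way to see that sharing directly (a minimal, Linux-only sketch; the file name is made up): print this process's memory mappings that mention libc. Run two instances and both show the same .so file mapped in; the kernel backs its read-only and executable segments with the same physical pages for every process that loads it.

```c
/* maps_demo.c -- list this process's libc mappings */
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) { perror("fopen"); return 1; }
    char line[512];
    while (fgets(line, sizeof line, f)) {
        if (strstr(line, "libc"))  /* lines for the shared C library */
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}
```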