[–] beerclue@lemmy.world 5 points 2 months ago

I use Docker at home and at work, and Nexus at work too. I really don't understand... even a malfunctioning service shouldn't pull the image over and over; there should be a cache. It could be some fringe case, but I've never experienced it.

[–] gencha@lemm.ee -1 points 2 months ago

Ultimately, it doesn't matter what caused you to be blocked from Docker Hub by rate limiting. Once you're in that scenario, buying your way out is the most cost-efficient option.

If you can't even imagine what would lead up to such a situation, congratulations, because it really sucks.

Yes, there should be a cache. But people sometimes force-pull images on service start to make sure they get the newest "latest" tag. And every tag floats, not just "latest": lots of people don't pin digests in their OCI references, which almost implies wanting to refresh cached tags regularly. Especially when starting critical services, you might pull their tag again in case it has drifted.
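As a concrete sketch of that trade-off (the service names, image tag, and digest below are placeholders, not anything from this thread), here's what the two approaches look like in a Compose file. A version tag still floats, because the publisher can re-push it; only a digest pins the exact content, and `pull_policy` decides whether a restart hits the registry at all:

```yaml
services:
  dns-floating:
    # Even a version tag floats: "2024.07.0" can be re-pushed upstream,
    # so the next pull may silently return different content.
    image: pihole/pihole:2024.07.0
    # "always" re-pulls on every start -- each restart costs a Docker Hub pull.
    pull_policy: always

  dns-pinned:
    # A digest reference is immutable (the digest here is a dummy placeholder).
    image: pihole/pihole@sha256:0000000000000000000000000000000000000000000000000000000000000000
    # "missing" only pulls when the image isn't already in the local cache.
    pull_policy: missing
```

With the pinned variant, a restart storm never touches the registry; the cost is that you have to bump the digest yourself to pick up updates.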

Now consider you have multiple hosts in your home lab, all running a good couple of services. You roll out a container runtime upgrade to your network; it resets all the caches and restarts all the services. Some pulls fail, and some of those are for DNS and other critical services. Suddenly your entire network is down and you can't even get on the Internet, because your Pi-hole doesn't start. And you can't recover, because you're rate-limited.

I've been there a couple of times before I worked out better resilience, but relying on docker.io is still a problem in general. I did pay them for quite some time.
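For anyone wondering what that resilience work can look like: one common approach (a minimal sketch, not necessarily what I run) is a shared pull-through cache on the LAN using the official `registry:2` image, so every host pulls through it and Docker Hub sees each image fetched at most once per update:

```yaml
# config.yml for a registry:2 container acting as a pull-through cache
# (mirror) of Docker Hub. All hosts on the network pull through this,
# so an image only counts against the Hub rate limit once.
version: 0.1
proxy:
  remoteurl: https://registry-1.docker.io
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
```

Each Docker daemon then lists the cache under `registry-mirrors` in `/etc/docker/daemon.json` (the `cache.lan` hostname is made up here): `{"registry-mirrors": ["http://cache.lan:5000"]}`. If the mirror is down, the daemon falls back to pulling from Docker Hub directly, so it can only help.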

This is only one scenario where their service bit me. As a developer, it gets even more unpleasant, and I'm not even talking about commercial use.