this post was submitted on 06 May 2024

Linux

The issue at hand: my /var/tmp folder is filling up with literally hundreds of folders called "container_images_storage_xxxxxxxxxx", where the x's represent a random number. Each folder contains three files called 1, 2 and 3, as seen in the thumbnail. The folders vary in size too: the smallest I can see is 142.2 MiB, the largest 2.1 GB. This is a problem, as they are taking up all my disk space, and even if I delete them, they come back the next day... I believe this has something to do with podman, but I'm really not sure. All I use the PC for is browsing and gaming.

Is there a way to figure out where a file or folder is coming from on Linux? I've tried stat and file, but neither gave me any worthwhile information, AFAIK. I would really appreciate some help figuring out what causes this; I am still new to the Linux desktop and have no idea what is creating these folders. I am on an atomic desktop, using Bazzite:latest.
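As a first step, it can help to measure how much space the suspect folders are actually consuming. A minimal sketch, assuming the path pattern from the post above:

```shell
# Sum up each container_images_storage* folder under /var/tmp, largest first.
# du -s gives a per-directory total; sort -rh orders human-readable sizes
# descending (GNU coreutils).
du -sh /var/tmp/container_images_storage* 2>/dev/null | sort -rh | head
```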

stat:

stat 1
  File: 1
  Size: 1944283388	Blocks: 3797432    IO Block: 4096   regular file
Device: 0,74	Inode: 10938462619658088921  Links: 1
Access: (0600/-rw-------)  Uid: ( 1000/    buzz)   Gid: ( 1000/    buzz)
Context: system_u:object_r:fusefs_t:s0
Access: 2024-05-06 12:18:37.444074823 +0200
Modify: 2024-05-06 12:22:51.026500682 +0200
Change: 2024-05-06 12:22:51.026500682 +0200
 Birth: -

file:

file 1
1: gzip compressed data, original size modulo 2^32 2426514442 gzip compressed data, reserved method, ASCII, extra field, encrypted, from FAT filesystem (MS-DOS, OS/2, NT), original size modulo 2^32 2426514442
[–] Sunny 14 points 5 months ago* (last edited 5 months ago) (1 children)

ouh, nice find! When I do podman info I do find one line that says

imageCopyTmpDir: /var/tmp

so this must be it? I have had one distrobox set up using boxbuddy, could that be it?

[–] atzanteol@sh.itjust.works 18 points 5 months ago (1 children)

Give this a go:

podman system prune

See if it frees up any space. It does seem like you're running containers (which makes sense given you're on an immutable distro), so I would expect them to use a lot of temporary space for container images.
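If the plain prune doesn't reclaim much, podman's prune also accepts flags for a more aggressive sweep. A sketch (destructive: anything not currently in use gets deleted, so read the confirmation prompt):

```shell
# Remove all unused images (not just dangling ones) and unused volumes.
podman system prune -a --volumes
# Afterwards, show what podman storage still occupies.
podman system df
```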

[–] Sunny 1 points 5 months ago* (last edited 5 months ago) (1 children)

Right, so podman system prune does save some space, but not much. I still see the folders popping in right after running the command. Also, podman ps --all doesn't list a single container :<

[–] atzanteol@sh.itjust.works 3 points 5 months ago (1 children)

My guess is that you're using some other form of containers then; there are several. It's a common practice with immutable distros, though I don't know much about Bazzite itself.

Are these files large? Are they causing a problem? Growing without end? Or just "sitting there" and you're wondering why?

[–] Sunny 2 points 5 months ago (1 children)

Growing without end; each file varies in size, one bigger than the next, as I wrote in the description of the post. They will continue to stack up until they fill my entire 1TB SSD, and then KDE will complain I have no storage left.

I don't have docker installed, and podman ps --all says I have no containers... so I'm kind of lost at sea with this one.

[–] atzanteol@sh.itjust.works 3 points 5 months ago (1 children)

Those aren't the only containers. It could be containerd, LXC, etc.

One thing that might help track it down is running sudo lsof | grep '/var/tmp'. If any of those files are currently open, it should list the process that holds the file handle.

"lsof" stands for "list open files". Run without parameters, it just lists everything.
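Instead of grepping the full listing, lsof can also be pointed directly at the directory, and fuser (from the psmisc package) offers an alternative. A sketch:

```shell
# List only open files under /var/tmp; +D recurses into the directory.
# sudo is needed to see other users' processes.
sudo lsof +D /var/tmp
# Alternative: fuser prints the PIDs holding a given file open.
# sudo fuser -v /var/tmp/container_images_storage*/1
```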

[–] Sunny 1 points 5 months ago (1 children)

Thanks for helping out! The command you gave me, plus opening one of the files, gives the following output. I don't really know what to make of it:

buzz@fedora:~$ sudo lsof | grep '/var/tmp/'
lsof: WARNING: can't stat() fuse.portal file system /run/user/1000/doc
      Output information may be incomplete.
podman    10445                            buzz   15w      REG               0,41   867465454    2315172 /var/tmp/container_images_storage1375523811/1
podman    10445 10446 podman               buzz   15w      REG               0,41   867399918    2315172 /var/tmp/container_images_storage1375523811/1
podman    10445 10447 podman               buzz   15w      REG               0,41   867399918    2315172 /var/tmp/container_images_storage1375523811/1
podman    10445 10448 podman               buzz   15w      REG               0,41   867399918    2315172 /var/tmp/container_images_storage1375523811/1
podman    10445 10449 podman               buzz   15w      REG               0,41   867399918    2315172 /var/tmp/container_images_storage1375523811/1
podman    10445 10450 podman               buzz   15w      REG               0,41   867416302    2315172 /var/tmp/container_images_storage1375523811/1
podman    10445 10451 podman               buzz   15w      REG               0,41   867416302    2315172 /var/tmp/container_images_storage1375523811/1
podman    10445 10452 podman               buzz   15w      REG               0,41   867416302    2315172 /var/tmp/container_images_storage1375523811/1
podman    10445 10453 podman               buzz   15w      REG               0,41   867432686    2315172 /var/tmp/container_images_storage1375523811/1
podman    10445 10454 podman               buzz   15w      REG               0,41   867432686    2315172 /var/tmp/container_images_storage1375523811/1
podman    10445 10455 podman               buzz   15w      REG               0,41   867432686    2315172 /var/tmp/container_images_storage1375523811/1

continues...
[–] atzanteol@sh.itjust.works 3 points 5 months ago (1 children)

Aha! Looks like it is podman then.

So - there are a few different types of resources podman manages.

  • containers - These are instances of an image and the thing that "runs". podman container ls
  • images - These are disk images (actually multiple but don't worry about that) that are used to run a container. podman image ls
  • volumes - Persistent storage that survives between runs, since containers themselves are often ephemeral. podman volume ls

When you do a "prune", it only removes resources that aren't in use. It could be that you have some container referencing a volume that keeps it around. Maybe there's a process that spins up and runs a container on a schedule, dunno. The podman commands above might help you find the name of whatever is responsible.
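Put together, an inventory pass over those three resource types might look like this (podman system df -v is a standard subcommand that adds a per-item disk usage breakdown):

```shell
# Inventory everything podman is holding on to.
podman container ls --all   # all containers, including stopped ones
podman image ls             # downloaded images
podman volume ls            # named and anonymous volumes
podman system df -v         # verbose per-item disk usage
```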

[–] Sunny 2 points 5 months ago (1 children)

aha! Found three volumes! I had not checked volumes until now; frankly, I've never used podman, so this is all new to me... Running podman volume inspect on the first volume gives me this:

[
     {
          "Name": "e22436bd2487a197084decd0383a32a39be8a4fcb1ded6a05721c2a7363f43c8",
          "Driver": "local",
          "Mountpoint": "/var/home/buzz/.local/share/containers/storage/volumes/e22436bd2487a197084decd0383a32a39be8a4fcb1ded6a05721c2a7363f43c8/_data",
          "CreatedAt": "2024-03-15T23:52:10.800764956+01:00",
          "Labels": {},
          "Scope": "local",
          "Options": {},
          "UID": 1,
          "GID": 1,
          "Anonymous": true,
          "MountCount": 0,
          "NeedsCopyUp": true,
          "LockNumber": 1
     }
]
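To dump the metadata for all volumes at once rather than one by one, the listing can be piped into inspect. A sketch (--quiet prints only the names; xargs -r skips the call when the list is empty):

```shell
# Inspect every volume in one pass.
podman volume ls --quiet | xargs -r podman volume inspect
```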
[–] atzanteol@sh.itjust.works 3 points 5 months ago (1 children)

Navigating the various things podman/docker allocate can be a bit annoying. The CLI tools don't make it terribly obvious either.

You can try podman volume rm <name> to remove them. It may tell you they're in use, in which case you'll need to find the container using them.
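Concretely, removing the anonymous volume from the inspect output earlier in the thread could look like this (the long name is copied from that output; volume prune is a standard podman subcommand, but double-check nothing still needs the data):

```shell
# Remove one volume by name...
podman volume rm e22436bd2487a197084decd0383a32a39be8a4fcb1ded6a05721c2a7363f43c8
# ...or sweep every volume not referenced by any container
# (prompts for confirmation unless --force is given).
podman volume prune
```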

[–] SimplyTadpole@lemmy.dbzer0.com 1 points 5 months ago (1 children)

Does all this also apply to distrobox? I don't use podman directly, but I do use distrobox, which I think is a front-end for it. I don't know if the commands listed here would be the same.

[–] atzanteol@sh.itjust.works 2 points 5 months ago

I'm not terribly familiar with distrobox, unfortunately. If it's a front end for podman, then you can probably use the podman commands to clean up after it? Not sure if that's the "correct" way to do it, though.
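For reference, distrobox does ship its own lifecycle commands on top of podman/docker, so cleanup can usually be done at that level. A sketch (the container name "my-box" is hypothetical; take real names from the list output):

```shell
# Show the containers distrobox has created.
distrobox list
# Stop and remove one by name; its underlying podman image may
# remain on disk until pruned separately.
# distrobox stop my-box
# distrobox rm my-box
```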