[–] atzanteol@sh.itjust.works 45 points 8 months ago* (last edited 8 months ago) (1 children)

A lot of the reasons for separate disk partitions are pretty legacy.

/boot was/is often a separate partition because the boot loaders may not support all filesystems. So if you wanted to use LVM, reiserfs, zfs, etc. for your root directory or have a RAID root then you may have a separate partition for /boot. Especially on older systems where grub/lilo only supported ext file systems. On modern systems there is likely to be a /boot/efi partition for UEFI (it only supports vfat I think?).

/home is often handy to keep separate since you can more easily re-format everything except your home directory. Makes distro-hopping a bit easier.

The other reasons are more focused on server-usage rather than home-usage. Things like mounting /tmp on a separate FS so that users couldn't fill up disk space that would block other users from working in their home directory. Or /usr/local being an NFS mount to provide centralized applications.

These days the actual on-disk partitions don't matter as much due to LVM, ZFS and BTRFS. You can now slice and dice your disks however you like and even change things on the fly. I only ever create 1 disk partition anymore (2 if I need a separate /boot or /boot/efi) and then handle the rest in the filesystems or LVM.
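
For instance, a rough LVM sketch (device and volume group names here are just placeholders):

    # put one big PV/VG on the single data partition, then carve out logical volumes
    pvcreate /dev/sda2
    vgcreate vg0 /dev/sda2
    lvcreate -L 50G  -n root vg0
    lvcreate -L 200G -n home vg0
    # later you can grow a volume and its filesystem on the fly
    lvextend -r -L +50G /dev/vg0/home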

With this higher-level partitioning, the benefits are more around snapshotting and backups. You can easily snapshot your /home with btrfs before making major changes, or copy a ZFS dataset to a remote server for backup. Things like the immutable distros and Proxmox use this functionality a lot since a) these volumes are cheap to create and b) it's easier to do these things at the volume level.
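
Concretely, something like this (subvolume and dataset names are assumptions):

    # btrfs: read-only snapshot of /home before making major changes
    btrfs subvolume snapshot -r /home /.snapshots/home-pre-upgrade
    # zfs: snapshot a dataset and copy it to a remote server for backup
    zfs snapshot tank/home@2024-03-05
    zfs send tank/home@2024-03-05 | ssh backup-host zfs receive backup/home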

Edit: Fun fact: Linux ext* filesystems have the capability to reserve a certain percentage of disk space for the root user only. Useful on a multi-user system where you don't want users filling up all the disk space and blocking the root user from logging in to clean it all up. It used to reserve something like 5-10% by default, but I don't know if that's the case anymore. You can see if it's being done with tune2fs -l <device>.
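
For example (device name is a placeholder):

    # show the reserved block count among the filesystem parameters
    tune2fs -l /dev/sda2 | grep -i reserved
    # lower the reservation to 1% on a data-only filesystem
    tune2fs -m 1 /dev/sda2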

[–] hex_m_hell 2 points 8 months ago* (last edited 8 months ago)

When I was first using Linux, the BIOS could only read the first 100 megs of your hard drive. Your bootloader and config had to be there, and your initrd and kernel had to fit in there as well. It was a lot easier to keep things small then, too. A coworker of mine and I built a 100 meg Linux distro to PXE-boot thin clients.

/Me wanders off and starts muttering about calculating modelines on a CRT...

[–] zenharbinger@lemmy.world 34 points 8 months ago (2 children)

Some partitions are useful. Keeping /var and /tmp separate can stop DoS attacks by not allowing logs to fill the entire drive. A separate /home means you can wipe the / partition and keep user data.

[–] limelight79@lemm.ee 12 points 8 months ago (1 children)

I've had a full /var partition cause all sorts of problems using the system. But I still think it's good to have four partitions: /, /var, /tmp, and /home. At least split out /home so you can format / without losing your stuff in /home.

[–] scratchandgame@lemmy.ml 2 points 8 months ago (1 children)

I think it is better to partition /usr (and /usr/local) too, for stability and security

[–] limelight79@lemm.ee 2 points 8 months ago (1 children)

I can definitely see doing that on a server many people are using. For my personal server, I used to do that, but in the end I couldn't find much benefit, only headaches ("ahhhh, / is short on space because I forgot to clean up old kernels...").

[–] scratchandgame@lemmy.ml 3 points 8 months ago* (last edited 8 months ago) (1 children)

I think it would save you someday: nothing is writing to /usr, so writes to /home would not cause much damage there. On a system with one huge root partition, an incomplete write might damage the whole filesystem.

fsck would be faster. newfs (mkfs) would be faster. I found NetBSD spent a lot of time running newfs on a 32G root partition (installing NetBSD in Hyper-V).

Also, for /tmp we can use a memory filesystem (tmpfs) instead of a physical disk if we have 4G of RAM or more, to store things that are cleaned on reboot.
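
E.g. a single /etc/fstab line like this (the size cap is just an example):

    # mount /tmp in RAM, capped at 2G
    tmpfs   /tmp   tmpfs   defaults,noatime,mode=1777,size=2G   0 0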

[–] limelight79@lemm.ee 2 points 8 months ago

I'm not saying it can't happen, but I've been using Linux since the late 90s and have never had a problem with an incomplete write damaging the file system, or really anything else (except for a recent incident when a new motherboard decided to overwrite the partition tables on my RAID5 array, but that's a different story). And I have UPSs on the server and desktop, and of course the laptop has a battery in it, so the risk of sudden power loss is extremely low.

The /tmp thing in RAM is interesting. I was reconfiguring my server's drive the other day, because I didn't originally allocate enough space to /var - it worked fine for years until I started playing with plex, jellyfin, and Home Assistant (the latter due to the database size). I was shocked to find /tmp only had a few files in it, after running for years. I think I switched the server to Debian in 2018 or 2019, but that's just a guess based on the file dates I'm seeing. Maybe Debian cleans the /tmp partition regularly.

[–] emptyother@programming.dev 7 points 8 months ago (4 children)

Damn, I've always wanted Windows to have that. Being able to put user folders on another partition, or even another drive, at install time. And being able to use "dynamic disks" (aka software RAID) to expand partitions across disks as storage requirements grow. I know it's possible to set up, but with a lot of workarounds and annoying problems.

[–] Magickmaster@feddit.de 20 points 8 months ago (1 children)

Windows user folders are nearly unusable in my opinion; too many programs throw random folders and files in everywhere. Especially the Documents folder, with too many games putting incoherent stuff in there.

[–] emptyother@programming.dev 10 points 8 months ago (1 children)

Yup, useless folder. There's one related thing I've complained a lot about lately, so I'm gonna complain some more about it:

Microsoft got this "great" idea of repeatedly trying to trick me into uploading that Documents folder to the cloud. A folder filled with gigabytes of Battlefield and Assassin's Creed cache files, Starfield mods, MS database files, etc... A lot of files that are constantly changing, or locked for the entire session. Annoying as hell. I love OneDrive, but I don't know why it's so damn important for them to have those files.

Sometimes I really wish I could switch to some Linux distro instead.

[–] rtxn@lemmy.world 10 points 8 months ago* (last edited 8 months ago) (1 children)

It's asinine that OneDrive doesn't have an equivalent of the decades-old gitignore technology...

There seems to be a workaround, though - archive link. It should work as long as the local/remote conflict remains unresolved, or until Microsoft decides to just push the remote version onto the local machine and delete your files instead.

[–] timbuck2themoon@sh.itjust.works 3 points 8 months ago

Except MS wants you running out of space and upgrading to a higher-tier plan. Upton Sinclair and all that.

[–] rtxn@lemmy.world 8 points 8 months ago* (last edited 8 months ago) (2 children)

I'm pretty sure you can just mount a volume to C:\Users.

I definitely wouldn't recommend changing the userdir paths in the system. Many of the office computers I work with are set up that way and it's always a pain in the ass when an application expects the home path to be located on C:.

[–] gravitas_deficiency@sh.itjust.works 4 points 8 months ago* (last edited 8 months ago) (1 children)

when an application expects the home path to be located on C:

Clarification: does NTFS just suck at understanding that a directory-mapped storage device mounted under C: should be treated as if it were C: when within the mount dir?

[–] rtxn@lemmy.world 6 points 8 months ago* (last edited 8 months ago)

The second paragraph is about changing the path where Windows should look for the user files (analogous to running usermod -d /new/home user to change the user entry in the passwd file), not changing the filesystem. I don't see any reason why a directory-mapped device would behave any differently than a regular directory... although in my brief time working with softlinks and directory junctions, I learned not to have expectations of Windows/NTFS.

I think the issue is that Windows stores the home path in two environment variables -- HOMEDRIVE contains the drive letter, and HOMEPATH contains the path relative to the drive's root (no, I'm not willing to call it an absolute path). If an application only uses the HOMEPATH envvar, the full path will default to whichever drive letter the environment's working directory belongs to, which is most likely C:. I don't have a Windows machine to test it though, so I might be wrong.

[–] maxprime@lemmy.ml 3 points 8 months ago

I remember doing this in macOS, when I got my first SSD. I installed it, kept the OS on the SSD, and mapped my user directory to my HDD. It made upgrades and re-installs much easier, which was a plus because it was actually a hackintosh.

[–] scratchandgame@lemmy.ml 2 points 8 months ago

It isn't possible :)

Windows' filesystem is different from Unix's, and it is deeply flawed.

[–] Ilgaz@lemm.ee 19 points 8 months ago (1 children)

A separate /home can save you hours or even days on several occasions; however, don't try crazy things like having Ubuntu's KDE share the same theme/settings with KDE 6. A /var on a fast drive can work wonders too.

[–] psivchaz@reddthat.com 3 points 8 months ago (2 children)

I'm trying out something mildly nutty by putting .steam in /home/steam, then making a user-neon account, and symlinking so that I can try KDE without reinstalling Steam games. If I succeed I might try it with other files.
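
Roughly like this (usernames and paths are just placeholders for the setup I described):

    # point the new user's .steam at the shared one owned by the steam user
    sudo ln -s /home/steam/.steam /home/user-neon/.steam
    sudo chown -h user-neon:user-neon /home/user-neon/.steam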

[–] Ilgaz@lemm.ee 3 points 8 months ago* (last edited 8 months ago)

First of all, you can check distrobox.it, which can basically run Neon inside your distribution; however, you'd better set a different virtual home for Neon in that case.

I would first tar up .steam to be on the safe side, but Steam is different; it's basically its own little Ubuntu stable residing in that directory. I'm not a big-time gamer, but people laughed at Ubuntu for shipping it as a snap because of it.

Long story short, I don't think Steam would have issues. I meant: don't expect the KDE guys to revert upgraded preferences back to KDE 5, etc. You know, they do such things and then blame Linux/KDE.

[–] Mouette@jlai.lu 2 points 8 months ago* (last edited 8 months ago) (1 children)

I've created a dedicated partition for Steam games so I can use games across distros without reinstalling them. You can tell Steam to look in that partition for your games.

[–] webghost0101@sopuli.xyz 1 points 8 months ago* (last edited 8 months ago)

I use my Windows drive as a junk drawer for large programs in Linux. Comes with the same benefit: fully accessible from either system.

[–] Drito@sh.itjust.works 17 points 8 months ago

I installed Arch on a disk without erasing the /home partition that came from a previous distro. It saved me some config work, and a bit of disk life expectancy, I guess.

[–] utopiah@lemmy.ml 13 points 8 months ago (1 children)

At least have a dedicated /home partition. This way, if you want to upgrade the OS, change distribution, heck, even migrate to a totally different OS, your actual data is safe. Also, if you need to do a backup, "just" back up /home, which is probably going to be significantly faster and more convenient than backing up the entire OS. It also avoids using e.g. dd and getting a rather opaque file.

TL;DR: yes /home keeps your data safe

[–] avidamoeba@lemmy.ca 4 points 8 months ago* (last edited 8 months ago) (2 children)

What's the benefit of dd-ing a home partition over rsync-ing a home directory's contents?

[–] utopiah@lemmy.ml 2 points 8 months ago

Well, it'd result in a single file, which might be easier if you have to copy it onto a microSD or USB stick. To also counter my own argument: the result of dd can be mounted, which quickly gets you a rather useful directory.

But anyway, my point was rather the opposite: in most cases rsync, rdiff-backup, or even scp (whatever one is most familiar with) to a local NAS, remote server, etc. is usually better, or at least more understandable for somebody who isn't used to the process.
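
For example, something along these lines (the destination path is just a placeholder):

    # archive mode, preserving hard links, ACLs and extended attributes;
    # --delete removes files from the backup that no longer exist in the source
    rsync -aHAX --delete /home/ /mnt/backup/home/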

[–] smileyhead@discuss.tchncs.de 1 points 8 months ago

With dd you can't forget some rsync flag and lose part of the files' metadata.

[–] Quazatron@lemmy.world 12 points 8 months ago (2 children)

Partitioning does have benefits, especially for enterprise scenarios. It allows you to specify different policies per mount point (e.g. no executables on /tmp). It prevents a runaway process from filling your hard disk with logs. It lets you keep your data separated from your OS, or have multiple OSes with the same home partition.

For home use you'll probably go with something simpler, like separate home, root and games partitions, for instance.

Nowadays you should opt for LVM volumes or BTRFS subvolumes instead of partitions, as these are way more flexible should you change your mind in the future about the sizes you allocated.
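
A rough sketch of the btrfs variant (subvolume names just follow the common @/@home convention):

    # with the top-level btrfs filesystem mounted at /mnt, create subvolumes
    btrfs subvolume create /mnt/@
    btrfs subvolume create /mnt/@home
    # then mount them via fstab with the subvol option, e.g.:
    # UUID=<fs-uuid>  /      btrfs  subvol=@,compress=zstd      0 0
    # UUID=<fs-uuid>  /home  btrfs  subvol=@home,compress=zstd  0 0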

[–] PumpkinEscobar@lemmy.world 2 points 8 months ago

Yeah, I really like the archinstall default btrfs layout, 1 subvolume for each of these

  └─root    254:0    0  1.8T  0 crypt /var/log
                                      /var/cache/pacman/pkg
                                      /home
                                      /.snapshots
                                      /
[–] nottelling@lemmy.world 11 points 8 months ago

I'm surprised no one's mentioned the security implications. Mounting with the nosuid and nodev options can undermine rootkit or privilege escalation exploits.
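
For example, fstab entries along these lines (UUIDs are placeholders):

    # restrictive mount options on world-writable or data-only mount points
    UUID=<tmp-uuid>   /tmp   ext4  defaults,nosuid,nodev,noexec  0 2
    UUID=<home-uuid>  /home  ext4  defaults,nosuid,nodev         0 2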

[–] kugmo@sh.itjust.works 6 points 8 months ago (2 children)

/ and /boot are (arguably) all you need on a single disk system

[–] lemmyvore@feddit.nl 4 points 8 months ago (3 children)

But why /boot?

I would much rather split out /home if I'm going to split anything, so it can go through a future reinstall more smoothly. With /var being a more distant second candidate, because I've been burnt on several occasions by various programs eating up all disk space somewhere under it.

[–] Laser@feddit.de 6 points 8 months ago (1 children)

If you want to be compliant with the UEFI spec, the partition holding your EFI binaries must be formatted with a FAT-related filesystem (see https://en.m.wikipedia.org/wiki/EFI_system_partition). This is not something you want for your system drive, so a separate partition makes sense.

[–] lemmyvore@feddit.nl 1 points 8 months ago (1 children)

Isn't EFI a separate partition? Different from /boot?

[–] pete_the_cat@lemmy.world 3 points 8 months ago* (last edited 8 months ago)

They can be the same partition, though they serve different purposes. The EFI partition holds the EFI binaries, as the name implies, while /boot holds the initrd, kernel, and bootloader config files.

If they are the same partition, /boot needs to be formatted as FAT32 and have EFI as a subdirectory. Otherwise they can be separate partitions; either way, the partition that contains the EFI directory needs to be formatted as FAT32.

[–] SomethingBurger@jlai.lu 3 points 8 months ago (1 children)
[–] aBundleOfFerrets@sh.itjust.works 1 points 8 months ago

It's best practice to just split out /efi in that case.

[–] smileyhead@discuss.tchncs.de 1 points 8 months ago (1 children)

I keep / and /home on btrfs subvolumes, so I don't have to think about their sizes and can also do snapshots.

[–] lemmyvore@feddit.nl 1 points 8 months ago

How do btrfs snapshots work?

I use borg to take snapshots of / and /home because I can be selective (it has include and exclude patterns, like rsync). Also because it does deduplication (at the file and chunk level, which saves a ton of space) and compression. And of course a big factor is that I can keep the backups somewhere else.

I've looked into zfs snapshots but they seem really limited in comparison. Good for recovering accidental deletes or changes if you catch on soon enough, but not very useful otherwise.
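
For reference, the kind of borg invocation I mean (the repo path and excludes are just examples):

    # one compressed, deduplicated archive of / and /home; --one-file-system
    # keeps it from descending into /proc, /sys or other mounted filesystems
    borg create --compression zstd --one-file-system \
        --exclude 'home/*/.cache' \
        /mnt/backup/borg-repo::'{hostname}-{now}' / /home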

[–] FriendBesto@lemmy.ml 2 points 8 months ago

Unless you need to dual-boot.

[–] scratchandgame@lemmy.ml 5 points 8 months ago

Why not put everything in one big partition?

https://marc.info/?l=openbsd-misc&m=154054091026039&w=3

A comment: the guy who made that video might be a troll; I reviewed his videos' titles.

And such bullshit is much more accessible in plain text form.

[–] avidamoeba@lemmy.ca 3 points 8 months ago* (last edited 8 months ago)

Partitioning (beyond what's needed to boot)? No. Logical volumes or datasets? Perhaps, but probably not for most trivial setups. Even swap is fine in a file if you need it, and it simplifies disk encryption. Most of my machines run an EFI and an LVM partition. If I need a separate volume for something, I can always create it in LVM.
