this post was submitted on 18 Aug 2024
33 points (86.7% liked)


So, I'm trying to clone an SSD to an NVMe drive and I'm bumping into this "dev-disk-by" error when I boot from the NVMe (the SSD is unplugged).

I can't find anyone talking about this in this context. It seems like what I've done here should be fine and should work, but there's clearly something I and the Arch wiki are missing.

[–] y0din@lemmy.world 13 points 4 weeks ago (2 children)

probably the disk UUID has changed because of the NVMe vs SSD path. If you use partition UUIDs, they will be exactly the same, but the UUID of the physical disk is not cloned, as it is an identifier of the physical device and not its content.

change it to the partition UUID and it will boot.
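If it helps to see the distinction, lsblk can show all three identifiers side by side; a quick example (the device name is just a placeholder):

    # PTUUID = the disk's own GPT GUID, PARTUUID = per-partition GUID, UUID = filesystem UUID
    sudo lsblk -o NAME,PTUUID,PARTUUID,UUID /dev/nvme0n1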

[–] gansheim@lemmy.world 3 points 4 weeks ago (1 children)

Definitely second this. If you're using LVM, it uses the physical UUID for the PV. You have to update that on the new drive so it knows where the VG and LVs are mounted from.
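If that turns out to be the case, a rough sketch of how one might check and fix it (the device path and VG name are placeholders; vgimportclone is only needed if the cloned PV's UUID collides with one LVM already knows about):

    sudo pvs -o pv_name,pv_uuid,vg_name                     # which PV UUIDs LVM expects
    sudo vgimportclone --basevgname newvg /dev/nvme0n1p3    # give the cloned PV/VG fresh UUIDs
    sudo vgchange -ay                                       # reactivate the volume group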

[–] Dark_Arc@social.packetloss.gg 0 points 4 weeks ago (1 children)

There wasn't any LVM involved; AFAIK it's pretty rare outside of MBR installs (as GPT typically lets you have more than enough partitions).

[–] gansheim@lemmy.world 2 points 4 weeks ago (1 children)

LVM is actually super common. Most Linux distros default to LVM unless you do custom partitioning. It's not just about the max number of partitions supported by the table. LVM provides a TON more flexibility and ease of management of partitions.
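For example (names here are hypothetical), growing a logical volume and the filesystem on it is a one-liner, which is the kind of thing plain partitions make painful:

    sudo lvextend -r -L +20G vg0/home    # -r resizes the filesystem along with the LV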

[–] Dark_Arc@social.packetloss.gg 2 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

I haven't seen LVM in any recent Fedora (very high confidence), Debian (high confidence), or OpenSUSE (fairly confident) installations (just using the default options) on any system that's using GPT partition tables.

For RAID, I've only ever seen mdadm or ZFS (though I see LVM is an option for doing this as well per the arch wiki). Snapshotting I normally see done either at the file system level with something like rsnapshot, kopia, restic, etc or using a file system that supports snapshots like btrfs or ZFS.

If you're still using MBR and/or completely disabling EFI via the "legacy boot loader" option or similar, then yeah they will use LVM ... but I wouldn't say that's the norm.

[–] gansheim@lemmy.world 2 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

That's fair, I should have clarified that on most Enterprise Linux distros LVM is definitely the norm. I know Fedora switched to btrfs a few releases back, and you may be right about SUSE Tumbleweed, but I'm pretty sure SUSE Leap uses LVM. CentOS, RHEL, Alma, etc. all still default to LVM, since keeping everything on a single partition is a bad idea and managing multiple partitions is significantly easier with LVM. More than likely that'll change when btrfs has a little more mileage on it and is trusted as "enterprise ready", but for now LVM is the way they go. MBR vs GPT and EFI vs non-EFI don't have a lot to do with it though; it's more about the ease of managing multiple partitions (or subvolumes if you're used to btrfs), as having a single partition for root, var, and home is bad idea jeans.

[–] Dark_Arc@social.packetloss.gg 1 points 3 weeks ago

That's fair, I did just check my Rocky Linux install and it does indeed use LVM.

So much stuff in this space has moved to hosted/cloud I didn't think about that.

[–] Dark_Arc@social.packetloss.gg 1 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

So I fixed this by using clonezilla (which seemed to fix things up automatically), but for my edification, how do you get the UUID of the device itself? The only UUIDs I was seeing were seemingly the partition UUIDs.

[–] y0din@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

sorry for the late reply, the command 'lsblk' can output it:

"sudo lsblk -o +uuid,name"

check "man lsblk" to see all possible combinations if needed.

there is also 'blkid' but I'm unsure whether that package is installed by default on all Linux releases, so that's why I chose 'lsblk'

if 'blkid' is installed, the syntax would be:

"sudo blkid /dev/sda1 -s UUID -o value"

glad you got it fixed, and hope this answers your question

(edit because of big thumbs and autocorrect... )

[–] y0din@lemmy.world 1 points 3 weeks ago

also, remember that the old drive now shares the UUID with the NVMe drive (which is why I recommended using the partition UUID and not the disk UUID), so you will have to create a new GPT signature on the old drive if both drives are connected at the same time during boot, otherwise you might run into boot issues or boot from the wrong drive.
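If it helps, one way to do that (a sketch; /dev/sdX stands in for the old drive, and sgdisk comes from the gptfdisk package):

    sudo sgdisk -G /dev/sdX    # randomize the disk GUID and all partition GUIDs on the old drive
    sudo partprobe /dev/sdX    # have the kernel re-read the partition table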

[–] adespoton@lemmy.ca 8 points 4 weeks ago (1 children)

I know it’s not what you meant, but I just imagined someone typing in “pretend you are a disk cloning utility and output the code needed to clone /dev/disk0 to /dev/disk1 in as efficient a manner as possible.”

Seems to me that using rdisk would be significantly faster than disk, as disk pipes all the data through a superfluous serial channel?

[–] Dark_Arc@social.packetloss.gg -3 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

What does that have to do with any of this?

Are you just trying to start a whimsical side conversation?

[–] 4z01235@lemmy.world 23 points 4 weeks ago (2 children)

Your title mentioned GPT as in the partition table. The other user thought about ChatGPT.

[–] Dark_Arc@social.packetloss.gg 10 points 4 weeks ago

Thanks for translating ... my brain is completely fried from fighting with this.

[–] mozz@mbin.grits.dev 3 points 4 weeks ago

Oooooohhh

That's why they are getting downvotes 🙂

[–] Blaster_M@lemmy.world 6 points 4 weeks ago (1 children)

You need to make sure both /etc/fstab and the boot cfg are pointing to the new partitions. Since they are referenced by UUID, if the UUID changes due to the cloning method, the system won't find the partition.
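A quick way to cross-check all of that (the paths here are the common defaults and vary by distro):

    sudo blkid                                # UUIDs the cloned partitions actually have
    grep -i uuid /etc/fstab                   # UUIDs the system expects at mount time
    sudo grep "root=" /boot/grub/grub.cfg     # UUID the kernel is told to use for /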

[–] Dark_Arc@social.packetloss.gg 3 points 4 weeks ago (1 children)

They're identical to what they were on the original drive; I've verified it in GParted on a live image.

It's driving me crazy because I can literally find this drive by that UUID in a live image, but when I go to boot the system has no idea what that is.

[–] tal@lemmy.today 6 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

I'm confused. You say that you're booting off that drive that it can't find. Like, this is your root drive?

But I believe that the kernel finding the root drive should happen much earlier than this. Like, you've got systemd stuff there on the screen. For that to happen, I'd think that you'd need to have your root drive already up and mounted. Grub hands that off to the kernel; I believe it's specified in /etc/default/grub on my Debian system and gets written out when you run sudo update-grub.
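For reference, a quick way to see what root device the kernel was actually handed (a sketch; these paths are the Debian-style defaults):

    cat /proc/cmdline                       # the root=... the running kernel booted with
    grep GRUB_CMDLINE /etc/default/grub     # extra kernel arguments grub is configured to pass
    sudo update-grub                        # regenerates /boot/grub/grub.cfg from that config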

If I'm not misunderstanding and you are saying that the drive in question is your root drive, are you sure that this isn't happening because a reference to the drive -- maybe another partition or something -- in /etc/fstab is failing to resolve?

Or maybe I'm just misunderstanding what you're saying.

EDIT: if you just want to get it working, unless you've got some kind of exotic setup, I expect that you can probably boot into a very raw mode by passing init=/bin/sh on the kernel command line from grub. A lot of stuff won't be functional if you do that, since you'll just be running a shell and the kernel, but as long as you have a root filesystem, it'll probably come up. Then I'd mount -o remount,rw / so that you can modify your root drive, and then fiddle your /etc/fstab into shape. Probably a live distro is more comfortable to work in, but if all you need is to get the regular system up, fiddling with /etc/fstab is likely all you need.

EDIT2: and then I'd probably compare the output of blkid to your fstab, from within the boot in your regular system, if that isn't what you already did.
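Concretely, that rescue path might look something like this (just a sketch, assuming nothing exotic; use whatever editor is actually present):

    # at the grub menu, press 'e' and append init=/bin/sh to the line starting with "linux", then boot
    mount -o remount,rw /    # root comes up read-only in this mode
    blkid                    # list the real UUIDs
    vi /etc/fstab            # fix any entries that don't match
    sync
    reboot -f                # force a reboot once done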

[–] Dark_Arc@social.packetloss.gg 2 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

I'm giving up on my dd attempt and trying clonezilla (a highly regarded option it seems).

But yeah, welcome to exactly what's driving me crazy. The dd "worked", grub loads, it starts loading Linux ... and then it gets caught trying to find... itself (?)

Like the exact drive that's missing is the drive it would have to find to even be partially operational. The other drives weren't touched and the original drive is unplugged.

There is a btrfs subvolume and they're both part of the same drive ... but it was also copied bit for bit.

IDK... We'll see whether clonezilla works. I've been using Linux for over ten years, and it's been a long time since I've been this confused.

[–] tal@lemmy.today 3 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

I mean, if you want to start over, that's your call, but in all honesty, my guess is that all you have to change from your current situation is a line of text in fstab. I don't believe that changing the cloning method is going to change that.

EDIT: maybe the UUID is for a swap partition or similar in fstab?

EDIT2: This guy is describing a very similar sounding situation (though it's not clear if he unplugged his original drive before trying to use his cloned one, so he might have had duplicate UUIDs).

https://unix.stackexchange.com/questions/751640/systemd-is-eternally-stuck-on-a-start-job-when-i-go-to-boot-from-my-cloned-to-nv

He thinks that some users have "fixed the problem" by creating a swap partition with gparted.

Multiple forums have had users with similar issues and they fixed it with a GParted-made Swap partition and adding that partition's UUID to /etc/fstab like...

That would, I expect, generate a new UUID for the swap partition by calling mkswap, and then they put that UUID into their fstab.

Just saying that I'd personally do that, and confirm that the UUIDs listed in fstab match what blkid is saying, before starting all over, because I don't think dd or another utility for copying disk contents is likely to produce a different result.
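For anyone wanting the manual version of that (a sketch; the partition path is a placeholder):

    sudo mkswap /dev/nvme0n1p3                   # (re)creates swap and prints its new UUID
    sudo blkid -o value -s UUID /dev/nvme0n1p3
    # then reference that UUID in /etc/fstab, e.g.:
    # UUID=<value-from-above>  none  swap  sw  0  0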

[–] Dark_Arc@social.packetloss.gg 3 points 4 weeks ago (3 children)

Clonezilla just worked. The fstab is unmodified/identical to what dd gave me.

I really have no idea what clonezilla did differently. Its output was so fast... But yeah, it just worked with that. So I guess I'll take it.

Absolutely baffling.

[–] s38b35M5@lemmy.world 4 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

Clonezilla runs lots of tasks after (and before) dd that are recorded in the log file(s) in the live environment before you reboot. I haven't used it in a while, but I'm confident that one of the tasks is updating grub.

[–] Dark_Arc@social.packetloss.gg 1 points 4 weeks ago

I did update grub via a chroot as one of my troubleshooting steps... So I don't think that was it either. I actually recall it saying something about skipping updating grub (because it was a GPT system without some special flag set I think).

I remember seeing it do something to the EFI stuff explicitly and I'm wondering if maybe that's where it did something I didn't.

[–] tal@lemmy.today 3 points 4 weeks ago (1 children)

Aight, well, glad to hear it.

[–] Dark_Arc@social.packetloss.gg 1 points 4 weeks ago

Thanks and thanks for the effort you put in.

[–] gencha@lemm.ee 1 points 4 weeks ago (1 children)

Now that you know the safe way out, break it again with dd and figure out the difference 😁

Moving from SATA to NVMe is a classic way to break the boot process. Most of the time, you want to boot a recovery mode from USB, mount your existing root and efi partitions, and then just reinstall grub.

If you've managed to recover this way even once, you'll feel a lot more comfortable in the future when shit goes wrong.
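For what it's worth, a rough sketch of that procedure on an EFI system (device names and mount points are placeholders; the commands are spelled grub2-install / grub2-mkconfig on RPM-based distros):

    sudo mount /dev/nvme0n1p2 /mnt              # root filesystem
    sudo mount /dev/nvme0n1p1 /mnt/boot/efi     # EFI system partition
    for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
    sudo chroot /mnt
    grub-install --target=x86_64-efi --efi-directory=/boot/efi
    update-grub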

[–] Dark_Arc@social.packetloss.gg 1 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

Most of the time, you want to boot a recovery mode from USB, mount your existing root and efi partitions, and then just reinstall grub.

I did do that FWIW, but it didn't fix it / it wasn't enough / it still didn't work.

If this was a toy system and/or I was back in college and feeling adventurous, I would definitely be more inclined to try and figure out what happened. As it stands, I just want the thing to work 😅

[–] gencha@lemm.ee 2 points 4 weeks ago

Valid. Glad you're back on track

[–] catloaf@lemm.ee 5 points 4 weeks ago

What did you do to clone it? What's in the fstab, or however you're mounting it?

[–] sxan@midwest.social 3 points 4 weeks ago (1 children)

I did this recently, and encountered exactly the same issue. I can't say whether it's the same root cause, but it might be.

The device ID for the EFI or boot partition may change, and in this case you have to make certain you hunt down every reference to it and update it. IIRC in my case it was in a config file for dracut, and I cottoned on when I upgraded the kernel and got back into the hung mode.

If you know the old blkid, do a deep search in both your EFI partition and /etc, and make sure you've changed every reference to the new device UUIDs.
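Something like this could do the deep search (the UUID here is a made-up stand-in for the old one):

    sudo grep -rIl "1234-ABCD" /etc /boot 2>/dev/null    # files still referencing the old UUID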

[–] Dark_Arc@social.packetloss.gg 0 points 4 weeks ago (1 children)

Very interesting. I couldn't find the device ID referenced anywhere; everything looked like it had been copied over (at least everything I noticed).

Clonezilla seems to have taken care of the necessary updates so if you do this again I'd recommend just using that. I hate that it's yet another special ISO tool to keep around on a USB thumb drive, but if I'd used that from the start several hours of my life would've been saved 😅

[–] sxan@midwest.social 2 points 4 weeks ago

In my case, it was just too many technology changes from what I was used to, things I simply wasn't familiar enough with. It doesn't help that every distro seems to do everything slightly differently, rather than just agreeing on a standard. The egotistical NIH may be the most frustrating thing about distro builders.

EFI and dracut are both novel to me; EFI I'm starting to become more comfortable with, but dracut is new and I'm not entirely sure how it works and where it puts all of its config stuff. It's still better than systemd-boot, which was mostly a catastrophe for me; it worked fine until I wanted to draw outside of the lines a little, and then I discovered a mountain of spaghetti. I probably should have just stayed with grub, but I wanted snapshot booting, and grub is beginning to struggle with some of these new modalities.

Anyway, I don't want to have to rely on a custom specialized distro, and I figured out my problem in a couple of days; I only have to screw it up two or three more times and then I'll be comfortable with it :-)