I finally have the budget to build my first NAS and upgrade my desktop PC. I have used Linux for quite some time, but am far from an expert.

One of the steps is to move my M.2 NVMe system drive (1 TB) from my desktop to my NAS. I want to replace it with a bigger NVMe drive (2 TB). My current motherboard only has a single M.2 slot, which is why I bought an M.2 enclosure.

My goal is to put the new drive into the enclosure, clone my whole system disk onto it and then swap the drives. At first I found several posts about using Clonezilla to clone the whole drive, but some posts mentioned it not working well with btrfs (/ and /home subvolumes), which makes up the bulk of my drive.

I have some ideas for how I might pull it off. My preliminary plan is:

  1. clone my boot partition with Clonezilla
  2. use btrfs-clone or “moving my butter” to transfer the btrfs partition
  3. resize the partitions with GParted (and add swap?)

The two aspects I’m uncertain about are:

  1. UUIDs
  2. fstab

I plan to replace the old drive, so the system will not end up with two drives sharing the same UUID. If the method results in a new UUID, I will need to edit fstab.
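
For reference, this is how I check which identifiers my fstab currently uses (read-only, changes nothing):

    grep -v '^#' /etc/fstab   # list the active fstab entries
    lsblk -f                  # compare the UUID column against those entries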

As you can see, I’m not sure how to proceed. Maybe I can just use Clonezilla or dd to clone the whole drive? If someone has experience with such a switch or is just a lot more familiar with the procedures, I would love some tips and insight.

Thanks for reading.

////////////////////////////////////////////////////////////////////////////////////////////////////////////

EDIT: Thinking about how to do it might have actually taken longer than the procedure itself. For anyone in a similar situation, I was able to replace the drive with these steps:

  1. clone the whole drive (the new drive has a bigger capacity) with Clonezilla
  2. physically switch the drives
  3. boot into a live medium and resize the btrfs partition on the new drive with GParted
  4. boot into the main system and adjust the filesystem size with sudo btrfs filesystem resize max /

With two NVMe drives (even though one was in a USB M.2 enclosure) everything took about 30 minutes, with about 300 GB of data transferred. I haven’t found any problems with the btrfs partition thus far. Using dd like others recommended might work as well, but I didn’t try that option.
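
For anyone who wants to skip GParted, steps 3 and 4 should also work from the command line; a rough sketch, assuming the new drive is /dev/nvme0n1 with root on partition 2 (check lsblk, your layout may differ):

    sudo parted /dev/nvme0n1 resizepart 2 100%   # from the live medium: grow the partition to fill the disk
    sudo btrfs filesystem resize max /           # after booting the main system: grow the filesystem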

  • Ooops@feddit.org · 3 months ago

    When you say system drive, this will also have your EFI system partition (usually FAT-formatted, as that’s the only standard all UEFI implementations support), maybe also a swap partition (if you’re not using a swap file instead), etc. So it’s not just copying the btrfs partition your system sits on.

    Yes, Clonezilla will keep the same UUID when cloning (and I assume your fstab properly uses UUIDs to identify drives). In fact, Clonezilla uses different tools depending on filesystem and data; at the lowest level (for example, on encrypted data it can’t handle otherwise) Clonezilla really just uses dd to clone everything. So cloning your disk with Clonezilla, then later expanding the btrfs partition to use up the free space, is an option.

    But on the other hand, just creating a few new partitions and then copying all the data might be faster. And editing /etc/fstab with the new UUIDs while keeping everything else is no rocket science either.
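
    If you go that route, updating fstab is mostly a matter of swapping UUID strings; a minimal sketch with placeholder values (take the real UUID from blkid; the device name here is hypothetical):

    sudo blkid /dev/nvme0n1p2                                 # prints the new filesystem’s UUID
    sudo sed -i 's/OLD-UUID-HERE/NEW-UUID-HERE/' /etc/fstab   # swap the old UUID for the new one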

    The best thing: just pick a method and do it. It’s not like you can screw it up, as long as you are not stupid and don’t accidentally clone your empty new drive onto your old one instead…

    • just_another_person@lemmy.world · 3 months ago

      Yeah, just clone to a new drive and plug it in where the old one was. If it works, it works. If not, you can boot a live distro and fix your fstab from there.

  • TheOubliette@lemmy.ml · 3 months ago

    I would recommend using this as an opportunity to build out and use a backup system. Whenever I get a new laptop, for example, I just make a(nother) backup on the old laptop and restore whatever I want to the new one. If there are any files I want that are normally excluded from backups, I either tweak my rules to include those files (or put them in a different directory) and repeat the process, or just make a temporary manual external backup copy.

    If you have good backups then your new drive can be populated from them after creating new partitions. Optionally, you can also take this opportunity to reinstall the OS, which I personally prefer to do because it tends to clean up cruft.

    Also, if you go this route, your data on your old drive is 100% intact throughout the process. You can verify and re-verify that all the files you want are backed up + restored properly before finally formatting the old drive for use in the NAS.
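
    A backup like that doesn’t need special tooling to start with; a minimal sketch using plain rsync, assuming the backup disk is mounted at /mnt/backup (a hypothetical path) and skipping the virtual filesystems:

    sudo rsync -aHAX --info=progress2 \
        --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*"} \
        / /mnt/backup/

    Dedicated tools like borg or restic add deduplication and versioned snapshots on top of this idea.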

  • boredsquirrel@slrpnk.net · 3 months ago

    Clonezilla can clone btrfs without issues.

    Afterwards, on the running system, use sudo btrfs filesystem resize max / to make it use that space. Maybe add a balance.
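
    In practice that could look like this (the balance filter is optional; it just limits the rebalance to mostly-empty data chunks):

    sudo btrfs filesystem resize max /
    sudo btrfs balance start -dusage=50 /   # only rewrite data chunks that are at most 50% full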

  • rotopenguin@infosec.pub · 3 months ago

    Do you have PCIe slots? An NVMe-to-PCIe card is cheap; it’s pretty much just passing from one connector shape to another.

    • minimalfootprintOP · 3 months ago

      Do you have PCIe slots?

      I had to decide between an M.2 enclosure and a PCIe card. Since I plan to build a new system (with more M.2 slots), I will have more slots in the future. And maybe I will not like the M.2 enclosure and return it. 😉

  • pe1uca@lemmy.pe1uca.dev · 3 months ago (edited)

    I had a similar case.
    My minipc has a microSD card slot and I figured if it could be done for an RPi, why not for a mini PC? :P

    After a few months I bought a new M.2 NVMe drive, but I didn’t want to start from scratch (maybe I should’ve looked into Nix?).
    So what I did was sudo dd if=/dev/sda of=/dev/sdc bs=1024k status=progress
    And that worked perfectly!

    Things to note:

    • Both drives need to be unmounted, so you need a live OS or another machine.
    • The new drive will have exactly the same partitions, which means the same sizes, so you need to expand them after the copy (see the sketch below).
    • PS: this was for a drive with ext4 partitions, but dd works at the byte level, so in theory the filesystem you use shouldn’t be an issue.
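
    For the ext4 case, that expansion from a live OS might look like this (device names hypothetical, assuming root is the second partition; check lsblk):

    sudo parted /dev/sdc resizepart 2 100%   # grow the partition to fill the disk
    sudo e2fsck -f /dev/sdc2                 # check the filesystem before resizing
    sudo resize2fs /dev/sdc2                 # grow ext4 to fill the partition
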
    • data1701d (He/Him)@startrek.website · 3 months ago

      This is my favorite strat. Back in 2022, I used this to move my install from a cheap 256 GB SSD I had gotten to try Linux to my main 1 TB NVMe (which I had recently wiped of Windows). This install is still up and running today; granted, it was ext4, but really, a dd clone shouldn’t prove a problem for any filesystem.

  • tychosmoose@lemm.ee · 3 months ago (edited)

    I did this recently and just used dd, parted and ‘btrfs filesystem resize’. UUIDs and fstab don’t need to be changed if you do this. It’s got to be done offline, so you’ll need to boot to a Live USB. I installed the new SSD in the M.2 slot and put the old one in an enclosure. You don’t need to mount the old SSD’s filesystem, which is good. Just do something like:

    dd if=/dev/sdb of=/dev/nvme0n1 bs=65536 status=progress

    Where sdb is the old drive in the USB enclosure and nvme0n1 is the new SSD. Replace those with the actual names you see in lsblk. Next, resize the root partition with parted or GParted, leaving space at the end if you want a separate /home or have a swap partition there or something.

    Once the partition is larger, mount the partition and use the btrfs command to resize the filesystem. Something like:

    btrfs filesystem resize max /mnt
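
    Still from the live USB, that last step might look like this (assuming the grown root partition is nvme0n1p2, a hypothetical name):

    sudo mount /dev/nvme0n1p2 /mnt
    sudo btrfs filesystem resize max /mnt
    sudo umount /mnt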

  • bastion@feddit.nl · 3 months ago (edited)

    If you’re feeling adventurous:

    • You can use a thumb drive to boot.
    • Verify the device path for your normal boot disk and for your new drive using GNOME Disks or similar. In this example I’ll call them /dev/olddisk0n1 and /dev/newdisksda.
    • Really, really don’t mix up the in file and out file. The in file (if) is the source; the out file (of) is the destination.
    • sudo dd if=/dev/olddisk0n1 of=/dev/newdisksda bs=128M
    • or, if you want a progress indicator: sudo sh -c 'pv /dev/olddisk0n1 > /dev/newdisksda' (the whole command has to run as root so the redirect can write to the device)
    • wait a long time

    Note that this isn’t the recommended method if you’re new to the terminal, but it’s totally viable if you have limited tools or are comfortable at the command prompt.

    Unless you’re using both disks on the same system, you don’t have to worry about UUIDs, though they will be identical on both drives.

    Your system is likely using UUIDs in fstab. If so, you don’t have to worry about fstab. If not, there’s still a damned good chance you won’t have to worry about fstab.

    To be sure, check fstab and make sure it’s using UUIDs. If it’s not, follow a tutorial for switching fstab over to using UUIDs.
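
    The switch is just replacing device paths with the UUIDs that blkid reports; for example (all values hypothetical):

    blkid /dev/olddisk0n1p2
    # /dev/olddisk0n1p2: UUID="1a2b3c4d-…" TYPE="btrfs"

    An fstab line like /dev/olddisk0n1p2 / btrfs defaults 0 0 would then become UUID=1a2b3c4d-… / btrfs defaults 0 0.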

  • The Doctor@beehaw.org · 3 months ago

    It would probably be more reliable to partition and format the new drive manually and use rsync to copy everything over. Updating /etc/fstab with the new UUIDs isn’t a big deal, and you can also manually specify the partition UUID at time of format (mkfs.btrfs --uuid …). You didn’t say what file system your /boot partition was using, so I don’t want to guess.
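
    A rough sketch of that approach from a live USB, with hypothetical device names (old root on /dev/nvme0n1p2, new root on /dev/sdb2) and without the subvolume handling mentioned in the reply below:

    sudo mkfs.btrfs /dev/sdb2
    sudo mkdir -p /mnt/old /mnt/new
    sudo mount /dev/nvme0n1p2 /mnt/old
    sudo mount /dev/sdb2 /mnt/new
    sudo rsync -aHAX --info=progress2 /mnt/old/ /mnt/new/
    sudo blkid /dev/sdb2   # note the new UUID for /etc/fstab on the new drive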

    • koper@feddit.nl · 3 months ago

      With this approach you would lose the subvolume structure and deduplication if I’m not mistaken.

    • Ooops@feddit.org · 3 months ago

      you didn’t say what file system your /boot partition was using, so I don’t want to guess

      It’s actually easy to guess. There is exactly one filesystem UEFI has to support by its specification; everything else is optional. So unless you produce for Apple (they demand APFS support for their hardware), no vendor actually cares to implement anything but FAT.

  • Sickos [they/them, it/its]@hexbear.net · 3 months ago

    Personally, if the NAS is up and running, I’d migrate the home directory and anything else important from the desktop to it, with the intention of hosting those folders over the network; then set aside the 1 TB drive, install the 2 TB one, do a fresh install, and see if I can still get to everything happily.

    Alternatively, if you want to preserve stuff locally: new drive in an enclosure, attach to desktop, boot from an install USB, fresh install to the 2 TB, reboot from the 2 TB, mount the 1 TB, migrate data, then install the 2 TB internally. I don’t think there should be a UUID problem doing that, but even if there were, you could still boot from the install stick and try to fix it manually.

  • NeoNachtwaechter@lemmy.world · 3 months ago

    Is your system drive really just that: a system drive? Then you’d better install it from scratch and have a clean, shiny, new system.

    Back up a few settings, maybe. Or maybe not.

    • minimalfootprintOP · 3 months ago (edited)

      Then you’d better install it from scratch and have a clean, shiny, new system.

      You know how it is: I just got my system right. Of course lots of settings can just be duplicated, but I would prefer not to set up my systemd services, cron jobs, etc. again.