Moving Linux to a New Hard Drive - a HowTo

About a week ago, I discovered that my primary hard drive was starting to enter its death throes. I was consistently receiving I/O errors, and in one case I had to reboot the system and perform an fsck on the root file system to repair the errors. Fortunately, the system was stable for long enough that I was able to come up with a replacement drive.

Yesterday, I set about migrating everything to the new hard drive. FWIW, it’s a 500GB, 3Gb/s SATA Barracuda from Seagate. Searching around, I came up with two articles (here and here) to guide me a bit. The hard part ended up being getting GRUB2 installed properly.

I’m happy to report that I’ve got everything running. After the jump is the process I used. The advantage of this process is that everything is done from the shell. There is no need to separately run an install program on the new drive or to figure out esoteric GRUB commands. If that sounds good to you, then let’s proceed…

I’m assuming that the drive is installed properly at this point … 😉

To start, for my purposes, I created 3 partitions on the drive:

  1. A boot partition
  2. A root partition
  3. A swap partition

I used fdisk for this part. I divvied it up into a ~100MB boot partition and a 4GB swap partition, with the remainder going to the root partition. Remember to set the bootable flag on the boot partition.
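
If it helps to see it spelled out, here is a rough sketch of the fdisk session. It assumes the new drive shows up as /dev/sdb (as it did for me); double-check the device node before writing anything, and adjust the sizes to suit.

    root:/$ fdisk /dev/sdb    # <--- be absolutely sure this is the NEW drive
        # inside fdisk:
        #   n  -> new partition 1, ~100MB       (boot)
        #   n  -> new partition 2, 4GB          (swap)
        #   n  -> new partition 3, the rest     (root)
        #   t  -> change partition 2's type to 82 (Linux swap)
        #   a  -> toggle the bootable flag on partition 1
        #   w  -> write the table and exit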

Next up, I used mkfs to format the partitions appropriately. A word to the wise: these are powerful commands with no conscience. They will wipe an existing partition without a second’s hesitation. Make sure you invoke them properly and on the intended devices. I just created ext2 on the boot partition and ext4 on the root partition. (Do I need to say I set up swap on the swap partition?)
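
Concretely, and again assuming the layout above on /dev/sdb, the formatting looks something like this. Triple-check those device nodes before hitting Enter.

    root:/$ mkfs -t ext2 /dev/sdb1    # <--- new boot partition
    root:/$ mkswap /dev/sdb2          # <--- new swap partition
    root:/$ mkfs -t ext4 /dev/sdb3    # <--- new root partition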

With the drive ready for data, proceed as follows (note that all of this must be done as root; a consolidated command sketch for steps 1 through 5 follows the list):

  1. Mount the new boot partition and copy the contents of /boot to it. I just used cp -rpf /boot/* /mnt/newboot. (Strictly speaking, that isn’t true. That’s what I ended up doing. Initially I copied the boot directory itself, which wasn’t quite correct since I’m giving /boot its own partition. The partition ultimately gets mounted at /boot, which exists on the root filesystem. Got that? Good.)

  2. Unmount the new boot partition and now mount the new root partition; I mounted it at /mnt/newdrive. (FWIW, the device nodes were /dev/sdb1 and /dev/sdb3, respectively, while I was working. The odds of them being the same in other cases are somewhere around “not good.”)

  3. On the new root partition, create the following mount points: boot, proc, dev, sys, and tmp. To be safe, you can also copy the contents of the /dev and /tmp directories to the new partition, again using cp -rpf.

  4. Now, copy the remaining directories. Again, I just used e.g. cp -rpf /root/ /mnt/newdrive/. Be careful to account for other mount points within your system. For instance, my /usr/local directory has its own drive, so I needed to copy everything from /usr except local. But I did need to create the mount point using mkdir /mnt/newdrive/usr/local. I had an identical situation with my /home directory, which also has its own dedicated drive.

  5. Once everything is copied over, the next thing is to set up /mnt/newdrive/etc/fstab with the proper UUIDs for the new drive. The blkid command will print out the UUIDs for all the partitions on the system. Edit the fstab on the new drive accordingly, i.e., replace the UUIDs for the old, failing drive with the UUIDs for the new one.

  6. We now have everything we need to be able to perform a chroot. The chroot will look exactly like the new system, and we’ll be able to run grub-install to get the bootloader properly installed. The instructions for setting up the chroot are in the second link above, but I’ll reproduce them here for convenience (use the following sequence of commands):

    root:/$ mount /dev/sdb1 /mnt/newdrive/boot  #<--- this mounts the new boot 
                                                #     partition like it will be
                                                #     in the new system
    root:/$ mount -o bind /dev /mnt/newdrive/dev    #<--- bind the live /dev into the chroot
    root:/$ mount -t proc none /mnt/newdrive/proc   #<--- fresh proc for the chroot
    root:/$ mount -t sysfs none /mnt/newdrive/sys   #<--- fresh sysfs for the chroot
    root:/$ chroot /mnt/newdrive                    #<--- step inside the new system
    
  7. Execute grub-install /dev/sdb from inside the chroot to install GRUB to the new drive so it is bootable. (Substitute the appropriate /dev device node for your system.)
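
For reference, here is what steps 1 through 5 look like gathered into one command sequence. This is only a sketch: it assumes the same device nodes (/dev/sdb1 for boot, /dev/sdb3 for root) and the /mnt/newboot and /mnt/newdrive mount points used above, and on your system the set of directories to copy and the separate mounts to skip will differ.

    # step 1: mount the new boot partition and copy /boot's contents onto it
    root:/$ mkdir -p /mnt/newboot
    root:/$ mount /dev/sdb1 /mnt/newboot
    root:/$ cp -rpf /boot/* /mnt/newboot
    root:/$ umount /mnt/newboot
    # step 2: mount the new root partition
    root:/$ mkdir -p /mnt/newdrive
    root:/$ mount /dev/sdb3 /mnt/newdrive
    # step 3: create the mount points, and optionally seed /dev and /tmp
    root:/$ mkdir /mnt/newdrive/boot /mnt/newdrive/proc /mnt/newdrive/dev
    root:/$ mkdir /mnt/newdrive/sys /mnt/newdrive/tmp
    root:/$ cp -rpf /dev/* /mnt/newdrive/dev
    root:/$ cp -rpf /tmp/* /mnt/newdrive/tmp
    # step 4: copy the remaining top-level directories one by one, e.g.:
    root:/$ cp -rpf /etc /mnt/newdrive/
    root:/$ cp -rpf /root /mnt/newdrive/
    root:/$ cp -rpf /var /mnt/newdrive/
    #   ...and so on for /bin, /lib, /usr, etc. Anything that is really a
    #   separate mount (for me, /usr/local and /home) gets only an empty
    #   mount point, not its contents:
    root:/$ mkdir -p /mnt/newdrive/usr/local
    root:/$ mkdir -p /mnt/newdrive/home
    # step 5: look up the new partitions' UUIDs and put them in the new fstab
    root:/$ blkid
    root:/$ vi /mnt/newdrive/etc/fstab    # <--- swap in the UUIDs for /, /boot, and swap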

With that, the chroot can be exited, the dummy system unmounted, and the system shut down. For my purposes, I unplugged the old drive completely (data and power) and plugged in the new drive in its place. I didn’t physically remove the old drive, just in case. It turned out everything worked splendidly and I’m now running with the new hard drive in place.
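
For completeness, the teardown is just the mounts from step 6 undone in reverse, followed by the shutdown (again assuming the mount points above):

    root:/$ exit                          # <--- leave the chroot
    root:/$ umount /mnt/newdrive/sys
    root:/$ umount /mnt/newdrive/proc
    root:/$ umount /mnt/newdrive/dev
    root:/$ umount /mnt/newdrive/boot
    root:/$ umount /mnt/newdrive
    root:/$ shutdown -h now               # <--- power down to swap the drives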

I did all of this while my system was live. I suppose it might have been safer to perform the steps from a maintenance mode or some other minimally running system. That would likely change some of the steps, as it might be necessary to mount some drives for copying purposes. But the basic process of format, create mount points, copy data, adjust UUIDs, chroot, and finally grub-install within the chroot should hold up.
