Seagate Personal Cloud (1/2)

This post is a rambling information dump of my journey of investigation into the innards of a Seagate Personal Cloud.

Background

I had been toying with the idea of buying a NAS, but was not keen on being stuck with software that was not to my liking. I wanted something I could install Debian on, so that I would have full control over it.

I came across the instructions to install Debian on the Seagate Personal Cloud, so I bought a second-hand Seagate Personal Cloud (4TB). The purchase took place in March 2018, and some preliminary investigation was done then. I am finally dusting off the project.

The Debian instructions outline how to get access to the U-Boot command line, and launch the Debian installer. At this point, it should be a fairly standard Debian install.

What made me pause the installation was that the instructions include the following paragraph:

The Personal Cloud devices come with the Seagate NAS firmware pre-installed and there is no easy way to re-install the Seagate NAS firmware after you install Debian. We therefore suggest you create a disk image before you install Debian.

I wanted the ability to revert to the stock firmware, even if I never used it. The warning also suggests that if the disk dies, you can’t simply drop in a replacement.

The classic way to create a recovery image would be to extract the hard disk from the case and use Clonezilla to image it. I abandoned that idea after finding a YouTube video (in Russian) showing that disk removal is possible, but involves peeling off tape. I would investigate less intrusive methods first.
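
The sort of less intrusive method I had in mind was to boot a temporary environment on the NAS and stream the raw disk over the network, rather than opening the case. A rough sketch only (the address, port and file name are placeholders, and it assumes busybox dd, gzip and nc are available on the NAS side):

# on the receiving PC (exact listen flags vary between netcat variants)
$ nc -l -p 9000 > personal-cloud-sda.img.gz

# then on the NAS, from a temporary shell
~ # dd if=/dev/sda bs=1M | gzip -c | nc 192.168.1.10 9000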

I decided that the approach to take would be to launch the Debian installer, and drop into a shell for investigation.


U-Boot

The first step was to connect to the U-Boot console using clunc as described in the ‘U-Boot access’ section of the Debian documentation.

I did get side-tracked creating a Python implementation of clunc after working out why the lacie-nas.org version no longer worked for me out of the box.

The first piece of information to be captured was the current U-Boot configuration. This will give clues about how the device boots, and details that would need to be restored if uninstalling Debian.

I ran printenv to view the current variables. I knew from the installation instructions that bootcmd was an important setting, so I started there.

Here are some highlights from the full output.

bootcmd=run nexus_boot
nexus_boot=test -n ${resetFlag_env} && forced=${resetFlag_env}; for disk in ${disk_list}; do run disk_expand; echo Booting Nexus OS from disk ${disk}...; countfail=0; bootfail=0; user=0; if run nexus_load_flags; then run nexus_rescue_countfail && countfail=1; run nexus_set_root; kern_img=/boot/uImage; dtb_img=/boot/target.dtb; test -n $board_dtb && dtb_img=/boot/$board_dtb; run nexus_load_image || bootfail=1; run nexus_rescue_user && user=1; if run nexus_can_boot; then run nexus_inc_count; run nexus_save_flags; run nexus_bootargs; bootm ${kern_addr} - ${dtb_addr}; bootfail=1; fi; run nexus_boot_rescue; fi; done; bootfail=1; countfail=; user=;run nexus_panic
nexus_boot_part=2
nexus_boot_rescue=echo Booting Nexus rescue system from disk ${disk}...; root_part=2; kern_img=/rescue/uImage; dtb_img=/rescue/target.dtb; test -n $board_dtb && dtb_img=/rescue/$board_dtb; if run nexus_load_image; then if test -n ${panic}; then run nexus_uuid; setenv bootargs ${bootargs} root=UUID=${root_uuid}; else run nexus_bootargs; fi; run nexus_rescue_bootargs; bootm ${kern_addr} - ${dtb_addr}; fi
nexus_bootargs=run nexus_uuid; setenv bootargs ${console} boot=UUID=${boot_uuid} root=UUID=${root_uuid}
nexus_can_boot=test ${bootfail} -eq 0 -a ${countfail} -eq 0 -a ${forced} -eq 0 -a ${user} -eq 0
nexus_inc_count=if test ${boot_count} -eq 0; then setenv boot_count 1; elif test ${boot_count} -eq 1; then setenv boot_count 2; else setenv boot_count ${nexus_max_count}; fi
nexus_load_flags=ext2get ${disk_iface} ${disk_num}:${nexus_nv_part} /ubootenv boot_count saved_entry
nexus_load_image=ext2load ${disk_iface} ${disk_num}:${root_part} ${dtb_addr} ${dtb_img}; ext2load ${disk_iface} ${disk_num}:${root_part} ${kern_addr} ${kern_img} && iminfo ${kern_addr}
nexus_max_count=3
nexus_nv_part=3
nexus_panic=panic=1; for disk in ${disk_list}; do run disk_expand; run nexus_boot_rescue; done
nexus_rescue_bootargs=test -n ${bootfail} && setenv bootargs bootfail=${bootfail} ${bootargs}; test -n ${countfail} && setenv bootargs countfail=${countfail} ${bootargs}; test -n ${user} && setenv bootargs user=${user} ${bootargs}; test -n ${forced} && setenv bootargs forced=${forced} ${bootargs}
nexus_rescue_countfail=test ${boot_count} -ge ${nexus_max_count}
nexus_rescue_user=ext2get ${disk_iface} ${disk_num}:3 /ubootenv rescue && test ${rescue} -eq 1
nexus_save_flags=ext2set ${disk_iface} ${disk_num}:${nexus_nv_part} /ubootenv boot_count ${boot_count}
nexus_set_root=if test $saved_entry -eq 1; then root_part=5; else root_part=4; fi
nexus_uuid=ext2uuid ${disk_iface} ${disk_num}:${nexus_boot_part}; boot_uuid=${uuid}; ext2uuid ${disk_iface} ${disk_num}:${root_part}; root_uuid=${uuid}

From this I deduced the following:

  • performs a rescue boot after 3 failed normal boot attempts
  • partition #2 is the boot fs for both normal and rescue boot
  • partition #3 is used to hold non-volatile settings
    • boot_count - running count of boot attempts
    • saved_entry - selects the partition to use for the root fs
    • rescue - perhaps a user-initiated reboot into rescue?
  • partition #4 or partition #5 is used as the root fs for normal boot (dependent on “saved_entry”)
  • I suspect that “resetFlag_env” is set when the NAS boots with the reset button held in.
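
To check that I had read the scripts correctly, here is a rough shell model of the decision logic as I understand it. It is purely illustrative (U-Boot runs nothing like this, and I have omitted the bootfail/forced handling); boot_count, saved_entry and rescue stand in for the values read from /ubootenv on partition #3.

#!/bin/sh
# Illustrative model of nexus_boot / nexus_set_root / nexus_rescue_countfail /
# nexus_inc_count -- not code the device actually runs.
boot_count=${boot_count:-0}     # from /ubootenv on partition #3
saved_entry=${saved_entry:-0}   # from /ubootenv on partition #3
rescue=${rescue:-0}             # from /ubootenv on partition #3
max_count=3                     # nexus_max_count

# nexus_set_root: choose which root fs to use
if [ "$saved_entry" -eq 1 ]; then root_part=5; else root_part=4; fi

# nexus_can_boot: fall back to rescue after 3 failed attempts,
# or when the rescue flag has been set
if [ "$boot_count" -ge "$max_count" ] || [ "$rescue" -eq 1 ]; then
    echo "Booting rescue system: kernel from /rescue on partition 2"
else
    boot_count=$((boot_count + 1))   # nexus_inc_count (nexus_save_flags writes it back)
    echo "Booting normal system: kernel from /boot on partition $root_part"
fi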

Booting Linux

The next step is to follow the instructions in ‘Loading Debian installer’. The installer boots with a different IP address from the stock firmware (due to using a random MAC address). I found the IP it was using, and connected using SSH.

$ ssh installer@192.168.1.101

I saw the familiar Debian installer dialog with the message “This is the network console for the Debian installer”. I could see from the menu bar at the top that it uses Screen, which means it is possible to switch between virtual terminals without launching a new SSH session (and installer).

Initially I just tried switching to an interactive shell, but I discovered that at this point the hard disk had not been detected, and there were insufficient tools for the purposes of investigation. So I chose to use the Debian installer, but abandon the installation before any changes were made to the disk.

I selected “Start installer (expert mode)”, configured the mirror, and selected ‘buster’ as the Debian version. I then selected “Download installer components”, and selected the following components:

  • fdisk-udeb
  • lvmcfg
  • mdcfg

After that completed, I skipped down to “Detect disks” (leaving ‘usb-storage’ selected). It was now time for investigation, so I switched to one of the shell terminals using “Ctrl-a 2” (I could also have selected “Execute a shell”).


Partitions (I)

I used ‘fdisk’ to check that the disk had been detected, and to see what the partitions were:

$ fdisk -l
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

Device       Start        End    Sectors  Size Type
/dev/sda1     2048       4095       2048    1M BIOS boot
/dev/sda2     4096     397311     393216  192M Linux filesystem
/dev/sda3   397312     399359       2048    1M Linux filesystem
/dev/sda4   399360    3545087    3145728  1.5G Linux RAID
/dev/sda5  3545088    6690815    3145728  1.5G Linux RAID
/dev/sda6  6690816    8787967    2097152    1G Linux RAID
/dev/sda7  8787968    9836543    1048576  512M Linux RAID
/dev/sda8  9836544 7814035455 7804198912  3.6T Linux RAID

We see that, in addition to the partitions referenced in the U-Boot configuration, we have partitions #1, #6, #7 and #8.

The disk uses GPT, and has a standard protective MBR partition.

$ fdisk -l -t dos
...
Device     Boot Start        End    Sectors Size Id Type
/dev/sda1           1 4294967295 4294967295   2T ee GPT

To examine the contents of a partition, I would create a directory as a mount-point, and then mount the filesystem read-only. For example:

~ # mkdir -p /mnt/sda2
~ # mount -r -t ext4 /dev/sda2 /mnt/sda2

I started by looking at the contents of partition ‘sda2’, which appeared to be the boot and rescue partition.

~ # ls -l /mnt/sda2
drwx------    2 root     root         12288 Nov 28  2014 lost+found
drwxr-xr-x    2 root     root          1024 Jul 13  2017 rescue
~ # ls -l /mnt/sda2/rescue
-rw-r--r--    1 root     root         11059 Oct 10  2014 armada-370-n090102.dtb
-rw-r--r--    1 root     root         11051 Oct 10  2014 armada-370-n090103.dtb
-rw-r--r--    1 root     root         12101 Oct 10  2014 armada-370-n090201.dtb
-rw-r--r--    1 root     root         11231 Oct 10  2014 armada-370-n090203.dtb
-rw-r--r--    1 root     root         13312 Oct 10  2014 armada-370-n090401.dtb
-rw-r--r--    1 root     root           669 Oct 10  2014 description.xml
-rw-r--r--    1 root     root           379 Jul 13  2017 nexus.map
-rw-r--r--    1 root     root      90198740 Jul 13  2017 product_image.tar.lzma
-rw-r--r--    1 root     root         11051 Oct 10  2014 target.dtb
lrwxrwxrwx    1 root     root            17 Nov 28  2014 uImage -> uImage_1.5.16-arm
-rw-r--r--    1 root     root      16113832 Oct 10  2014 uImage_1.5.16-arm
-rw-r--r--    1 root     root            48 Jul 13  2017 versions

This had the expected boot files (as referenced from the U-Boot configuration), but also the files description.xml, nexus.map, product_image.tar.lzma, and versions.

Three of these were small text files that I could view directly. The contents of the compressed tar file would have to wait.
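
Listing the archive later should be simple enough once it is copied to a machine with a full toolset; with GNU tar, something along the lines of:

$ tar --lzma -tvf product_image.tar.lzma | head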

The file description.xml contains information consistent with the hardware. The hardware id matches the value of the U-Boot setting board_dtb, and the SHA1 checksum matches the uImage file.

<rescue>
    <xml_version>1.0</xml_version>
    <device>
        <hardware>
            <arch>arm</arch>
            <subarch>armada</subarch>
            <id>n090103</id>
            <revision>1.0</revision>
        </hardware>
        <product_id>cumulus</product_id>
        <product_name>'Cumulus'</product_name>
        <vendor_id></vendor_id>
        <vendor_name>'Seagate'</vendor_name>
        <vendor_custom_id></vendor_custom_id>
    </device>
    <release_date>2014-10-10</release_date>
    <main>
        <rescue_version>1.5.16</rescue_version>
        <file>uImage</file>
        <sha1>a386942e67a508b74ac6a2b77a0043449ab7904c</sha1>
    </main>
</rescue>
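
Verifying that checksum from the installer shell is straightforward, assuming the busybox build includes the sha1sum applet; the output should match the <sha1> element above.

~ # sha1sum /mnt/sda2/rescue/uImage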

The file nexus.map is a text representation of the partition table. This gives us clues about the newly discovered partitions. We have a BIOS boot partition in sda1, a ‘var’ partition in sda6, swap in sda7, and user data in sda8.

# npart name        size(MiB) type  fstype     fsoption
  1     grub_core   1         ef02 
  2     boot_rescue 192       8300  ext2
  3     nv_data     1         8300  ext2
  4     root_1      1536      fd00  ext2
  5     root_2      1536      fd00  ext2
  6     var         1024      fd00  ext3
  7     swap        512       fd00  linux-swap
  8     user_data   0         fd00
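
As a sanity check, these sizes line up with the fdisk output; for example, sda2’s 393216 sectors of 512 bytes are exactly the 192 MiB declared for boot_rescue:

~ # echo $(( 393216 * 512 / 1024 / 1024 ))
192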

The file ‘versions’ looks straightforward.

SOFTWARE_VERSION=4.3.15.1
RESCUE_VERSION=1.5.16

The partition ‘sda3’ is expected to hold non-volatile information used by U-Boot.

~ # ls -l /mnt/sda3
drwx------    2 root     root         12288 Nov 28  2014 lost+found
-rw-r--r--    1 root     root            32 Dec 27 11:42 ubootenv
-rw-r--r--    1 root     root           100 Nov 28  2014 uuid.cfg

Looking at the ubootenv file, I can see the strings boot_count and saved_entry, which match the references in the printenv output above.
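
The file is only 32 bytes, so it can simply be dumped to the terminal; either of the following will do (the second assumes the busybox build includes hexdump):

~ # cat /mnt/sda3/ubootenv
~ # hexdump -C /mnt/sda3/ubootenv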

In addition, the file uuid.cfg holds the UUIDs for two file-systems. I suspect that these relate to the root file systems on sda4 and sda5.

rootfs1_uuid=c195e11f-ef59-4e64-984a-c18ab1ed7d0a
rootfs2_uuid=722a8e00-5e51-4f95-9571-b8dc53128040

RAID on a single disk?

It was time to investigate the partitions marked as RAID.

~ # mdadm --assemble --scan
mdadm: /dev/md/vg:8 has been started with 1 drive.
mdadm: /dev/md/(none):7 has been started with 1 drive.
mdadm: /dev/md/(none):6 has been started with 1 drive.
mdadm: /dev/md/(none):5 has been started with 1 drive.
mdadm: /dev/md/(none):4 has been started with 1 drive.

Following this, the contents of /proc/mdstat were:

Personalities : [raid1] 
md123 : active raid1 sda4[0]
      1572852 blocks super 1.0 [1/1] [U]
      
md124 : active raid1 sda5[0]
      1572852 blocks super 1.0 [1/1] [U]
      
md125 : active raid1 sda6[0]
      1048564 blocks super 1.0 [1/1] [U]
      
md126 : active raid1 sda7[0]
      524276 blocks super 1.0 [1/1] [U]
      
md127 : active raid1 sda8[0]
      3902099320 blocks super 1.0 [1/1] [U]
      
unused devices: <none>
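
More detail on any individual array is available with mdadm, for example:

~ # mdadm --detail /dev/md127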

So each of the partitions is a RAID-1 mirror containing only a single drive. I wasn’t even aware you could do that.

The Seagate Personal Cloud also comes in a 2-bay version, so I suspect the use of RAID is to allow the same firmware to be used in both variants.
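
For the curious, mdadm will create such a degenerate mirror if you insist with --force. A sketch, to be run against a spare partition (here the hypothetical /dev/sdX1) and certainly not against the NAS disk:

~ # mdadm --create /dev/md0 --level=1 --raid-devices=1 --force /dev/sdX1

A second disk can later be joined with mdadm --add followed by mdadm --grow --raid-devices=2, which fits neatly with the theory that the same firmware serves the 2-bay model.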


Partitions (II)

With the RAID partitions activated, it is possible to dig a little deeper.

~ # blkid | grep /md
/dev/md127: UUID="S1IcAG-J4rq-lNuG-j9pO-2qdS-0Kxb-k3Fb1F" TYPE="LVM2_member"
/dev/md126: TYPE="swap"
/dev/md125: UUID="5c5ed39e-8759-407b-921c-3000e9cf35d3" SEC_TYPE="ext2" TYPE="ext3"
/dev/md124: UUID="722a8e00-5e51-4f95-9571-b8dc53128040" TYPE="ext2"
/dev/md123: UUID="c195e11f-ef59-4e64-984a-c18ab1ed7d0a" TYPE="ext2"

We have further confirmation from the UUIDs that md123 (sda4) and md124 (sda5) are the two 1.5 GiB root partitions, that md125 (sda6) is the 1 GiB ext3 ‘var’ partition, and that md126 (sda7) is the 512 MiB swap partition.

Looking at the root partitions:

~ # ls -l /mnt/md123
drwxr-xr-x    2 root     root          4096 Mar 24  2018 bin
drwxr-xr-x    2 root     root          4096 Mar 24  2018 boot
drwxrwxr-x    2 root     root          4096 Mar 24  2018 dev
drwxr-xr-x   57 root     root          4096 Mar 24  2018 etc
drwxr-xr-x    2 root     100           4096 Nov 17  2017 home
drwxr-xr-x    4 root     root          4096 Mar 24  2018 lacie
drwxr-xr-x    6 root     root          4096 Mar 24  2018 lib
lrwxrwxrwx    1 root     root            11 Mar 24  2018 linuxrc -> bin/busybox
drwx------    2 root     root         16384 Mar 24  2018 lost+found
drwxr-xr-x    2 root     root          4096 Nov 17  2017 media
drwxr-xr-x    2 root     root          4096 Nov 17  2017 mnt
drwxr-xr-x    2 root     root          4096 Nov 17  2017 opt
drwxr-xr-x    2 root     root          4096 Nov 17  2017 proc
drwxr-x---    2 root     root          4096 Mar 24  2018 root
lrwxrwxrwx    1 root     root             8 Mar 24  2018 run -> /var/run
drwxr-xr-x    2 root     root          4096 Mar 24  2018 rw
drwxr-xr-x    2 root     root          4096 Mar 24  2018 sbin
drwxr-xr-x    2 root     root          4096 Mar 24  2018 shares
drwxr-xr-x    2 root     root          4096 Nov 17  2017 sys
drwxrwxrwt    2 root     root          4096 Mar 24  2018 tmp
drwxr-xr-x   11 root     root          4096 Mar 24  2018 usr
drwxr-xr-x   16 root     root          4096 Mar 24  2018 var
drwxr-xr-x   12 root     root          4096 Mar 24  2018 www
~ # ls -l /mnt/md124
drwxr-xr-x    2 root     root          4096 Dec 27 11:22 bin
drwxr-xr-x    2 root     root          4096 Dec 27 11:22 boot
drwxrwxr-x    2 root     root          4096 Dec 27 11:22 dev
drwxr-xr-x   59 root     root          4096 Dec 27 11:22 etc
drwxr-xr-x    2 root     100           4096 Jan 17  2019 home
drwxr-xr-x    4 root     root          4096 Dec 27 11:22 lacie
drwxr-xr-x    7 root     root          4096 Dec 27 11:22 lib
lrwxrwxrwx    1 root     root            11 Dec 27 11:22 linuxrc -> bin/busybox
drwx------    2 root     root         16384 Dec 27 11:21 lost+found
drwxr-xr-x    2 root     root          4096 Jan 17  2019 media
drwxr-xr-x    2 root     root          4096 Jan 17  2019 mnt
drwxr-xr-x    2 root     root          4096 Jan 17  2019 opt
drwxr-xr-x    2 root     root          4096 Jan 17  2019 proc
drwxr-x---    2 root     root          4096 Dec 27 11:22 root
lrwxrwxrwx    1 root     root             8 Dec 27 11:22 run -> /var/run
drwxr-xr-x    2 root     root          4096 Dec 27 11:23 rw
drwxr-xr-x    2 root     root          4096 Dec 27 11:22 sbin
drwxr-xr-x    2 root     root          4096 Dec 27 11:23 shares
drwxr-xr-x    2 root     root          4096 Jan 17  2019 sys
drwxrwxrwt    2 root     root          4096 Dec 27 11:22 tmp
drwxr-xr-x   11 root     root          4096 Dec 27 11:22 usr
drwxr-xr-x   16 root     root          4096 Dec 27 11:22 var
drwxr-xr-x   12 root     root          4096 Dec 27 11:22 www

When I first investigated in 2018, the contents of md124 had timestamps from 2000, so it is clear that a more recent update has switched the active root partition to md124.

Looking at the 1024 MiB ‘var’ partition md125 (sda6), I see there are two sub-directories, 0 and 1, which, judging by the contents of var/log within them, correspond to the two values of saved_entry, and therefore to the two root partitions.

~ # ls -l /mnt/md125
drwxr-xr-x    5 root     root          4096 Mar 24  2018 0
drwxr-xr-x    5 root     root          4096 Dec 27 11:22 1
drwxr-xr-x    2 root     root          4096 Oct 28  2017 default_apps
drwx------    2 root     root         16384 Nov 28  2014 lost+found
drwxr-xr-x    2 root     root          4096 Dec 27 11:22 tmp

The 0/1 directories each contain etc, root and var sub-directories. The var directory contains the standard layout for /var. The root directory looks like a home directory, so is probably mounted as /root. The etc directory looks to be a copy of /etc, with a few recent timestamps; I expect this is the working copy, and is mounted on /etc.


Partitions (III)

The final partition, md127 (sda8), is a 3.6 TiB LVM member, which corresponds to the size of the data volume the device reports.

To activate the lvmcfg component, I switched back to the installer, and selected “Partition Disks”. I then used “Go Back” to return to the main menu without making changes.

I can now see the LVM volume group.

~ # vgs
  VG #PV #LV #SN Attr   VSize VFree
  vg   1   1   0 wz--n- 3.63t    0
~ # blkid | grep mapper
/dev/mapper/vg-lv: UUID="a5e35600-f588-4e42-8983-5f10a4947a2c" TYPE="ext4"
~ # ls -l /mnt/vg-lv
drwxr-xr-x    2 root     root          4096 Dec 27  2020 afp_db
drwxr-xr-x    2 root     root          4096 Jan  1  2010 autoupdate
drwx------    2 root     root         16384 Jan  1  2010 lost+found
drwxr-xr-x    4 root     root          4096 Mar 24  2018 rainbow
drwxr-xr-x    2 root     root          4096 Jan  1  2010 reserved
drwxr-xr-x    4 root     root          4096 Mar 24  2018 shares
drwxr-xr-x    2 root     root          4096 Dec 27  2020 tmp
drwxr-xr-x    2 root     root          4096 Jan  1  2010 torrent_dir
drwxr-xr-x    5 root     root          4096 Mar 24  2018 var
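
For reference, getting from the detected volume group to the listing above follows the same read-only mount pattern as before (the volume group may already have been activated by the installer component):

~ # vgchange -ay vg
~ # mkdir -p /mnt/vg-lv
~ # mount -r -t ext4 /dev/mapper/vg-lv /mnt/vg-lv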

It appears that although the data partition uses LVM, there is only a single logical volume, and it spans the entire space. I conclude that content location and access are managed by the Personal Cloud software itself.


In Part 2 I’ll pull data off the device so that I can inspect it with a full toolset. I’ll also create disk images to allow the Seagate NAS firmware to be reinstalled if needed.