• 1 Post
  • 50 Comments
Joined 1 year ago
Cake day: June 15th, 2023




  • I would heavily suggest not doing this. HDDs are significantly more reliable than flash storage when it comes to long-term, power-off data retention. Period.

    There's a little-known fact about SSDs and flash storage: they aren't actually rated to sit unpowered with data on them for very long. The charge stored in the cells leaks away over time, and the data is slowly lost if the drive is never powered on. The enterprise SSDs I work on are rated for three months - as in, set one on a shelf for three months without power, and after that it isn't guaranteed that all of your data will still be there. And that's for ultra-redundant, enterprise SAS SSDs; MicroSD cards don't have any of that redundancy.

    (And yes, this implies that leaving a bunch of important flash drives in a safe for ten years is not a great idea. It's unlikely that you'll experience data loss, but it's more likely than with an HDD. If you archive to flash anyway, see the checksum sketch below.)
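    If you do keep archives on flash, powering the drives on every few months and verifying checksums catches silent degradation early. A minimal sketch using standard coreutils - the mount point and manifest path are illustrative:

      # Build a checksum manifest once, right after writing the archive
      cd /mnt/archive && find . -type f -exec sha256sum {} + > ~/archive-manifest.sha256

      # On each periodic power-on: mount the drive and verify everything still matches
      cd /mnt/archive && sha256sum --check --quiet ~/archive-manifest.sha256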

  • We need more info about the drive. Is it new? If so, absolutely RMA it - it just isn't worth the headache, even if the self-test reports fine. If it isn't new: how many power-on hours? Reads? Writes? Any failures reported by SMART? Et cetera. (A sketch of the commands for pulling all of this is below.)

    The more info, the better. I work in SSD failure analysis/firmware development.
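    On Linux, smartmontools can dump all of that in one shot. A minimal sketch - replace /dev/sda with your device (recent smartctl versions handle NVMe devices such as /dev/nvme0 as well):

      # Extended dump: power-on hours, host reads/writes, error logs, self-test history
      sudo smartctl -x /dev/sda

      # Kick off a short self-test, then re-read the results a few minutes later
      sudo smartctl -t short /dev/sda
      sudo smartctl -a /dev/sda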



  • Doombot1@beehaw.org to Linux@lemmy.ml · *Permanently Deleted* · 1 year ago

    Great explanation. Yes - I've done this before! Built up a system with a RAID array, but then realized I wanted a different boot drive. Didn't really want to wait for dual 15 TB arrays to rebuild - and luckily for me, I didn't have to, because the metadata is saved on the disks themselves. If I had to guess (I could be wrong, though), I believe 'sudo mdadm --examine --scan', or something similar, should bring up some info about the disks.
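    For reference, a hedged sketch of the relevant mdadm commands - device names are illustrative:

      # Print array definitions reconstructed from the superblocks on the member disks
      sudo mdadm --examine --scan

      # Inspect the RAID metadata stored on a single member device
      sudo mdadm --examine /dev/sdb1

      # Reassemble every array found via that on-disk metadata (no rebuild required)
      sudo mdadm --assemble --scan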