I have quite an extensive collection of media that my server makes available through different means (Jellyfin and NFS, mostly). One of my hard drives has some concerning SMART values, so I want to replace it. What are good hard drives to buy today? Are there any important tech specs to look out for? In the past I didn’t give this too much attention and it didn’t bite me, yet. But if I’m gonna buy a new drive now, I might as well…

I’m looking for something from 4TB upwards. I think I remember that drives with very high capacity are more likely to fail sooner - is that correct? How about different brands - do any have a particularly good or bad reputation?

Thanks for any hints!

  • Avid Amoeba@lemmy.ca · 2 months ago

    Buy recertified enterprise-grade disks from https://serverpartdeals.com. Prices were around $160/16TB the last time I checked. Mix brands and models to reduce the risk of simultaneous failure. Use more than one disk of redundancy. If you can’t buy from SPD, either find an alternative or buy external drives and shuck them. Use ZFS so you know whether your data is correct. I’ve been dealing with funny AMD USB controllers recently, and the amount of silent data corruption I’d have gotten if not for ZFS is ridiculous.
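
    For example, a periodic scrub is what surfaces (and, where there’s redundancy, repairs) that corruption. A minimal sketch, assuming a pool named tank:

    ```sh
    # Read all data and verify it against checksums; repairs from redundancy
    zpool scrub tank

    # Afterwards, check the READ/WRITE/CKSUM counters and any affected files
    zpool status -v tank
    ```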

    • Loulou@lemmy.mindoki.com · 2 months ago

      This is incredible!

      American sites like this so rarely ship to France, or it costs a literal fortune just in shipping. Here it’s €130 for a 12TB, shipping included!

      Wow.

      I Do Not Need A 12TB Hard drive.

      I Do Not Need a 12 TB Hard drive!

      I mean or do I?

      Thanks 💖

        • TheHolm@aussie.zone · 2 months ago

          I would not trust these kinds of drives in a mirror. IMHO RAID6 is the only way.

          • Avid Amoeba@lemmy.ca · 2 months ago

            Due to risk of failure or risk of data corruption because the mirror can’t tell which drive is right when there’s a difference?

            • TheHolm@aussie.zone · 2 months ago

              A ZFS or BTRFS mirror will know which side is at fault thanks to checksums. I’m more concerned about simultaneous failures of two disks. Rebuilding a RAID puts lots of pressure on the remaining disks, so the probability that a remaining one dies too is much higher. With RAID6, three disks need to die to lose data, which is less likely but not impossible.

            • turmacar@lemmy.world · 2 months ago

              The second one.

              Mirroring is good for speed, but a storage mechanism with parity checks will always be more recoverable. And you will have far more storage available.

              • Avid Amoeba@lemmy.ca · 2 months ago

                I think data checksums allow ZFS to tell which disk has the correct data when there’s a mismatch in a mirror, eliminating the need for a 3-way mirror to deal with bit flips and such. A traditional mirror like mdraid would need 3 disks to do this.

    • pedroapero@lemmy.ml · 2 months ago

      I use BTRFS for the same reason. Being able to check for and repair silent corruption is a must (and this works without needing to read the whole drive, only the actual files). I’ve had a lot of corruption over the years, including (but not only) because of a cheap USB controller.
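
      For reference, the check I run looks something like this (the mount point is just an example):

      ```sh
      # Read all data and verify checksums; repairs from a redundant copy where one exists
      btrfs scrub start /mnt/media

      # Review progress and totals, including corrected and uncorrectable errors
      btrfs scrub status /mnt/media
      ```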

    • Pacmanlives@lemmy.world · 2 months ago

      Holy cow, these are way cheaper than anything I have seen before. I am in a RAID 5 setup, so I am okay if a disk dies.

      • Avid Amoeba@lemmy.ca · 2 months ago

        If you can, move to a RAID-equivalent setup with ZFS (preferred in my opinion) in order to also know about and fix silent data corruption. RAIDz1 and RAIDz2 are the equivalents of RAID5 and RAID6. That should eliminate one more variable with cheap drives.
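
        A minimal sketch of the RAID6 equivalent (the pool name and device paths are placeholders; stable /dev/disk/by-id paths are preferable to /dev/sdX):

        ```sh
        # RAIDz2: any two disks in the vdev can fail without data loss
        zpool create tank raidz2 \
          /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
          /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
        ```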

        • Pacmanlives@lemmy.world · 2 months ago

          ZFS is a no-go for me due to not being able to add a larger disk and then expand my pool size on the fly. mdadm and LVM+XFS have treated me well the past few years. I started with a 12TB pool and am now at over 50TB.
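
          For reference, the on-the-fly grow path looks roughly like this (device and volume names are placeholders):

          ```sh
          # After replacing every member with a bigger disk:
          mdadm --grow /dev/md0 --size=max     # let the array use the new capacity
          pvresize /dev/md0                    # grow the LVM physical volume
          lvextend -l +100%FREE /dev/vg0/data  # grow the logical volume
          xfs_growfs /mnt/data                 # XFS grows online, while mounted
          ```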

          • Avid Amoeba@lemmy.ca · 2 months ago

            Not that I want to push ZFS or anything - mdraid/LVM/XFS is a fine setup - but for informational purposes: ZFS can absolutely expand onto larger disks. I wasn’t aware of this until recently. If all the disks of an existing pool get replaced with larger disks, the pool can expand onto the newly available space. E.g. a RAIDz1 with 4x 4T disks has 12T of usable space. Replace all disks with 8T ones (one after another, so that it can be done on the fly) and your pool will have 24T of space. Replace those with 16T and you get 48T, and so on. In addition, you can expand a pool by adding another redundant topology, just like you can with LVM and mdraid. E.g. 4x 4T RAIDz1 + 3x 8T RAIDz2 + 2x 16T mirror, for a total of 36T. Finally, expanding an existing RAIDz with additional disks has recently landed too.
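
            In zpool terms, the disk-swap route is something like this (pool and disk names are placeholders):

            ```sh
            zpool set autoexpand=on tank
            # Swap disks one at a time, letting each resilver finish first
            zpool replace tank ata-OLD1 ata-NEW1
            # On recent OpenZFS, a RAIDz vdev can also gain a disk in place
            zpool attach tank raidz1-0 ata-NEW5
            ```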

            And now for pushing ZFS - I was doing file-based replication on a large dataset for many years. Just going over all the hundreds of thousands of dirs and files took over an hour on my setup. That’s then followed by a diff transfer. Think rsync or Syncthing. That’s how I did it on my old mdraid/LVM/Ext4 setup, and that’s how I continued doing it on my newer ZFS setup. Recently I tried using ZFS send/receive, which operates within the filesystem. It completely eliminated the dataset file walk and stat phase, since the filesystem already knows all of the metadata. The replication was reduced to just the diff file transfer time. What used to take over an hour got reduced to seconds or minutes, depending on the size of the changed data. I can now do multiple replications per hour without significant load on the system. Previously it was only feasible overnight because the system would be robbed of IOPS for over an hour.
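
            The replication itself boils down to a snapshot plus an incremental send (dataset names and the target host are placeholders):

            ```sh
            zfs snapshot tank/media@today
            # Send only the delta since the previous snapshot into the backup pool
            zfs send -i tank/media@yesterday tank/media@today | \
              ssh backuphost zfs receive backup/media
            ```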

            • Pacmanlives@lemmy.world · 2 months ago

              I wonder if that’s a new feature. IIRC the issue was with vdevs in ZFS pool expansion. I am a FreeBSD user and do have some jails running. I do like ZFS a lot; it’s way more mature than BTRFS on Linux.

    • yggstyle@lemmy.world · 2 months ago

      This. They provide outstanding insights, and the articles they publish alongside the data are quite good.

  • SomeoneSomewhere@lemmy.nz · 2 months ago

    Any hard drive can fail at any time with or without warning. Worrying too much about individual drive families’ reliability isn’t worth it if you’re dealing with few drives. Worry instead about backups and recovery plans in case it does happen.

    Bigger drives have significantly lower power usage per TB, and cost per TB is lowest around 12-16TB. Bigger drives also let you fit more storage in a given box. Drives 12TB and up are currently all helium-filled, which makes them run significantly cooler.

    Two preferred options in the data hoarder communities are shucking (external drives are cheaper than internal, so remove the case) and buying refurb or grey market drives from vendors like Server Supply or Water Panther. In both cases, the savings are usually big enough that you can simply buy an extra drive to make up for any loss of warranty.

    Under US$15/TB is typically a ‘good’ price.

    For media serving and deep storage, HDDs are still fine and cheap. For general file storage, consider SSDs to improve IOPS.

  • schizo@forum.uncomfortable.business · 2 months ago

    I’d like to second the ‘manufacturer doesn’t matter, all drives are going to fail’ line, but specific models from a manufacturer can have a much higher failure rate than others.

    Backblaze, for example, publishes quarterly(ish?) stats showing the drives with the highest failure rates in percentage terms, so you can get a decent view of whether there’s a specific drive model you should avoid.

    Or just buy an actual enterprise drive, avoid SMR, and have backups; that’s also a sane approach.

    • roofuskit@lemmy.world · 2 months ago

      Some manufacturers have lower failure rates overall. But yes, you do have to mind the specific model.

    • sugar_in_your_tea@sh.itjust.works · 2 months ago

      Do be aware that Backblaze drive access patterns will probably be quite different from yours. So if there’s a really good deal on something with a bit higher failure rate, but your usage pattern is pretty tame, it may be worth taking the gamble.

    • anamethatisnt@lemmy.world · 2 months ago

      Interesting that Toshiba/Seagate have the best 16TB stats and WDC comparatively bad ones, but for 14TB it’s reversed. My homelab disks apparently have a 0.71% risk of dying after 22 months (Seagate Exos X16 ST16000NM001G).
      edit: WDC does well in 16TB too; their only outlier there could be due to the low number of disks in the drive count. And the same is true when checking the total number of disks for 14TB.

      • roofuskit@lemmy.world · 2 months ago

        Those 14TB WD drives are workhorses. I run refurbished ones in my home server and have never had any issues. And they are significantly faster than the rest of my spinning rust drives.

  • walden@sub.wetshaving.social · 2 months ago

    There are two recording types, CMR and SMR. You can read online about the differences. CMR is better because SMR tries to be all fancy in order to increase capacity, but at the cost of speed and data integrity.

    It won’t be front and center in the specs of a particular drive, but you usually find the info somewhere.

    I wouldn’t worry about higher capacity failing sooner. If you have 10x 4TB vs 2x 20TB, that’s 5x as many drives that can go bad. So a 20TB drive would need a 5x worse failure rate to be considered worse. A pro of larger (and thus fewer) drives is lower power consumption; 5-10 watts per drive doesn’t sound like much, but it adds up.

  • tobogganablaze@lemmus.org · 2 months ago

    After two WD drives failed in my old NAS, I switched to all Seagate on my next build. Currently running 9x 20TB Exos X20, though only for about a year now, so no issues should be expected yet.

    I think the most important thing is to pick a drive that is meant for NAS/server use (so rated for running 24/7). Having a manufacturer warranty is also nice. My Seagate drives have 60 months (considerably more than the 36 months my WD drives had).

    • Ryan (OP) · 2 months ago

      My currently failing drive is a WD as well… 🥴 I bought it a year ago, I think…

      • Avid Amoeba@lemmy.ca · 2 months ago

        Switching wholesale from one brand or model to another could be counterproductive. There are a myriad of reasons why drives can fail that aren’t related to the brand or the model. What if you unknowingly switch to a less reliable model for such a reason? You’d end up worse off. For example, according to Backblaze’s data, Seagate is generally worse than WD.

        A better way to do this is to mix brands and models, so that drives are less likely to fail at the same time. I have both WD and Seagate in a single storage pool, even though the Seagate model is objectively less reliable according to Backblaze.

  • Max-P@lemmy.max-p.me · 2 months ago

    I’ve heard very good things about resold HGST helium enterprise drives; they can be found fairly cheap for what they are on eBay.

    I’m looking for something from 4TB upwards. I think I remember that drives with very high capacity are more likely to fail sooner - is that correct?

    4TB isn’t even close to “very high capacity” these days. There are 32TB HDDs out there; just avoid the shingled archival drives. I believe the concern about higher-capacity drives is really a question of the maturity of the technology rather than the capacity itself. 4TB drives made today are much better than the very first 4TB drives made long ago, when they were pushing the limits of the technology.

    Backblaze has pretty good drive reviews as well, with real world failure rate data and all.

    • gm0n3y@lemm.ee · 2 months ago

      I run only used HGST. I have 6x 3TB drives that are all at 90k+ hours, and I recently expanded with some new-to-me 12TB HGSTs. I always do a badblocks test when I get a drive, which took 4 days on the 12TBs. One of them failed, so I returned it to Amazon; they shipped another and the replacement was perfect. If they package it poorly, just return it right away and choose a different distributor.
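
      The write-mode run I mean is roughly this (it wipes the drive, so only use it on empty disks; the device path is a placeholder):

      ```sh
      # Four write-and-verify passes over the whole surface; -b 4096 matches modern sector sizes
      badblocks -b 4096 -wsv /dev/sdX
      ```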

  • Ugurcan@lemmy.world · 2 months ago

    One thing no one will tell you is HOW LOUD some HDDs can get under load. You may not want any of those disks around if you’re keeping your server in your living space.

    Just check dB values in the spec sheets.

    • Ryan (OP) · 2 months ago

      That’s a good hint, although I personally wouldn’t mind too much. My server is located in the basement.

    • yonder@sh.itjust.works · 2 months ago

      Depending on the use case, you may be able to spin them down when not in use, but that’s not always possible for some applications.
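
      For example with hdparm (the device path is a placeholder; -S 241 maps to a 30-minute idle timeout):

      ```sh
      # Spin down after 30 minutes of inactivity
      hdparm -S 241 /dev/sdX
      # Check the power state without waking the drive
      hdparm -C /dev/sdX
      ```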

  • Appoxo@lemmy.dbzer0.com · 2 months ago

    The last drives I bought are the Toshiba N300 15TB helium drives.
    I haven’t written much to them yet, but they were cheap and seem quiet enough to have around in my room (where I also sleep).

      • Appoxo@lemmy.dbzer0.com · 2 months ago

        I have, and while they sure are loud, dampening the NAS with foam tape (I had some double-adhesive tape from buying LED strips lying around) quietened it enough to be manageable.

  • SaintWacko@slrpnk.net · 2 months ago

    I use Seagate IronWolf 4TB drives in mine. Bought them all used, $50-60 each. Check eBay and Facebook Marketplace.

  • Nibodhika@lemmy.world · 2 months ago

    One important thing: ensure the drive is CMR. The reason is that you likely want a RAID, and non-CMR disks take so long to read in full that the chance of a second failure while recovering from a disk failure is significant.

    That being said, how are you keeping track of the disks’ state? I built my RAID recently, and your post made me realize that I have nothing to notify me if one of the disks shows early signs of problems.

    • DeathByDenim@lemmy.world · 2 months ago

      I just use the built-in email function that comes with mdadm. If a drive fails, I’ll know right away and can replace it with a spare. You do need your server to be able to send emails, with something like Postfix.
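
      A minimal sketch, assuming a Debian-style /etc/mdadm/mdadm.conf and working outbound mail:

      ```sh
      # /etc/mdadm/mdadm.conf
      MAILADDR you@example.com

      # Most distros run this as a service; it mails on Fail/DegradedArray events
      mdadm --monitor --scan --daemonise
      ```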

      If you have hardware RAID, there’s often a monitoring tool that comes with it, or at the very least a command-line utility that can report the RAID state, which you can then use in a script.

    • Ryan (OP) · 2 months ago

      I don’t keep track actively. I noticed problems when reading a file and only then looked at the drive with smartctl. Does anybody know how to keep track actively?

  • user68k@wired.bluemarch.art · 2 months ago

    At home I use two Toshiba MG09ACA18TE’s and they work like a charm. I bought them at around US$20/TB, which was the best price/TB offer at the time.

    At work we use Exos X18s and X20s without any problems at all.

  • geography082@lemm.ee · 2 months ago

    I have an external USB HDD, a WD Passport 3TB from 10 years ago (healthy), connected to a Chinese N100 mini PC. It runs Proxmox with 5 LXC containers and 30 Docker containers running apps, Plex, and Calibre-Web.