So, I am thinking about getting myself a NAS to host mainly Immich and Plex. Got a couple of questions for the experienced folk:

  • Is Synology the best/easiest way to start? If not, what are the closest alternatives?
  • What OS should I go for? OMV, Synology’s OS, or UNRAID?
  • Mainly gonna host Plex/Jellyfin and Synology Photos/Immich - haven’t quite decided which solutions to go for.

Appreciate any tips ✨

  • Synapse@lemmy.world · 1 year ago

    If you want a “set up and forget” type of experience, Synology will serve you well, if you can afford it. If you are more of a tinkerer and see yourself experimenting and upgrading in the future, then I recommend a custom build. OMV is a solid OS for a novice, but any Linux distro you fancy can do the job very well!

    I started my NAS journey with a very humble 1-bay Synology. For the last few years I’ve been using a custom-built ARM NAS (NanoPi M4V2) with 4 bays, running Armbian. All my services run in Docker - I have Jellyfin, the *arr stack, Bitwarden and several other services running very reliably.

    • redballooon@lemm.ee · 1 year ago

      And if you’re not sure how much tinkering you want to do, a Synology with Docker support is a good option.
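
      Getting something like Jellyfin going that way is a single command - just a sketch, with the volume paths as examples you’d point at your own shared folders:

          docker run -d --name jellyfin \
            --restart unless-stopped \
            -p 8096:8096 \
            -v /volume1/docker/jellyfin/config:/config \
            -v /volume1/video:/media:ro \
            jellyfin/jellyfin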

    • entropicdrift@lemmy.sdf.org · 1 year ago

      ^ This. I have an M1 Mac mini running Asahi Linux with a bunch of Docker containers and it works great. I run Jellyfin off a separate stick PC with an Intel Celeron and Ubuntu MATE on it. Basically I just keep docker compose files on those two machines and occasionally SSH in from my phone to run sudo apt update && sudo apt upgrade -y (on Ubuntu) or sudo pacman -Syu (on Asahi), and then docker compose pull && docker compose up -d.
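
      As a script, that whole routine is roughly this (paths are examples - wherever your compose files actually live):

          # run over ssh every now and then
          sudo apt update && sudo apt upgrade -y    # Ubuntu box; sudo pacman -Syu on the Asahi one
          cd ~/compose && docker compose pull && docker compose up -d   # refresh the containers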

    • Scrath@feddit.de · 1 year ago

      Can definitely confirm this. I started with a Proxmox system which had a TrueNAS VM (TrueNAS just used a USB HDD for storage, though). Setting everything up and getting the permissions right so I could connect my computers was a pain in the ass.

      Later I bought a Synology and it just works. The only thing I would recommend is getting good HDDs. I bought Toshiba MG08 16TB drives and while they work great, they are obnoxiously loud during read and write operations - so loud that even though the NAS is in a separate room, I have to shut it off at night.

      Meanwhile, the Seagate IronWolf drive I used for TrueNAS sat next to my bed for multiple months and was basically silent.

  • jws_shadotak@sh.itjust.works · 1 year ago

    Synology is generally a great option if you can afford the premium.

    Unraid is a good alternative for the poor man. Check this list of cases to build in. I personally have a Fractal R5, which supports up to 13 HDDs.

    Unraid is generally better bang for your buck, imo. It’s got great support from the community.

  • talentedkiwi@sh.itjust.works · 1 year ago

    I have Proxmox on bare metal, with an HBA card passed through to TrueNAS Scale. I’ve had good luck with this setup.

    The HBA card is passed through to TrueNAS so it gets direct control of the drives for ZFS. I got mine on eBay.

    I’m running Proxmox so that I can separate some of my processes (e.g. a Plex LXC) into their own guests.
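
    The passthrough itself is only a couple of commands once IOMMU/VT-d is enabled in the BIOS and kernel - a rough sketch, with the PCI address and VM ID as examples:

        lspci -nn | grep -i sas             # find the HBA's PCI address
        qm set 100 -hostpci0 0000:01:00.0   # hand that device to the TrueNAS VM (ID 100 here)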

    • thejevans@lemmy.ml · 1 year ago

      This is a great way to set this up. I’m moving over to it in a few days. I have a temporary setup with ZFS directly on Proxmox, plus an OMV VM for handling shares, because my B450 motherboard’s IOMMU groups won’t let me pass through my GPU and an HBA to separate VMs (note for OP: if you cannot pass your HBA through to a VM, this setup is not a good idea). I ordered an ASRock X570 Phantom Gaming motherboard as a replacement ($110 on Amazon right now - a great deal) that has more separate IOMMU groups.

      My old setup was similar but used ESXi instead of Proxmox. I also went nuts and virtualized pfSense on the same PC. It was surprisingly stable, but I’m keeping my gateway on a separate PC from now on.

      • Yote.zip@pawb.social · 1 year ago

        If you can’t pass through your HBA to a VM, feel free to manage ZFS through Proxmox instead (CLI or with something like Cockpit). While TrueNAS is a nice GUI for ZFS, if it’s getting in the way you really don’t need it.

        • thejevans@lemmy.ml · 1 year ago

          TrueNAS has nice defaults for managing snapshots and the like that make it a bit safer, but yeah, as I said, I run ZFS directly on Proxmox right now.

          • Yote.zip@pawb.social · 1 year ago

            Oh, sorry - for some reason I read “OMV VM” and assumed the ZFS pool was set up there. The Cockpit ZFS Manager extension that I linked handles snapshots well too, which may be sufficient depending on how much power you need.
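
            For reference, doing it by hand on plain ZFS is also only a couple of commands (pool/dataset names are examples):

                zfs snapshot -r tank@auto-$(date +%Y%m%d)   # dated, recursive snapshot
                zfs list -t snapshot -o name,used           # review what exists
                zfs destroy tank@auto-20230801              # prune an old one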

    • InformalTrifle@lemmy.world · 1 year ago

      I’d love to find out more about this setup. Do you know of any blogs/wikis explaining that? Are you separating the storage from the compute with the HBA card?

      • Yote.zip@pawb.social · 1 year ago

        This is a fairly common setup and it’s not too complex - learning more about Proxmox and TrueNAS/ZFS individually will probably be easiest.

        Usually:

        • Proxmox on bare metal

        • TrueNAS Core/Scale in a VM

        • Pass the HBA PCI card through to TrueNAS and set up your ZFS pool there

        • If you run your app stack through Docker, set up a minimal Debian/Alpine host VM (you can technically use Docker under an LXC but experienced people keep saying it causes problems eventually and I’ll take their word for it)

        • If you run your app stack through LXCs, just set them up through Proxmox normally

        • Set up an NFS share through TrueNAS, and connect your app stack to that NFS share (client-side sketch just after this list)

        • (Optional): Just run your ZFS pool on Proxmox itself and skip TrueNAS
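
        For that NFS step, the client side on the app VM is a single fstab line - a sketch, with the hostname and paths as examples:

            # /etc/fstab on the Docker host VM (example share and mountpoint)
            truenas.lan:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0  0

        After a mount -a, the containers can just bind-mount /mnt/media.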

        • rentar42@kbin.social · 1 year ago

          So theoretically, if someone has already set up their NAS (custom Debian with ZFS root instead of TrueNAS, but that shouldn’t matter), it sounds like it should be relatively straightforward to migrate all of that into a Proxmox VM by installing Proxmox “under it”, right? The only thing I’d need right now is an SSD for Proxmox itself.

          • Yote.zip@pawb.social · 1 year ago

            Proxmox would be the host on bare metal, with your current install as a VM under that. I’m not sure how to migrate an existing bare-metal install into a VM, so it might require backing up configs and reinstalling.

            You shouldn’t need any extra hardware in theory, as Proxmox will let you split up the space on a drive to give to guest VMs.
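
            And for whole disks rather than shared space, Proxmox can also map a physical disk straight into a guest - a sketch, with the disk ID and VM ID as placeholders:

                qm set 101 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL   # give VM 101 the entire disk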

            (I’m probably misunderstanding what you’re trying to do?)

            • rentar42@kbin.social · 1 year ago

              I just thought that if all storage can easily be “passed through” to a VM then it should in theory be very simple to boot the existing installation in a VM directly.

              Regarding the extra storage: sharing disk space between Proxmox and my current installation would imply that I have to pass through “half of a drive”, which I don’t think works like that. Also, I’m using ZFS for my OS disk and I don’t feel comfortable trying to figure out if I can resize those partitions without breaking anything ;-)

              • Yote.zip@pawb.social · 1 year ago

                That should work, but I don’t have experience with it. In that case, yeah, you’d need a separate drive to store Proxmox on.

        • talentedkiwi@sh.itjust.works · 1 year ago

          This is 100% my experience and setup. (Though I run Debian for my docker VM)

          I did run Docker in an LXC but ran into some weird permission issues that shouldn’t have existed. Ran it again in a VM with the same setup and had no issues. Decided to keep it that way.

          I do run my Plex and Jellyfin in LXCs, though. No issues with that so far.

        • InformalTrifle@lemmy.world · 1 year ago

          I already run Proxmox but not TrueNAS. I’m really just confused about the HBA card. Probably a stupid question, but why can’t TrueNAS access regular drives connected to SATA?

          • Yote.zip@pawb.social · 1 year ago

            The main problem is just getting TrueNAS access to the physical disks via IOMMU groups and passthrough. HBA cards are a super easy way to get a dedicated IOMMU group that has all your drives attached, so it’s common for people to use them in these sorts of setups. If you can pull your normal SATA controller down into the TrueNAS VM without messing anything else up on the host layer, it will work the same way as an HBA card for all TrueNAS cares.

            (To my knowledge, SATA controllers are usually an all-at-once passthrough, so if your host system is running off some part of that controller, it probably won’t work to unhook it from the host and give it to the guest.)
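
            A quick way to check what shares a group before committing is the usual sysfs walk:

                # print every device by IOMMU group; whatever shares the controller's group moves with it
                for g in /sys/kernel/iommu_groups/*; do
                  echo "Group ${g##*/}:"
                  for d in "$g"/devices/*; do lspci -nns "${d##*/}"; done
                done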

  • rentar42@kbin.social · 1 year ago

    Just throwing out an option, not saying it’s the best:

    If you are comfortable with Linux (or you want to become intimately familiar with it), then you can just run your favorite distribution. Running a couple of docker containers can be done on anything easily.

    What you’re losing is usually the simple configuration GUI and some built-in features such as automatic backups. What you gain is absolute control over everything. That tradeoff is definitely not for everyone, but it’s what I picked and I’m quite happy with it.

    • Fjor@lemm.ee (OP) · 1 year ago

      Yeah, I’m already quite familiar - I’ve got a server, but I’m looking for something more premium that essentially delivers the easiest platforms for the rest of the family to use.

      • PlutoniumAcid@lemmy.world · 1 year ago

        Also, you could run Linux on a real CPU. My experience is that my DS916+ is way underpowered, even with 8 GB of memory. I use my NAS for actual storage, and an old Intel mainboard with 16 GB of RAM for the actual CPU work.

  • ebits21@lemmy.ca · 1 year ago

    My Synology NAS was super easy to set up and has been very solid. Very happy with it. I’m sure there are other solutions, though.

    • thirdBreakfast@lemmy.world · 1 year ago

      This was the route I went with when I started, and I’ve never had cause to regret it. For people near the start of their self-hosting journey, it’s the no-hassle, reliable choice.

      • Dark Arc@social.packetloss.gg · 1 year ago

        Eh… the TrueNAS UI basically takes care of any ZFS learning curve. The main thing I’d note is that RAIDZ1/RAIDZ2 (the ZFS takes on RAID 5 & 6) can’t currently be expanded incrementally. So you either need to use mirrors, configure the system upfront to be as big as you expect you’ll need for years to come, or use smaller RAIDZ1 sets of disks (e.g. create two RAIDZ1 vdevs with 3 disks each instead of one RAIDZ1 vdev with 6 disks).
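
        In zpool terms that layout would look something like this (device names are placeholders):

            zpool create tank \
              raidz1 sda sdb sdc \
              raidz1 sdd sde sdf               # two 3-disk RAIDZ1 vdevs in one pool
            zpool add tank raidz1 sdg sdh sdi   # growth happens one whole vdev at a time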

        Not sure what you’re referring to as an easy backup option that ZFS excludes, but maybe I’m just ignorant 🙂

      • rentar42@kbin.social · 1 year ago

        I agree with the learning curve (personally I found it worthwhile, but that’s subjective).

        But how does ZFS limit easy backup options? IMO it only adds options (like zfs send/receive), but any backup solution that works with other file systems should work just as well with ZFS (potentially better, since you can use snapshots to make sure any backup is internally consistent).
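
        For example (dataset and host names made up): snapshot first, then back up from the snapshot so any tool sees a frozen, consistent view - or replicate natively:

            zfs snapshot tank/photos@nightly
            tar czf /backup/photos.tgz -C /tank/photos/.zfs/snapshot/nightly .        # any file-level tool works here
            zfs send tank/photos@nightly | ssh backupbox zfs receive -F tank/photos   # optional native replication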

        • cyberpunk007@lemmy.world · 1 year ago

          Because you can’t use typical backup software. If you do it the right way, you’re using ZFS send and receive to another machine running ZFS, which significantly adds to cost.

          • rentar42@kbin.social · 1 year ago

            That’s an extremely silly reason not to use a specific tool: Tool A provides an alternative way to do X, but I want to do X with some other tool B (that’ll also work with tool A), so I won’t be using tool A.

            Send/receive may or may not be the right answer for backing up even on ZFS, depending on what exactly you want to achieve. It’s really nice when it is what you want, but it’s no panacea (and certainly no reason to avoid ZFS, since its use is 100% optional).

            • cyberpunk007@lemmy.world · 1 year ago

              I really don’t get how my reason is silly. You can’t use Acronis, Veeam, or other typical backup products with ZFS - my point is that this is a barrier to entry. And I don’t think it’s reasonable for a home user to have to build another expensive NAS just to do ZFS send and receive, which would be the proper way.

              I don’t consider backups optional.

  • pascal@lemm.ee · 1 year ago

    The most common software choices are TrueNAS and UNRAID.

    Depending on your use-case, one is better than the other:

    TrueNAS uses ZFS, which is great if you want to be absolutely sure the irreplaceable data on your disks is 100% safe, like your personal photos. UNRAID has more flexible expansion and is more power efficient, but doesn’t protect against bit flips, which is not really an issue if you only store multimedia for streaming.

    If you prefer a ready-to-use hardware solution, Synology and QNAP are great choices, so long as you remember to use ZFS (QNAP) or BTRFS (Synology) as the filesystem.

    • PurpleTentacle@sh.itjust.works · 1 year ago

      Unraid 6.12 and higher has full support for ZFS pools. You can even use ZFS in the Unraid array itself, allowing you to use many, but not all, of ZFS’s extended features. Self-healing isn’t one of them, though; it would be incompatible with Unraid’s parity approach to data integrity.

      I just changed my cache pool from BTRFS to ZFS with RAID 1 and encryption; it was a breeze.

      I generally recommend TrueNAS for projects where speed and security are more important than anything else and Unraid where (hard- and software-)flexibility, power efficiency, ease of use and a very extensive and healthy ecosystem are more pressing concerns.

    • Fjor@lemm.ee (OP) · 1 year ago

      Does either of them matter in terms of hard-disk life? My server just had one of its HDDs reach EoL :| I kind of want to buy something that will last a very long time. Also, I’m not familiar with ZFS, but I read that Synology uses Btrfs (“butter FS”) - which always sounds good to my ears; I’ve been getting a taste of that filesystem with Garuda on my desktop.

      • pascal@lemm.ee · 1 year ago

        Yes - ZFS is commonly known for heavy disk I/O and also huge RAM usage. The rule used to be “1GB of RAM for every TB of disk”, but that’s not compulsory.

        Meanwhile, about BTRFS: keep in mind that Synology uses a mixed recipe, because BTRFS’s RAID code is still green and isn’t considered production-ready. Here’s an interesting read about how Synology filled the gaps: https://daltondur.st/syno_btrfs_1/

        • Monkey With A Shell@lemmy.socdojo.com · 1 year ago

          The only place ZFS seems to use a sizable amount of RAM is the ARC cache, which is a really nice feature when you have piles of small-file access going on. For me, some of the highest-access things are the image stores for Lemmy and Mastodon, which add up to just under 200GB right now but are some crazy high number of files. Letting the system eat up idle RAM so it doesn’t have to pull all of those from disk constantly is awesome.
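
          You can watch exactly how much RAM the ARC is holding via the standard OpenZFS-on-Linux stats file:

              # current ARC size in GiB, straight from the kernel stats
              awk '/^size/ {printf "ARC size: %.1f GiB\n", $3/1024/1024/1024}' /proc/spl/kstat/zfs/arcstats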

      • Kelsenellenelvial@lemmy.ca · 1 year ago

        Something kind of unique about UnRaid is the JBOD-plus-parity array. With this you can keep most disks spun down, while only the disks being actively read or written need to be spun up. Combine that with an SSD cache for your dockers/databases/recent data and UnRaid will put a lot fewer hours (heat, vibration) on your disks than any RAID-equivalent system that requires the whole array to be spun up for any disk activity. Performance won’t be as high as comparably sized RAID-type arrays, but as bulk network storage for backups, media libraries, etc. it’s still plenty fast.
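
        UnRaid handles the spin-down from its disk settings; on a generic Linux box the same idea is one hdparm call per drive (a sketch - the device name is an example):

            hdparm -S 242 /dev/sdb   # auto-standby after 1 hour idle
            hdparm -y /dev/sdb       # or force standby right now
            hdparm -C /dev/sdb       # check the drive's power state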

  • highfiveconnoisseur@lemmy.world · 1 year ago

    Do you have any old hardware that doesn’t have a job? That’s a great place to start. Take some time to try out different solutions (Proxmox, Unraid, CasaOS). Then, as you nail down your needs, you can better pick hardware.

    • Fjor@lemm.ee (OP) · 1 year ago

      Yeah, this is what I have been doing so far - loads of spare parts, running Debian atm. So I’m kind of looking for ‘the next step’ rn.

  • DichotoDeezNutz@lemmy.world · 1 year ago

    I use UNRAID. I didn’t want to pay for a license originally, but having the option to mix and match drives and have redundancy is nice.

    I also use the built-in Docker feature to host most of my services.

    • PurpleTentacle@sh.itjust.works · 1 year ago

      Unraid is also awesome for places with high energy costs: unlike your typical RAID / standard NAS, it allows you to spin down all drives that aren’t in active use, at a relatively minor write-speed penalty.

      That’s pretty ideal for your typical Plex server, where most data is static.

      I built a 10-HDD + 2-SSD Unraid server that idles at well below 30W, and I could have lowered that even further had I been more selective about certain hardware. In a medium-to-high energy cost country, Unraid’s license cost is recouped through energy savings within a year or two.

      Mixing & matching older drives means even more savings.

      Simple array extension, single or dual parity, powerful cache pool tools and easily the best plugin and docker app store make it just such a cool tool.

      • Fjor@lemm.ee (OP) · 1 year ago

        This sounds very good - I like what I am reading and hearing about Unraid! And I do live somewhere with very high energy costs…

    • gdelopata@lemmy.ml · 1 year ago

      I run most of my stuff on k8s, but I really enjoy the simple docker ecosystem of apps that Home Assistant Supervisor provides. Unraid’s app approach looks similar: preconfigured and working together. Even though I don’t need a fancy NAS, I might try Unraid just to evaluate the app ecosystem. How do you find their community apps?

      • DichotoDeezNutz@lemmy.world · 1 year ago

        I usually search through the apps and they install as docker containers; I can edit the configs after the fact, which is pretty nice. There’s also a terminal, so I can run regular docker commands too.

  • Corgana@startrek.website · 1 year ago

    I’ve found CasaOS to be the simplest to set up and get going. I tried TrueNAS for a year, but wish I had started with CasaOS.

      • Corgana@startrek.website · 1 year ago

        Haven’t tried OMV, but the lesson I learned with TrueNAS is that software designed primarily for NAS has a lot of features I don’t care about, and the other apps can be finicky. I’m not storing petabytes of data. CasaOS was the closest I found to “just works”.

        There’s also Umbrel OS which looks promising, but I’ve been happy with CasaOS so haven’t felt the need to switch.

    • TBi@lemmy.world · 1 year ago

      CasaOS looks interesting, but I prefer OpenMediaVault for the moment.

  • cesium@sh.itjust.works · 1 year ago

    I wouldn’t recommend a Synology NAS if you intend to stream content with Plex/Jellyfin - it simply lacks the horsepower most of the time. I would just go with a DIY solution, imo. If you want to throw together components that you have lying around, I would go with Unraid. Unraid doesn’t really care what you throw at it hardware-wise.

  • PuppyOSAndCoffee@lemmy.ml · 1 year ago

    A NAS serves data to clients. I know this is tilting conventional wisdom on its head, but hear me out: go for the most inexpensive, lowest-power, storage-only NAS that you can tolerate, and instead put your money into your data transport (network) and into your clients.

    As much as possible, simplify your life - move processing out of the middle tiers and into the client tiers.

  • Decronym@lemmy.decronym.xyz (bot) · 1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer letters   More letters
    ESXi            VMware virtual machine hypervisor
    LXC             Linux Containers
    NAS             Network-Attached Storage
    Plex            Brand of media server package
    RAID            Redundant Array of Independent Disks for mass storage
    SATA            Serial AT Attachment interface for mass storage
    SSD             Solid State Drive mass storage
    SSH             Secure Shell for remote terminal access
    k8s             Kubernetes container management package


    • Voroxpete@sh.itjust.works · 1 year ago

      It’s fine, but it’s really only good as a NAS. bhyve is a terrible virtualization platform. With something like OpenMediaVault you get access to KVM, which is a much better way to run a virt or two on the side.

      • Damage@feddit.it · 1 year ago

        You’re one of the few who mentioned OMV in the thread, and I was wondering why; it works great for me as a VM on Proxmox. The only gripe I have is that sometimes the GUI decides I’ve made changes to the configuration and asks me to apply them, only to fail and get stuck with the notification.