I’ve been self-hosting Nextcloud for some time on Linode. At some point in the not-too-distant future, I plan to host it locally on a server in my home, as I would like to save the money I spend on hosting. Nextcloud suits my needs perfectly, and I would like to continue using the service.

However, I am not very knowledgeable when it comes to security, and I’m not sure whether I have done enough to secure my instance against potential attacks, or what additional things I should consider when moving the hosting from a VPS to my own server. So that’s where I am hoping for some input from this community. Wherever it shines through that I have no idea what I’m talking about, please let me know. I have no reason to believe that I am being specifically targeted, but I do store sensitive things there that could potentially compromise my security elsewhere.

Here is the basic gist of my setup:

  • My Linode account has a strong password (>20 characters, randomly generated) and I have 2FA enabled. Setting up 2FA required security questions, but the answers are all random and have no relation to the questions themselves.
  • I’ve disabled SSH login for root. Instead I have a new user with a custom name that is in the sudo group. This is also protected by a different, strong password. I imagine this makes automated brute-force attacks a lot more difficult.
  • I have set up fail2ban for sshd. Default settings.
  • I update the system at the latest bi-weekly.
  • Nextcloud is installed with the AIO Docker container. It gets security rating A from the Nextcloud scan, failing only on not being on the latest patch level, as these are released more slowly for the AIO container. However, updates for the container are applied automatically, and maintaining the container is a breeze (except for a couple of problems I had early on).
  • I have server-side encryption enabled. Not client-side as my impression is that the module is not working properly.
  • I have daily backups with borg. These are encrypted.
  • Images of the server are also backed up daily on Linode.
  • It is served by an Apache web server exposed to outside traffic over HTTPS, with DNS records handled by Cloudflare.
  • I would’ve wanted to use a reverse proxy, but I did not figure out how to use one together with the Apache server. I previously set up an Nginx reverse proxy on a test server, but then I used the regular Docker image for Nextcloud, not the AIO.
  • I don’t use the server to host anything else.
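  (For what it’s worth, on an apt-based system the manual update step above could be automated with unattended-upgrades; a sketch, assuming Debian/Ubuntu — adjust for your distro:)

```
# Install once: sudo apt install unattended-upgrades
# Then enable daily runs in /etc/apt/apt.conf.d/20auto-upgrades:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```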
  • Jeena@jemmy.jeena.net · 1 year ago

    For SSH I would disable login with password and only allow login with an SSH key. The other stuff sounds reasonable.
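    Concretely, that would be something like this in /etc/ssh/sshd_config (restart sshd afterwards, and test from a second session before closing your current one):

```
# Keys only, no passwords, no root
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```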

      • deeznutz@lemmy.dbzer0.com · 1 year ago

        If you have more than one VPS to manage or multiple people that need access via SSH, you may want to look into using SSH certificates instead of keys. Keys get messy when you have to wrangle a lot of them, and it’s a real pain in the butt if you need to revoke multiple. It does require more than just generating a key pair and having the server trust it, though.
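        A minimal sketch of that flow, in case it’s useful (all file and identity names here are made up):

```shell
# 1. Create a CA key pair (done once, kept somewhere safe)
ssh-keygen -t ed25519 -f user_ca -N "" -C "example user CA"

# 2. Create an ordinary user key pair
ssh-keygen -t ed25519 -f user_key -N "" -C "alice@laptop"

# 3. Sign the user's public key with the CA: principal "alice",
#    valid for 52 weeks; this writes user_key-cert.pub
ssh-keygen -s user_ca -I alice-key -n alice -V +52w user_key.pub

# 4. Inspect the resulting certificate
ssh-keygen -L -f user_key-cert.pub
```

        On the server you then trust the CA once (e.g. `TrustedUserCAKeys /etc/ssh/user_ca.pub` in sshd_config) instead of managing individual authorized_keys entries, and revocation is handled centrally with a RevokedKeys file.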

        • cyberwolfie@lemmy.ml (OP) · 1 year ago

          Great tip - I don’t see myself running multiple servers, and I will be the only user needing access to them, so I guess SSH keys are sufficient.

  • PastThePixels@lemmy.potatoe.ca · 1 year ago

    I may not be able to answer some of the more security-oriented questions, but one of the things I recommend is using a proxy to “hide” your home IP address. IP addresses can contain a lot of information including location data, so it’s a good idea to make things harder for attackers to figure out where you live. I’m pretty sure you can do this with a basic VPS setup, but I know for sure you can do this with Cloudflare (as I have it enabled on my server).

    As for getting reverse proxies set up from your Docker containers to the outside world using Apache, I can help. I use (rootless) Podman on my Raspberry Pi, meaning that when I expose ports from my containers I have to choose port numbers greater than 8000. Once I have a port (let’s say 8080) and a subdomain (I’ll use subdomain.example.com), I just need to create a file in /etc/apache2/sites-available/, which I’ll call subdomain.example.com.conf. The content usually looks something like this:

    
      <VirtualHost *:80>
        ProxyPreserveHost On
        ProxyRequests Off
        ServerName subdomain.example.com
        ServerAlias subdomain.example.com
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
      </VirtualHost>

    Then you just need to run sudo a2ensite subdomain.example.com and sudo systemctl reload apache2, and you should be able to access your container on the subdomain. You should just need to forward port 80 (and 443 if you want to set up Let’s Encrypt and HTTPS) on your router.
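    One caveat: ProxyPass and ProxyPassReverse come from mod_proxy and mod_proxy_http, which aren’t enabled by default on Debian/Ubuntu, so the site may fail to start until they’re turned on:

```
# Enable Apache's proxy modules, then restart (Debian/Ubuntu layout assumed)
sudo a2enmod proxy proxy_http
sudo systemctl restart apache2
```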

    Hope this helps!

    • cyberwolfie@lemmy.ml (OP) · 1 year ago

      Thanks for the description, I’ll look closer into this and see if I can get this to work (on a test server at home first… :)).

      This thread is the first I’ve heard of Podman - is this something I should look into in favor of Docker, or would you say it is more a case of “pick one and stick to it”?

      • PastThePixels@lemmy.potatoe.ca · 1 year ago

        Yeah, Podman is definitely one of those things where I would say to do the latter. Its functionality is the same as Docker’s (commands work almost 1:1, and even docker-compose works with Podman), it has better integration with other system components (like generating systemd services to start containers when the computer is restarted), and it gets you away from Docker as a company while still being able to access their containers on Docker Hub.
        In the end though, I’d recommend sticking to what you’re familiar with. It’s always better to administer commands to your server that you know will work rather than learning as you go and hoping something doesn’t break.
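        (The systemd integration mentioned above is, as far as I know, the `podman generate systemd` flow; a sketch for a rootless setup, with the container name being illustrative:)

```
# Generate a user-level systemd unit for an existing container
podman generate systemd --new --files --name mycontainer

# Install and enable it as a user service
mkdir -p ~/.config/systemd/user
mv container-mycontainer.service ~/.config/systemd/user/
systemctl --user enable --now container-mycontainer.service

# Keep user services running after you log out
loginctl enable-linger "$USER"
```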

  • thisisawayoflife@lemmy.world · 1 year ago (edited)

    Secure SSH. You should disable all password login capability and tighten the cipher, KEX and MAC requirements. This will force modern SSH clients, something a lot of bots don’t use, so they won’t even get to the point of key exchange.

    https://cipherlist.eu/

    On your client, you can define an SSH config with a list of friendly host names that includes direct IP addresses, the key to use to initiate login, and whatever other properties you need. This way you can just type ssh <alias> and you don’t need to specify the key or IP address every time.
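    For example, an entry in ~/.ssh/config (host name, address and key path are placeholders):

```
Host mybox
    HostName 203.0.113.10
    User myadminuser
    Port 22
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes
```

    After which `ssh mybox` picks up the address, user and key automatically.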

    Finally, configure Fail2Ban to ban/block on the first failed SSH attempt. You won’t be failing to log in if you’ve configured a host definition in your SSH config and are using keys.
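    Banning on the first failure is just maxretry in the sshd jail, e.g. in /etc/fail2ban/jail.local (times here are only a suggestion):

```
[sshd]
enabled  = true
maxretry = 1
findtime = 10m
bantime  = 1h
```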

    • cyberwolfie@lemmy.ml (OP) · 1 year ago

      Thanks for the tip. I will be looking into setting up SSH keys fairly soon, and look more into strengthening ciphers et al.

      From a practical point of view, what is the likelihood of a brute-force login attempt succeeding? There are plenty of login attempts, but most of them are for root, and as I’ve disabled root login those will fail no matter what. Other attempts are typically for generic names such as ‘admin’, ‘user’ and ‘test’ that have no associated user on the server, as well as some weird choices that I can only imagine come from some database breach.

    • cyberwolfie@lemmy.ml (OP) · 1 year ago

      Thanks, I’ll look into it. Is it primarily ease of use that makes you prefer this over running Docker on a more standard distribution?

      • h3ndrik@feddit.de · 1 year ago (edited)

        Ease of use. They package the software for you, make everything work together, and handle authentication. They patch everything, prepare updates and apply the most recent hardening tips for the web server, as well as configure fail2ban etc.

        I’d prefer some containers to Yunohost. But I don’t know of any other self-hosting solution that works as well and is as ‘fire and forget’. I like to recommend it to people who don’t have the time or skills to do everything themselves, or who worry about getting the security bit right.

        I use it because it’s good enough for me and I like to do other things in the time it saves me.

        • cyberwolfie@lemmy.ml (OP) · 1 year ago

          That sounds convenient, and having looked at some videos, it seems very nice. I can see myself using this for things that I need to work properly, like Nextcloud, and maybe host other services in a more complicated way, to be able to learn more.

  • Haui · 1 year ago

    This all sounds very reasonable. One question remains: what is the use of a dedicated proxy if cloudflare is connected? I do use nginx proxy manager and host my dockerized services on subdomains via https. I suppose if the reverse proxy gets attacked, the main server stays online and hidden. Does cloudflare not hide your ip and prevent (some) ddos attacks?

    • cyberwolfie@lemmy.ml (OP) · 1 year ago

      This is one of those areas that often has me confused… For now, the DNS entry with Cloudflare is set to ‘DNS Only’. That is perhaps a mistake on my part, and I should enable the proxy? Right now I can’t remember the reasoning for why I set it up like this.

      Originally I wanted to set up an Nginx reverse proxy to serve services other than Nextcloud on the same server, on different ports. That was the most easily manageable approach I found at the time, and the way the AIO container is set up now, accessing the IP address of my server automatically routes to Nextcloud, even if I had another service running. I could maybe configure Apache to do the same job as I wanted Nginx to do? At the time, I opted to get another VPS dedicated to other, smaller services instead as a temporary solution that over time turned permanent. However, this will be important to me when/if I start hosting this locally instead, as I would want my server to host other services as well.

      • Haui · 1 year ago

        Can relate. I’m pretty much on the opposite end of this situation. I have a home server hosting a fair amount of apps, and it’s pretty integrated and polished, but there are still a lot of things I want to do, some crucial, before I even think of opening ports in my router.

        The issue for me is that my internet upload speed is trash, although my provider is otherwise rather good.

        So I’m thinking of moving in the opposite direction and hosting my stuff on a VPS, so that I can use it and maybe share stuff with friends without being kneecapped by my upload.

        The obvious solution would be a fiber connection which is not available at my location yet (edge of a city in germany, hard to believe, I know).

        But to answer your question: you could probably get Apache to do something like that, but I’m absolutely the wrong person to tell you how, as I don’t have any experience with Apache. I can help you configure NPM (Nginx Proxy Manager) and DNS records, but that’s about it in this department.

        In any case, have a good one and hit me up if you want to discuss this further.

        • cyberwolfie@lemmy.ml (OP) · 1 year ago

          Ah, I see. Hope for you that a fiber connection will be available in the not-too-distant future then. I would love to do this at home, but I’m going to need some serious study sessions to better understand home networking (and take appropriate action) before I start exposing services at home to the internet. I do wonder if I jumped onto this too fast, but I was just so incredibly fed up with relying on big tech monopolies for essential digital services…

          I guess my last question would be if you had an opinion on whether enabling proxy in Cloudflare is a no-brainer or not?

          • Haui · 1 year ago

            Makes total sense that one would familiarize themselves with networking/self-hosting before actually going live and putting their private data at stake. I respect that.

            Also, I would probably use the Cloudflare proxy, but I don’t have experience with it yet, so I’d give it a quick search (“cloudflare proxy vs dns only” or something) and see if any reason why you didn’t like it pops up.

            Also, I suggest you keep a log if you don’t have one already. Every time I do maintenance (essentially, every time I log in over SSH to my server) I make an entry in my log. That way you will know why you did what you did, when you did it.
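            It doesn’t need to be anything fancy; a sketch of a timestamped append (the log path here is just an example):

```shell
# Append timestamped entries to a plain-text maintenance log
LOG="$(mktemp)"   # in practice something like ~/maintenance.log
note() { printf '%s  %s\n' "$(date '+%Y-%m-%d %H:%M')" "$*" >> "$LOG"; }

note "apt upgrade, rebooted for new kernel"
note "renewed TLS certificate"

cat "$LOG"   # two timestamped lines
```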

  • Maximilious@kbin.social · 1 year ago (edited)

    I have Nextcloud hosted internally in a podman container environment. To answer some of your more security related questions, here’s how I have my environment set up:

    1. Cloudflare free tier with my own domain to proxy outside connections to the public domain name, and hide my external IP.

    2. A DMZ proxy server with a local Traefik container, with only the ports required to talk to the internal Nextcloud server allowed, and inbound 443 allowed only from the internet (Cloudflare).

    3. An Authelia container tied to the Nextcloud container using the “Two-Factor TOTP” app add-on. Authelia is configured to point to a free Duo account for MFA. The TOTP add-on also allows other methods if you want to bypass Authelia and use a simple Google Authenticator or other app. I’ll be honest, this setup was a pain, but it works beautifully once finally working.

    Note: Using Authelia removes Nextcloud from the authentication process. If you log in through Authelia and it is set up correctly, it will pass the user information to Nextcloud and present their account. There is a way to have “quadruple” authentication if you really want it, where you log in through Authelia, Authelia MFA, then Nextcloud and Nextcloud MFA, but who would want that? Lol.

    Another Note: If Authelia goes down for whatever reason, you can still log in through Nextcloud directly.

    4. I have all of my containers set to automatically pull updates with the latest tag. This bites me sometimes if major changes happen, but it’s typically due to Traefik or MariaDB changes and not Nextcloud or Authelia.

    5. I have my host operating system set to auto update and reboot once a week in the early morning.

    6. My data is shared through an NFS connection from my NAS that only allows specific IPs to connect. I’d like to say I’m using least-privilege permissions on the share, but it’s wide open, as NFS permissions are not my strong suit.

    Hope the above helps!
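    (The weekly auto-update and reboot mentioned above can be a single root cron entry; a sketch assuming an apt-based distro, with the schedule purely illustrative:)

```
# /etc/cron.d/weekly-upgrade -- Mondays at 04:30, runs as root
30 4 * * 1 root apt-get update && apt-get -y upgrade && systemctl reboot
```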

    • cyberwolfie@lemmy.ml (OP) · 1 year ago

      Thanks for your answers!

      1. Alright, I guess I should also use the Cloudflare proxy. I could not find the reason I had not enabled it previously.
      2. I’m a bit confused as to what a DMZ proxy server is compared to a reverse proxy. Is this a separate server you’ve set up specifically to handle inbound traffic, where you’ve set up Traefik, or is this a container on your main server where you also host Nextcloud?
      3. As I understand it, Authelia is an SSO solution that seems very beneficial when running several services from the same server. Right now I only run Nextcloud on the VPS - is there any added security benefit to running it there as well, or is this mostly for convenience when hosting multiple services?

      Setting up auto update and reboot once a week seems smart. Do you set this up with cron?