Anyone who knows me knows that I’ve been using Nextcloud forever, and I fully endorse the idea that anyone doing any level of self-hosting should have their own instance. It’s just a self-hosted Swiss army knife, and I personally find it even easier to use than something like SharePoint.

I had a recurring issue where my logs would show “MySQL server has gone away”. It generally didn’t cause problems, but occasionally it would make large file uploads fail, or trigger other random failures that would clear up shortly after.

The only thing I did was double wait_timeout in my /etc/mysql/mariadb.conf.d/50-server.cnf.
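
In case it helps, this is roughly what the change looks like; the numbers here are only an example (check your current value with SHOW VARIABLES LIKE 'wait_timeout'; and double whatever you find):

    [mysqld]
    # How many seconds MariaDB waits on an idle connection before closing it.
    # If this is too short, a long-running Nextcloud request can come back to
    # find its connection already dropped ("server has gone away").
    wait_timeout = 1200        # example value: double whatever yours was

The new value only takes effect for the config file after restarting MariaDB.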

After that, my larger file uploads went through properly.

It might not be the best solution, but it worked, so I figured I’d share.

  • Björn Tantau@swg-empire.de · 11 months ago

    Just read the other day here that Nextcloud runs much better with PostgreSQL. Migrating to that (or to the all-in-one installation) is my next big project.

      • 4am@lemm.ee · 11 months ago

        Looks like I’ve got a project for next weekend

    • tofubl · 11 months ago

      Interesting. Do you remember where you read this?

      The process seems simple enough. I’m on the nextcloud:stable docker image, so adding a postgres container is really easy, but it’s a scary task…

      • tofubl · 11 months ago

        Okay, did the migration just now. Everything seems a little more responsive, but I wouldn’t call it way faster.

        Either way, it wasn’t very scary at all. For anybody coming after me:

        • add a postgres container to the compose file (a rough sketch follows this list). I named mine “postgres”, added a “postgres” volume, and added it to depends_on for the app and cron containers
        • run the migration command from the Nextcloud app container like any other occ command, then check Admin settings / System for the database state: ./occ db:convert-type --password $POSTGRES_PASSWORD --all-apps pgsql $POSTGRES_USER postgres $POSTGRES_DB
        • remove the old “db” container and volume and all references to them from the compose file, then run docker compose up -d --remove-orphans
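
        Roughly, the added service looks something like this (the image tag and environment variable names are only an example; adjust for your setup):

            postgres:
              image: postgres:16-alpine        # example tag, pick your own
              restart: always
              environment:
                - POSTGRES_DB=${POSTGRES_DB}
                - POSTGRES_USER=${POSTGRES_USER}
                - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
              volumes:
                - postgres:/var/lib/postgresql/data

        plus a “postgres” entry under the top-level volumes: section, and “postgres” added to depends_on for the app and cron services.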
        • haplo@lemmy.world · 11 months ago

          Thank you for this. I really dislike MySQL/MariaDB and favor SQLite whenever possible, or PostgreSQL otherwise. The DB migration of my Nextcloud instance was high on my to-do list, and your instructions saved me research time.

          • tofubl · 11 months ago

            Here’s a cool article I found on Nextcloud performance improvements, and connecting Redis over Unix sockets gave me a more substantial performance improvement than migrating to Postgres. Very happy I fell down this rabbit hole today.

            A few notes if you’re following the tutorial in the link above and are using the nextcloud:stable container together with the recommended cron container:

            • the Redis configuration (host, port, password, …) needs to be set in config/config.php as well as config/redis.config.php (see the sketch after this list)
            • the cron container needs the same /etc/localtime and /etc/timezone volumes the app container gets, as well as the volumes_from: tmp
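
            For reference, the socket-based Redis block in config/config.php looks roughly like this (the socket path and password are placeholders, not values from the tutorial):

                'memcache.distributed' => '\OC\Memcache\Redis',
                'memcache.locking' => '\OC\Memcache\Redis',
                'redis' => [
                  'host' => '/run/redis/redis.sock',  // shared Unix socket path (placeholder)
                  'port' => 0,                        // 0 tells Nextcloud it's a socket, not TCP
                  'password' => 'CHANGE_ME',          // placeholder
                ],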
            • haplo@lemmy.world · 11 months ago

              Thank you for the link and the Redis pointers. I should double-check that my Nextcloud setup is using Redis; it might well be misconfigured.

            • sj_zero@lotide.fbxl.net (OP) · 11 months ago

              If you do end up using PostgreSQL, the database can get fragmented over time, which can lead to increased latency, so routine pg_repack runs are IMO worth scheduling.
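
              A rough example of how that could be scheduled (database name, user, and timing are placeholders, and the pg_repack extension has to be created in the target database first):

                  # example /etc/cron.d entry: repack the Nextcloud DB every Sunday at 03:30
                  30 3 * * 0  postgres  pg_repack -d nextcloud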

      • Björn Tantau@swg-empire.de · 11 months ago

        Well, another guy responded before you, so that would be the last time I heard it.

        It was probably on one of the posts in this community.