Good day to all! Over the last 30 minutes or so, I’ve been having issues loading beehaw.org. Sometimes the CSS fails to load and the page layout is broken; at other times there is a server-side NGINX error.
Just wanted to make the admins aware this is happening. There are NGINX settings that can be adjusted to make more worker processes and connections available if NGINX is hitting a worker limit.
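For the admins, a minimal sketch of the kind of tuning I mean; the values are illustrative and would need to be sized to the server’s cores and traffic, not copied verbatim:

```
# /etc/nginx/nginx.conf (illustrative values only)
worker_processes auto;        # one worker per CPU core
worker_rlimit_nofile 8192;    # raise the per-worker file descriptor limit

events {
    worker_connections 4096;  # simultaneous connections each worker may hold open
}
```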
Thanks. The admins are aware and are looking into the root cause.
It’s been 8 days, the problem is still ongoing across multiple instances, and I don’t see any open issue about the nginx 50x errors in the Lemmy project on GitHub. See this public cry for help: https://lemmy.ml/post/1453121
Yes, Beehaw is struggling with uptime. From talking with the admins, this really isn’t an nginx issue. It’s more that the Lemmy code itself is immature, with memory leaks and SQL performance issues, and those issues are becoming more disruptive as usage explodes.
If you’ve got development skills, helping out the Lemmy project on GitHub is probably the best way to help. If not, then just press F5 with the rest of us when the site goes down for a bit.
> If you’ve got development skills, helping out the Lemmy project on GitHub is probably the best way to help.
I have been; I’m RocketDerp on GitHub. I’ve been watching for weeks as none of the people running the major sites have opened issues for these observable problems, so I have done so myself:
- Major data-integrity issues, ignored since the issue was opened on June 14: https://github.com/LemmyNet/lemmy/issues/3101
- Obvious user-interface symptoms of the same problem, reported June 19: https://github.com/LemmyNet/lemmy/issues/3203
The problems were going on for weeks before I created these issues, and they are still being ignored; they weren’t even mentioned in today’s 0.18 announcement.
I’m not an official spokescritter, but I can assure you the Beehaw admins aren’t ignoring the issues. But ultimately it’s going to come down to someone getting PRs into the code. I hope someone gets some performance-focused PRs in soon.
> I’m not an official spokescritter, but I can assure you the Beehaw admins aren’t ignoring the issues.
They are not informing the end-users (or the flood of new server installers) of the problem; they are leaving people like me wasting their time calling out the problem. Denial isn’t just a river in Egypt. Lemmy isn’t scaling, it’s falling flat on its face, and the federation protocol’s habit of sending one HTTPS transaction per single like is causing servers to overload their peer servers.
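To make that overhead concrete: under ActivityPub, each individual upvote becomes its own Like activity, delivered as a separate signed HTTPS POST to every subscribed peer instance’s inbox. A rough sketch of one such delivery (the actor and object URLs are made up for illustration):

```
POST /inbox HTTP/1.1
Host: beehaw.org
Content-Type: application/activity+json

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Like",
  "actor": "https://lemmy.ml/u/some_user",
  "object": "https://beehaw.org/post/12345"
}
```

Multiply that by every vote on every federated community, and by every subscribed instance, and the request volume adds up quickly.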
Where are the server logs? Why are the crashes not being shared with the developers? Do I really have to build up an instance with 5000 users to get access to the data that Beehaw’s servers are logging every hour?
What are you asking for? I’m not smart enough to know what is going on here, but I can relay the request to someone who is, if you’re willing to dumb it down for me and ask nicely.
> What are you asking for?
Right out of the Lemmy documentation for servers:
```
journalctl -u lemmy
```
Log them to a file and dump them somewhere public, like a GitHub repository. What is going on in these logs when the 500 errors are happening?
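Something along these lines would do it; the time window and file name are placeholders:

```
# capture the last 24 hours of Lemmy service logs to a shareable file
journalctl -u lemmy --since "24 hours ago" > lemmy-logs.txt
```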
> Why are the crashes not being shared with the developers?
Because not every issue we’re experiencing, even the 500s, is a result of Lemmy or their code. There is no reason to share that with them.
> Because not every issue we’re experiencing, even the 500s, is a result of Lemmy or their code.
Then what are they, when NGINX is failing to talk to the NodeJS app? I also consider this more than a code problem, as the developers are also giving recommendations for performance-tuning the various components, etc.
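If NGINX can’t reach the app behind it, that usually surfaces as 502/504 responses, and the usual knobs are the proxy timeouts and retry behavior. A sketch of what I mean (the upstream name and address are placeholders, not Beehaw’s actual config):

```
# placeholder upstream for the app NGINX proxies to
upstream lemmy_ui {
    server 127.0.0.1:1234;
}

server {
    location / {
        proxy_pass http://lemmy_ui;
        proxy_connect_timeout 10s;          # fail fast if the app will not accept
        proxy_read_timeout 60s;             # how long to wait for a response before a 504
        proxy_next_upstream error timeout;  # retry another upstream on failure
    }
}
```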
I strongly suspect that federation activity is causing the 500 and other errors, given how instances queue (swarm) deliveries to their peer servers. It isn’t just the lemmy-ui webapp and end-users.
Same thing. Another heavily used Lemmy instance has reports of the same problem: https://lemmy.ml/post/1271936
Interesting. I noticed similar behavior on https://startrek.website as well. I wonder if something else more global is going on?