Before I start reading, if this has anything to do with differential privacy, I’m going to be disappointed.
Self-proclaimed Internet user and Administrator of Reddthat
2nd best reporting in.
A faster db. Just the regular performance benefits, https://www.postgresql.org/about/news/postgresql-16-released-2715/
Also, Lemmy is built against v16 (now) so at some point it will eventually no longer JustWork
The script will be useless to you, except as a reference for what to do.
Export, remove pg15, install pg16, import. I think you can streamline it with both installed at once, as the binaries are correctly versioned. You could also use the in-place upgrade, aptly named: pg_upgradecluster
But updating to 0.19.4, you do not need to go to pg16… but… you should, because of the benefits!
The downvotes you can see (on this post) are from accounts on your instance, then. As this post is semi-inflammatory, it is highly likely to have garnered some downvotes.
Edit: I guess I was wrong about how downvotes work when we block them, as the HTTP request (used to?) return an error when responding to a downvote. I’ll have to look at it again. The only way it was/is 15 is if:
That awkward moment when you are the person they are talking about when running beta in production!
Bug:
subscribe & unsubscribe links are the wrong sizes on communities?listingType=Local
“Bug”:
Your IP is visible to
Suggestions:
Sent from next.reddthat.com :)
Right? Such a good vibe.
Glad you could find an intermediate home! Your community is always welcome back if needed.
:( Looks like its only a good deal for the Candy. As it’s a bundle of 4, and all the others are $14.50 regularly… I guess I could…
I miss the Raspberry Candy one. That was sooo good
Since the 11th @ 9am UTC, LW has seen a 2-fold increase in activities. If my insider knowledge (and my math) is right, that’s an average of 7 req/s, up from 3 req/s.
Lucky for both of us we are not subbed to every community on LW but I think we are subbed just enough to be affected.
Relevant: https://reddthat.com/comment/8316861 tl;dr: the current centralisation results in a lemmy-verse theoretical maximum of 1 activity per 0.3 seconds, or 200 activities per minute, as the total packet round trip between EU -> AU and back is just under 0.3 seconds.
Edit: can’t math when sleepy
What if I told you the problems LW -> Reddthat have is due to being geographically distant on a scale of 14000km?
Problem: activities are sequential, but they require external data to be validated/queried that doesn’t come with the request. Server B -> A says: here is an activity. That request can be a like/comment/new post. For a new post, Server A queries the new post in order to show its metadata (such as subtitle, or image).
Every one of these outbound requests that the receiving server makes is:
So every activity that results in a remote fetch delays subsequent activities. If activities arrive at more than 1 per 0.6s, servers physically cannot, and will never be able to, catch up. As such, our decentralised solution to a problem requires a low-latency solution. Without intervention this will eventually force every server to exist in only one region: EU or NA or APAC (etc.) (or nothing will exist in APAC, and that will make me sad). To combat this we need to streamline activities and how Lemmy handles them.
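To make the ceiling concrete, here is a back-of-the-envelope sketch (a hypothetical model, assuming strictly sequential processing where every activity pays a full round trip; the 0.3s and 0.6s figures come from the comments above):

```python
def max_activities_per_minute(round_trip_s: float) -> float:
    """Theoretical ceiling for a strictly sequential receiver:
    one activity can complete per network round trip."""
    return 60.0 / round_trip_s

# EU -> AU -> EU round trip of ~0.3s caps sequential delivery
# at roughly 200 activities per minute:
eu_au_ceiling = max_activities_per_minute(0.3)

# If an activity also triggers a blocking remote fetch (~0.6s total),
# the ceiling halves to roughly 100 activities per minute:
fetch_ceiling = max_activities_per_minute(0.6)
```

Any sustained activity rate above that ceiling means the queue grows without bound, regardless of server hardware.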
Batching, parallel sending, and/or making all outbound connections non-blocking. Any solution here requires a change to the Lemmy application at a deep level. Whatever happens, I doubt a fix will come super fast.
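The parallel-sending idea can be sketched like this (a hypothetical illustration, not Lemmy’s actual Rust code; `send_activity` is a stand-in for an HTTP POST to a remote inbox):

```python
import asyncio

async def send_activity(instance: str, activity: dict) -> None:
    # Stand-in for an HTTP POST to the instance's inbox;
    # here we only simulate the network round trip.
    await asyncio.sleep(0.01)

async def flush_queue(instances: list[str], queue: list[dict]) -> int:
    # One task per (instance, activity) pair, so all the round
    # trips overlap instead of adding up sequentially.
    tasks = [
        asyncio.create_task(send_activity(inst, act))
        for inst in instances
        for act in queue
    ]
    await asyncio.gather(*tasks)
    return len(tasks)

sent = asyncio.run(flush_queue(["a.example", "b.example"], [{"id": 1}, {"id": 2}]))
```

With overlapping sends, total wall time is roughly one round trip rather than one round trip per activity, which is exactly what a latency-bound region like APAC needs.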
Lemmy has to verify a user (is valid?), so it connects to their server for information. AU -> X (0.6s) + time for the server to respond = 2.28s, but that is all that happened.
- 2.28s receive:verify:verify_person_in_community: activitypub_federation::fetch: Fetching remote object http://server-c/u/user
- request completes and closed connection
Similar to the previous trace, but after it verified the user, it then had to do another from_json request to the instance itself. (No caching here?) As you can see, the 0.74s ends up being the server on the other end responding super fast (0.14s), with the handshake + travel time eating up the rest.
- 2.58s receive:verify:verify_person_in_community: activitypub_federation::fetch: Fetching remote object http://server-b/u/user
- 0.74s receive:verify:verify_person_in_community:from_json: activitypub_federation::fetch: Fetching remote object http://server-b/
- request continues
Fetching external content. I’ve seen external servers take upwards of 10 seconds to report data, especially because whenever a fediverse link is shared, every server refreshes its own copy of the data. As such, you basically create a mini-DoS whenever you post something.
- inside a request already
- 4.27s receive:receive:from_json:fetch_site_data:fetch_site_metadata: lemmy_api_common::request: Fetching site metadata for url: https://example-tech-news-site/bitcoin-is-crashing-sell-sell-sell-yes-im-making-a-joke-here-but-its-still-a-serious-issue-lemmy-that-is-not-bitcoin
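One mitigation for that repeated-fetch pattern would be a simple TTL cache in front of the metadata fetch (a sketch of the idea only; `fetch_site_metadata` here is a hypothetical stand-in, not Lemmy’s actual function):

```python
import time

# Remember fetched site metadata for a while so repeated shares of the
# same link don't each trigger a fresh outbound request from this server.
_CACHE: dict[str, tuple[float, dict]] = {}
TTL_SECONDS = 3600.0

def fetch_site_metadata(url: str) -> dict:
    # Stand-in for the real HTTP fetch + HTML parse.
    return {"url": url, "title": "example"}

def cached_site_metadata(url: str) -> dict:
    now = time.monotonic()
    hit = _CACHE.get(url)
    if hit is not None and now - hit[0] < TTL_SECONDS:
        return hit[1]  # served from cache: no outbound request made
    meta = fetch_site_metadata(url)
    _CACHE[url] = (now, meta)
    return meta
```

Even a short TTL would collapse the burst of identical fetches a popular link generates into a single outbound request per server.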
Sometimes a lemmy server takes a while to respond for comments.
- 1.70s receive:community: activitypub_federation::fetch: Fetching remote object http://server-g/comment/09988776
[1] - Metrics were gathered by applying the https://github.com/LemmyNet/lemmy/compare/main...sunaurus:lemmy:extra_logging patch and taking the data between two logging events. These numbers may be off by 0.01 as I rounded them for brevity’s sake.
How far behind we are now:
The rate at which activities are falling behind (positive) or at which we are catching up (negative)
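Those two metrics amount to the following (hypothetical helper functions, just to pin down the sign convention used above):

```python
def how_far_behind(newest_remote_id: int, last_processed_id: int) -> int:
    """How many activities the remote instance is ahead of us."""
    return newest_remote_id - last_processed_id

def lag_rate(behind_before: int, behind_now: int, interval_s: float) -> float:
    """Change in backlog per second over a measurement interval.
    Positive: falling further behind. Negative: catching up."""
    return (behind_now - behind_before) / interval_s
```

So a backlog that grows from 100 to 160 over a minute shows a lag rate of +1 activity/second.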
Should be already fixed. I’ve logged out and in on Jerboa.
We rebuilt the Lemmy container with an extra logging patch. Seems the build docs need some work, as that’s the only difference in the past 1-2 days, except for moving to postgres 16…
Thanks for the ping.
I’ve gone back to mainline Lemmy. @Morpheus@lemmy.today check now please
Oh I was wrong, after further reading this looks to be a lot better than what I was thinking.
I must have been thinking about another methodology of attempted privacy over a dataset.