From what I can piece together, smaller flocks of Reddit users were already coming to Lemmy before I started using Lemmy for hours each day - 64 days ago, on May 25.

May 25, every Lemmy instance on the recommended list was crashing for me, with very obvious signs of PostgreSQL performance problems in the code. Refreshing a listing would fail at least 1 time in 5, often as much as half the time. I visited Beehaw, Lemmy.ml, Lemmy.world, and the non-English instances too - but I really did not see anything I would consider significant post and comment activity.

May 29, I have now been searching and reading Lemmy content for 4 days (between constant crashes). I cannot find the developers sharing their PostgreSQL problems in communities or trying to fix crashes. They seem to be avoiding using Lemmy - why? I don’t understand. But I keep reading.

June 1, it is clear that server crashing is everywhere and nothing is being done about it. I start reading GitHub issues and pull requests multiple times a day, trying to understand the priorities of the project, since I can find no Lemmy community for discussing the ongoing server overload and performance problems in lemmy_server - surely it must exist somewhere? Discussions are free to host on GitHub, but the project leaders have them disabled. They are not using GitHub Discussions, and they are not using Lemmy communities. I’m perplexed.

June 2, I find out that the project leaders run lemmy.ml - so I focus on hanging out there to witness change management. I finally manage to get an account created on lemmy.ml despite all the crashing.

June 4, GitHub Issue 2910 is opened. The PostgreSQL TRIGGER is directly identified as causing servers to “blow up”. This is a very easy fix; it can even be done without recompiling lemmy_server or cutting a back-end release. A bash script, or even just a web page of PostgreSQL steps (like the existing Lemmy “from scratch install” page), would provide huge and immediate relief.
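To illustrate why no recompile is needed: the trigger functions live entirely inside PostgreSQL, so an admin can inspect and hot-patch them from psql on a running server. The sketch below uses a hypothetical trigger function name (not the actual Lemmy schema) and only shows the shape such a script or wiki page could take:

```sql
-- List the user-defined triggers installed in the Lemmy database.
SELECT tgrelid::regclass AS table_name, tgname AS trigger_name
FROM pg_trigger
WHERE NOT tgisinternal
ORDER BY 1, 2;

-- Hypothetical hot-fix: replace an expensive per-row trigger function in place.
-- "comment_aggregates_hotfix" is a placeholder name, not the real Lemmy trigger.
BEGIN;
CREATE OR REPLACE FUNCTION comment_aggregates_hotfix() RETURNS trigger AS $$
BEGIN
  -- cheaper bookkeeping logic would go here
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;
COMMIT;
```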

June 4 was a Sunday. I watch project leaders who do not work on weekends come in on Monday June 5, Tuesday June 6, Wednesday June 7, and so on, and ignore the dramatic PostgreSQL issue, 2910. I am still looking over lemmy.ml between constant server crashes, trying to find evidence that the project leaders of Lemmy actually ask the Lemmy community for help in !postgresql@lemmy.ml. Crickets - they don’t use Lemmy to discuss or seek help regarding the crashes of Lemmy. I am wildly perplexed, and I do not yet understand the social hazing.

Another weekend passes without the project leaders around. Monday, June 12 comes. Issue 2910 and the constant server crashes are not discussed on Lemmy. The whole Lemmy community is abuzz that in 3 weeks Reddit is shutting down its API and something must be done to prepare.

June 13

June 13. With social hazing as the main leadership priority, promoting Matrix chat instead of Lemmy communities such as !postgresql@lemmy.ml, the project leaders encourage big server hardware upgrades and upgrade Lemmy.ml https://lemmy.ml/post/1234235 - but there is no significant improvement, and crashes are still constantly happening because of PostgreSQL and issue 2910 about PostgreSQL, now ignored for 9 days.

June 13, I know the problem is not hardware. It is obviously the PostgreSQL code being fed by lemmy_server. I am dumbfounded. Why aren’t project leaders asking in !postgresql@lemmy.ml, using the Lemmy platform? I created a community, !lemmyperformance@lemmy.ml, and posted https://lemmy.ml/post/1237418 about PostgreSQL and developer basics, 101. The scope of the social hazing against the USA media environments is not yet clear to me. At the time I did not fully appreciate “eating your own dog food” as developers, or Matrix chat’s role in the social hazing. Only in retrospect can I see what was going on June 13.

June 15 - again, the project leaders are sending all kinds of social signals that they are going to ignore server crashes as a topic. I open one of several postings on Lemmy.ml calling out the constant crashes despite the June 13 hardware upgrade. https://lemmy.ml/comment/948282

June 19

June 19, I am well into a campaign of pleading with developers to install pg_stat_statements and to use Lemmy itself to see the scope of Lemmy.ml and other servers crashing constantly. https://lemmy.ml/post/1361757
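For anyone unfamiliar: pg_stat_statements ships with PostgreSQL as a contrib extension and takes minutes to enable. A minimal sketch of what I was asking for, assuming PostgreSQL 13 or newer (older versions name the column total_time instead of total_exec_time):

```sql
-- In postgresql.conf (requires a server restart):
--   shared_preload_libraries = 'pg_stat_statements'

-- Then, once, in the Lemmy database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- After some traffic, show which queries are eating the CPU.
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 80)                    AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```

If a handful of INSERT or UPDATE statements dominate that list, the trigger chain behind them is the obvious next place to look.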

June 30

Reddit users flock to Lemmy again, as they had all month, to find all Lemmy servers constantly crashing, as I had personally seen every day since May 25. Ignoring GitHub Issue 2910 for several weeks and avoiding Lemmy’s !postgresql@lemmy.ml and other communities is no accident; I now see social hazing was the primary project leadership concern.

July 22 - Saturday

Lemmy.ca staff download a clone of their own database and run PostgreSQL EXPLAIN, which again identifies the TRIGGER logic as a bomb within the code, blowing up servers - and it still hasn’t gotten any attention.
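This is the kind of evidence a throwaway clone makes trivial to gather: EXPLAIN ANALYZE on a write statement reports how much time each fired trigger consumed. The table, columns, and trigger name below are placeholders rather than the actual Lemmy.ca schema or output; it is only a sketch of the technique:

```sql
-- Run against a disposable clone, never against production.
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
  INSERT INTO comment (creator_id, post_id, content)
  VALUES (1, 1, 'load test');
ROLLBACK;

-- The plan output ends with per-trigger timing lines along the lines of:
--   Trigger comment_aggregates_comment: time=187.412 calls=1
-- which makes an expensive per-row trigger impossible to miss.
```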

I know avoiding the issue on GitHub has been the social hazing game since June 4, but I still make as much noise as possible on GitHub and create a pull request labeled “EMERGENCY” on Sunday, July 23.

Monday July 24

The very first priority of the project leaders on GitHub is to edit my pull request title to remove the word “emergency” from the TRIGGER fix. At this point, I have no other explanation for what I have witnessed since May 25 than social hazing on a mythological scale I have never personally witnessed before. No postings are made by developers on Lemmy, and they continue to hang out on Matrix chat as part of their social hazing rituals.

Friday July 28

Issue 2910 isn’t even mentioned in the release notes of 0.18.3, which are created today. Since June 4 I have witnessed it being deliberately ignored, just as I have seen the avoidance of Lemmy communities like !postgresql@lemmy.ml for discussing how easy it is to notice on lemmy.ml that TRIGGER statements are crashing the server constantly.

It’s still nearly impossible for me to describe the scale of this social hazing against all users of Twitter, Reddit, and the World Wide Web / Internet: 1) don’t eat your own dog food - avoid using Lemmy to discuss Lemmy; 2) avoid GitHub issues and discussions about server crashes and PostgreSQL TRIGGER topics; 3) say the problem is hardware scaling, and avoid admitting that the hardware upgrades were the wrong answer on June 13.

THE. MOST. SOPHISTICATED. SOCIAL HAZING. I have ever witnessed. June 4, GitHub Issue 2910. Elon Musk’s rebranding of “Twitter” to “X” went on, the Reddit API change went on, and everything possible was done to avoid posting in Lemmy’s !postgresql@lemmy.ml that CPU was at 100% for PostgreSQL. Wild ride.