Are there any Discord servers, or somewhere on Matrix, to chat about hosting a Lemmy instance? I’ve got Lemmy running, but I think there are several of us in the same boat struggling with federation performance issues, and it might be good to have some place to chat in real time.

  • NotoriousOP · 1 year ago

    My server is struggling with federation. Pretty much everything I see in the logs with debug turned on is this:

    2023-06-20T01:55:28.018419Z WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Header is expired

    • xebix@lemmy.world · 1 year ago

      This is exactly what I am seeing. I just tried upping federation_worker_count in the postgres database. I saw someone in another thread mention trying that, so we’ll see.
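      If anyone else wants to try the same thing, this is roughly what I ran (assuming the 0.17-era schema where the setting lives in the local_site table; check your own schema first, since the column may be named differently or not exist on other versions). I restarted the lemmy container afterwards to be safe:

      -- connect to the lemmy database, e.g. docker compose exec postgres psql -U lemmy lemmy
      -- then bump the worker count (256 here is just an example value)
      UPDATE local_site SET federation_worker_count = 256;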

    • Jamie@jamie.moe · 1 year ago

      Upping the worker count significantly reduced those errors in my case. If Lemmy happens to be maxing out your CPU, though, you may need to upgrade.

    • Slashzero@hakbox.social · 1 year ago

      There is an nginx setting you can tune as well. I believe it was worker threads? I can’t remember the exact one, and I’m too tired to SSH into my instance to check.
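      If it helps, the directives I was thinking of are probably these (a from-memory sketch, so treat the numbers as placeholders and check your own nginx.conf):

      # top of nginx.conf: one worker process per CPU core
      worker_processes auto;

      events {
          # max simultaneous connections per worker; raise this if the proxy handles a lot of federation traffic
          worker_connections 4096;
      }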

      • NotoriousOP · 1 year ago (edited)

        This post says that the worker threads only affect outbound federation. I’m struggling with my instance not receiving anything inbound.

  • falcon15500@lemmy.nine-hells.net · 1 year ago

    From the docs / troubleshooting:

    “Also ensure that the time is accurately set on your server. Activities are signed with a timestamp, and will be discarded if it is off by more than 10 seconds.”
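    A quick way to verify is something like this (assuming systemd-timesyncd or chrony; adjust for whatever your server actually runs):

    # shows "System clock synchronized: yes/no" and which NTP service is active
    timedatectl

    # if you run chrony, this reports the current offset from the NTP sources
    chronyc tracking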

    • NotoriousOP · 1 year ago

      Thanks for pointing this out. I got hopeful that it might be a simple fix, but unfortunately NTP is already set up and synchronized.

  • Jason@lemmy.weiser.social · 1 year ago

    Yeah, I’ve been self-hosting for nearly a decade, and setting up Lemmy was, surprisingly, a challenge: not because it was all that difficult, but because the documentation was contradictory, out of date, or non-existent in key areas. Federation is my current hurdle, too. It would be nice to have a place to compare notes. Maybe here?

  • jmay@feddiverse.org · 1 year ago

    I would be up for something like this. I host my own instance as well. I’m having issues updating communities, though; every time I try, I get the button spinner of death. I think in the end the software is buggy and needs some time to get the bugs worked out, but it is frustrating.

  • HTTP_404_NotFound@lemmyonline.com · 1 year ago

    Honestly, a lot of the performance issues aren’t due to OUR servers, but the upstream servers.

    beehaw.org and lemmy.world, for example: I think their servers are completely overloaded and are having issues keeping up.

    I don’t have sync issues with the smaller/other servers at all. Just the big ones.

    I have 128 GB of RAM and 32 dedicated cores, and the federation worker count set at 256. There is NO shortage of resources, and my server sits, more or less, idle.

    Since this only really impacts those larger instances, I believe the blame may lie there.

    • chiisana@lemmy.chiisana.net · 1 year ago

      I think it is less about pointing fingers as to who’s to blame, and more about seeing if there are things we can do to resolve or alleviate it.

      I recall reading somewhere that @Ruud@lemmy.world mentioned the server is already scaled all the way up to a fairly beefy dedicated machine, so perhaps it is soon time to scale the service horizontally across multiple servers. If nothing else, I think a lot of value could be gained by moving the database to a separate server from the UI/backend server as a first step, which shouldn’t take too much effort (other than recurring $ and a bit of admin time) even with the current Lemmy code base/deployment workflow…
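      As a rough sketch (assuming the standard lemmy.hjson layout, and with a made-up hostname), splitting the database out would mostly just mean pointing the backend at the remote Postgres box:

      database: {
        # hypothetical dedicated DB server; in the stock docker setup this points at the bundled postgres container
        host: "db.internal.example.com"
        port: 5432
        user: "lemmy"
        password: "..."
        database: "lemmy"
      }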

      • HTTP_404_NotFound@lemmyonline.com · 1 year ago

        Well, I do know most of the components do scale.

        The UI/frontend, for example, you can run multiple instances of easily.

        The API/middle tier, I don’t know if it supports horizontal scaling, though. But a beefy server can push a TON of traffic.

        The database/backend, being Postgres, does support some horizontal scaling.

        Regarding the app itself, it would scale much better if EVERYONE didn’t just flock to lemmy.ml, lemmy.world, and beehaw.org. I think that is one of the huge issues… everyone wanted to join the “big” instance.

        • chiisana@lemmy.chiisana.net · 1 year ago

          If you look here: https://lemmy.world/comment/65982

          At least spec- and capacity-wise, it doesn’t suggest it is hitting a wall.

          The more I dug into things, the more I think the limitation comes from an age-old issue: if your service is expected to connect to a lot of flaky destinations, you’re not going to have a good time. I think the big instances’ backends are trying to send federation event messages while a bunch of smaller federated destinations have shuttered (because they weren’t getting all the messages, so their users just went and signed up on the big instances to see everything), which means the big instances’ outgoing connections have to wait for a timeout and/or discover the recipient is no longer available, and that results in a backed-up queue of messages to send out.

          When I posted a reply to myself on lemmy.world, it took 17 seconds to reach my instance (hosted in a data centre with sub-200ms ping to lemmy.world itself, so not a network latency issue here), which exceeds the 10-second limit defined by Lemmy. Increasing that limit at the application protocol level won’t help, because as more small instances come up, they too will want to subscribe to the big hubs, which will just further exacerbate the lag.

          I think the current implementation is very naive and can scale a bit, but it will likely be insufficient as the fediverse grows, not as an individual instance’s user count grows. That is, the bottleneck will not so much be “this can support an instance with up to 100K users” but rather “now that there are 100K users, we’d also have 50K servers trying to federate with us”. And to work around that, you’re going to need a lot more than Postgres horizontal scaling… you’d need message buses and workers that can ensure jobs (i.e. outward federation) are sent out effectively.

  • useful_idiot@lemmy.world · 1 year ago (edited)

    I was able to adapt the docker compose manifest into a Nomad job (yay, high availability), but I am really struggling with federation. I have a domain and a proper SSL certificate, it’s accessible remotely, and everything seems OK, but when I try to subscribe to other instances I get an initial load of posts and then it’s just stuck in “subscribe pending”. Any time I try to subscribe I see this log message, which isn’t exactly helpful about what to do about it…

    ‘ 2023-06-19T20:11:18.426743Z INFO Worker{worker.id=06aa9ebe-1cab-42fb-ac4b-54bbe7954ba2 worker.queue=default worker.operation.id=fe75d47d-f50d-43d6-921f-795aa50a1b68 worker.operation.name=process}:Job{execution_id=83235752-79dd-4e42-a6f5-d6e32c2e95a9 job.id=ed8bcdbd-4e78-464e-9ae0-871f3d79fd92 job.name=SendActivityTask}: activitypub_federation::core::activity_queue: Target server https://lemmy.ca/inbox rejected https://lemmy.my-domain-redacted.ca/activities/follow/c4b74591-767e-42a0-a160-5023e67c77aa, aborting’
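    In case anyone else hits the same wall, the checks I have been running so far (not sure yet which, if any, is the culprit) are that the host clock is in sync and that the instance actually answers ActivityPub requests behind the proxy; the user path below is a made-up placeholder:

    # clock skew makes remote instances reject signed activities
    timedatectl

    # remote servers fetch actors as ActivityPub JSON; this should return JSON, not the HTML frontend
    curl -s -H 'Accept: application/activity+json' https://lemmy.my-domain-redacted.ca/u/some_user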