I know that Lemmy is open source and it can only get better from here on out, but I do wonder if any experts can weigh in on whether the foundation is well written? Or are we building on top of four years’ worth of tech debt?

  • boonhet · 1 year ago

    I suppose that’s completely fair: async workers tend to fare well as standalone services and are often split off even in monoliths. But what I’m saying is that splitting it might not actually win you THAT much compared to just scaling the whole thing, not until we’re talking something like 100 runners to 1 API instance. It gives you a bit of additional flexibility, but it won’t necessarily make a huge difference in total resource cost. It’s still a good idea, though, because it results in cleaner code and, as outlined before, smaller Docker images.
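
    To make that concrete, here’s a minimal Rust-style sketch (purely illustrative, not Lemmy’s actual code; all names are made up) of why the split is cheap in code terms: the API loop and the worker loop are already separate functions behind a thin entry point, so running them as separate deployments is mostly a packaging decision rather than a rewrite.

        // Illustrative only, not Lemmy's actual code: one binary that runs as
        // either the API server or the async worker, chosen by a CLI argument.
        use std::env;

        // Stand-in for the HTTP request loop (actix-web in Lemmy's case).
        fn run_api() {
            println!("serving API requests...");
        }

        // Stand-in for the background job loop (outgoing federation activities, etc.).
        fn run_worker() {
            println!("processing queued activities...");
        }

        fn main() {
            // Same code, two roles: `app api` and `app worker` replicas can be
            // scaled independently by whatever you deploy with.
            match env::args().nth(1).as_deref() {
                Some("api") => run_api(),
                Some("worker") => run_worker(),
                _ => eprintln!("usage: app <api|worker>"),
            }
        }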

    Also, the thing about Google Cloud Run is that it’s probably not a good idea for many instance owners. Autoscaling can lead to unexpected costs if set up by an amateur. But that’s an unrelated can of worms.

    • myersguy@lemmy.simpl.website · 1 year ago

      Autoscaling can lead to unexpected costs if set up by an amateur. But that’s an unrelated can of worms.

      Agreed (along with all things cloud!). That said, Cloud Run does a good job of letting you define how many concurrent requests each container can handle, as well as a minimum and maximum number of containers to scale to.
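
      For instance (the service and image names here are placeholders and the values are arbitrary), those knobs map onto flags of gcloud run deploy, and capping max instances is also the simplest guard against the surprise-bill scenario above:

          # Placeholder names/values: cap per-container request concurrency and
          # the instance count so autoscaling can't run the bill up indefinitely.
          gcloud run deploy my-lemmy \
            --image=us-docker.pkg.dev/my-project/lemmy/lemmy:latest \
            --concurrency=80 \
            --min-instances=0 \
            --max-instances=3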