I’m really enjoying Lemmy. I think we’ve got some growing pains in UI/UX, and we’re missing some key features (like community migration and actual redundancy). But how are we going to collectively pay for this? I saw an (unverified) post that Reddit received $400M from ads last year. Lemmy isn’t going to be free. Can someone with actual server experience chime in with some back-of-the-napkin math on how expensive it would be if everyone migrated from Reddit?

  • eekrano@lemmy.ml · 1 year ago

    Is there an approximate specs-per-user-count guide for sizing a Lemmy instance?

    • mer@lemmy.world · 1 year ago

      Yeah, I’d love to get an approximate sense of how much these instances cost

    • kinther@lemmy.world · 1 year ago

      I haven’t seen one yet. Disk usage this morning on lemmy.world was reported at about 4GB over 11 days (probably low usage). At that rate, the 100GB drive would fill up in about 275 days if usage did not increase. If it’s not redundant and dies, all that content is lost.

      So storage will be a huge issue for lemmy unless I’m missing something.
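      The estimate above is easy to sanity-check. A minimal sketch of the same back-of-the-napkin math, using the figures from this comment (4 GB over 11 days, a 100 GB drive) and, like the comment, ignoring the 4 GB already consumed:

```python
# Napkin math from the thread: ~4 GB of growth over 11 days,
# projected onto a 100 GB drive at a constant rate.
used_gb, days_elapsed = 4, 11
drive_gb = 100

daily_rate_gb = used_gb / days_elapsed      # ~0.36 GB/day
days_until_full = drive_gb / daily_rate_gb  # ~275 days

print(f"{daily_rate_gb:.2f} GB/day, drive full in ~{days_until_full:.0f} days")
```

      This reproduces the ~0.36 GB/day and ~275-day figures; real growth would of course accelerate with user count.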

      • nwithan8@lemmy.world · 1 year ago

        100GB is practically nothing nowadays.

        There are people (myself included, not to brag) running home servers with literally hundreds of terabytes of data. At that ~0.3 GB/day number, I alone could host 3,500 years’ worth of data. Get some of those r/DataHoarders and r/HomeLab guys on here and Lemmy would never run out of space.
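        The same napkin math backs up the “thousands of years” claim. A quick sketch, assuming a hypothetical 400 TB server (a made-up figure in the “hundreds of terabytes” range, not a number from the thread) and the ~0.3 GB/day rate quoted above:

```python
# Rough check of the "3,500 years" claim, under assumed figures:
# a hypothetical 400 TB server and the ~0.3 GB/day growth rate.
capacity_gb = 400 * 1000  # 400 TB expressed in GB
daily_rate_gb = 0.3

years_of_hosting = capacity_gb / daily_rate_gb / 365.25
print(f"~{years_of_hosting:.0f} years of content")
```

        That lands around 3,650 years, the same order of magnitude as the claim, though it assumes the growth rate never changes.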

        • aaron · edited · 1 year ago

          Considering Lemmy is absolutely tiny compared to Reddit, these aren’t numbers worth considering. Every single instance needs to mirror data, and I still don’t understand how this is supposed to scale to even a fraction of Reddit’s size unless the federation is just a few enormous instances that can afford that scale. It’s not like everyone pitches in what they can: every instance individually needs to be able to support the entire dataset it mirrors (the portion its users have requested access to) plus the associated synchronization traffic.

          • bizzwell@lemmy.world · 1 year ago

            What if there were a couple of large archive mirrors, and posts on other servers had a life expectancy based on time but also on engagement? Crucial posts could be stickied, but I don’t see the need for everyone to hold onto everything forever. Even in the event of a catastrophic loss of the archives, the communities could still live on and rebuild.
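            A retention rule like this could be very simple. A hypothetical sketch (every name and threshold here is made up for illustration, not anything Lemmy implements): a post gets a base lifetime, extended per point of engagement, with a hard cap.

```python
from datetime import datetime, timedelta

# Hypothetical retention rule: base lifetime in days, extended by
# engagement (votes + comments), capped at max_days. All parameters
# are illustrative, not from any real Lemmy implementation.
def should_retain(posted_at, votes, comments, now=None,
                  base_days=30, bonus_days_per_point=1, max_days=365):
    now = now or datetime.now()
    engagement = votes + comments
    lifetime_days = min(base_days + engagement * bonus_days_per_point, max_days)
    return now - posted_at <= timedelta(days=lifetime_days)

# A 45-day-old post with no engagement gets pruned; a popular one is kept.
old_quiet = datetime.now() - timedelta(days=45)
print(should_retain(old_quiet, votes=0, comments=0))    # False
print(should_retain(old_quiet, votes=50, comments=10))  # True
```

            Small instances could tune `base_days` down; archive mirrors would simply never prune.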

            • aaron · 1 year ago

              Yeah, that’s a good point. I imagine there’d have to be some compromise like that for smaller instances. How often do users load content older than, say, a couple of weeks or a month? It could be a hindrance to the experience; it’s hard for me to estimate (even for myself) how often that happens.