I kinda want to put together a Ceph storage cluster, but I know it can take quite a bit to get good IOPS out of Ceph: good CPUs, fast enterprise drives (and I want NVMe), and also good networking. Mainly I want to see what I can get in the IOPS department; sequential throughput I'm not too worried about.

How would you guys go about this? Any good hardware choices now that prices of things have come down a good bit in the last year or so?

    • bcredeur97@alien.top (OP) · 8 months ago

      This is a great article but it definitely shows that you shouldn’t expect much

      He’s not even reaching the IOPS of a single drive in his testing :(

      I might have to find something else lol

      • HTTP_404_NotFound@alien.top · 8 months ago

        I did put the disclaimer front and center! Ceph really needs a ton of hardware before it even starts comparing to normal storage solutions.

        But, the damn reliability is outstanding.

  • user3872465@alien.top · 8 months ago

    With any kind of advanced storage software, you will never get raw drive speed out of the solution.

    Raw NVMe drives can hit 800k IOPS. Add XFS and you may still be able to get close to that.

    With mdadm you get something like 200-300k.

    ZFS shrinks that to 20-50k.

    With anything over the network, be glad if you get 5-20k.

    The more software is involved, the worse performance gets, especially for random I/O. Sequential throughput often scales better, sometimes even linearly, but IOPS are a PITA.
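
    If you want to sanity-check numbers like these on your own gear, here's a rough sketch of the kind of 4k random-read fio job people usually run against each layer in turn (raw device, XFS, mdadm, ZFS, network mount), wrapped in Python so you can swap the target path. The path below is just a placeholder, and it assumes fio is installed.

        # Minimal fio wrapper: measures 4k random-read IOPS for whatever path you point it at.
        # Assumes fio is installed; TEST_PATH is a placeholder, change it per layer under test.
        import json
        import subprocess

        TEST_PATH = "/mnt/layer-under-test/fio-testfile"  # hypothetical path

        def random_read_iops(path, runtime_s=30):
            """Run a 4k random-read fio job against `path` and return the measured IOPS."""
            cmd = [
                "fio",
                "--name=randread",
                f"--filename={path}",
                "--rw=randread",
                "--bs=4k",
                "--ioengine=libaio",
                "--direct=1",
                "--iodepth=32",
                "--numjobs=4",
                "--group_reporting",
                "--size=1G",
                "--time_based",
                f"--runtime={runtime_s}",
                "--output-format=json",
            ]
            result = subprocess.run(cmd, capture_output=True, text=True, check=True)
            data = json.loads(result.stdout)
            return data["jobs"][0]["read"]["iops"]

        if __name__ == "__main__":
            print(f"4k randread IOPS: {random_read_iops(TEST_PATH):,.0f}")

    Running the exact same job at every layer is what makes the comparison fair, and direct=1 matters so the page cache doesn't hide the difference.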

  • Pvt-Snafu@alien.top · 8 months ago

    Hmm, I guess the biggest IOPS and latency hit will come from the storage protocol in use. I mean, with 10GbE and iSCSI or NFS, you might not feel the benefits of NVMe, especially in terms of latency. And as far as I know, there is no NVMe-oF support yet.
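
    One way to see how much the protocol path costs: first measure what the cluster delivers natively over librados from a client node, then run the same small-block workload through the iSCSI/NFS gateway and compare. A rough sketch using rados bench, assuming a Ceph client with an admin keyring and a throwaway pool (the pool name here is made up):

        # Small-block rados bench from a client node: what the cluster gives you
        # before any iSCSI/NFS gateway is layered on top.
        # Assumes the "rados" CLI is installed and a benchmark pool already exists.
        import subprocess

        POOL = "bench-test"   # hypothetical pool, create one just for benchmarking
        SECONDS = "30"
        BLOCK = "4096"        # 4k objects to stress IOPS rather than throughput
        THREADS = "16"

        def run(cmd):
            print("$", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Write phase, kept with --no-cleanup so the read phase has objects to read.
        run(["rados", "bench", "-p", POOL, SECONDS, "write", "-b", BLOCK, "-t", THREADS, "--no-cleanup"])
        # Random-read phase against the objects written above.
        run(["rados", "bench", "-p", POOL, SECONDS, "rand", "-t", THREADS])
        # Remove the benchmark objects afterwards.
        run(["rados", "-p", POOL, "cleanup"])

    The average IOPS and latency lines in the output give you a baseline to compare against whatever the gateway protocol ends up delivering.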