Hello all,

I’m going to be adding 50 micro PCs to my homelab, something similar to this: https://i.imgur.com/SvVeHLu.png. Each unit is roughly 4.5 in x 4.5 in x 2.5 in and weighs very little.

I’d like to find a way to rackmount these in the most space-efficient manner possible, for example with each unit slotted in vertically from the top, similar to this mock-up I made: https://i.imgur.com/zRc4b7G.png

My research so far has turned up this: https://i.imgur.com/AWznyB5.png, which is a simple metal rackmount tray on a slider. I imagine I could build some sort of support framework on top of it to hold the units as they slide in, although I’m not entirely sure how. Maybe I could 3D print something.

Would anybody have any ideas on how I could build a server rack that would support something like this, ideally something that is on slide-out bearings?
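
For a rough sense of scale, here is a quick back-of-the-envelope estimate of how much rack space 50 of these could take; the rack interior width, the per-unit gap, and the vertical slot-in orientation are just assumptions based on my mock-up, not measurements:

```python
# Rough packing estimate for vertically slotted mini PCs in a 19-inch rack.
# Assumptions: EIA-310 rack with ~17.75 in of usable width between the rails,
# 1U = 1.75 in, units standing on their 4.5 x 2.5 in face as in the mock-up,
# and ~0.1 in of clearance per unit for a guide/slot.
import math

unit_w, unit_h = 2.5, 4.5   # inches: width and height when standing on edge
gap = 0.1                   # assumed clearance per unit
usable_width = 17.75        # interior width of a 19" rack
u_height = 1.75             # height of 1U in inches

per_row = math.floor(usable_width / (unit_w + gap))   # units side by side
rows_needed = math.ceil(50 / per_row)                 # rows for 50 units
u_per_row = math.ceil(unit_h / u_height)              # rack units per row

print(f"{per_row} units per row, {rows_needed} rows, "
      f"~{rows_needed * u_per_row}U total (before fans and cabling)")
# -> 6 units per row, 9 rows, ~27U total (before fans and cabling)
```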

Note: I have a pretty healthy budget to buy or modify whatever is needed, so that should open up some options.

Thanks in advance!

  • LAKnerd@alien.topB

    Why not VMs? A Dell PowerEdge FX2s with 4 x FC630 nodes, or any other multi-node server, will not only give you density but also the ability to scale out, with the option of just turning off one or more of the nodes. There are also the Hyve Zeus 1U servers, which run pretty quiet and can also scale out depending on how many you have turned on. They’re absolutely no-frills and only have room for two 2.5" drives each, but they’re Supermicro-based, so there’s plenty of documentation.

    • StartupTim@alien.topOPB

      Why not VMs? A Dell PowerEdge FX2s with 4 x FC630 nodes

      It is a fair question! The price/performance of what I am building far exceeds that of what you’re suggesting.

      For example, for $23k my setup will have 700 CPU cores and 1.6TB of memory, plus 25TB of NVMe storage (which I don’t actually need) and a decent amount of clustered GPU compute, although that isn’t the goal.

      700 fast processing cores for $23k is just not possible using server architecture at this time.
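
      In case it's useful, here is the rough math behind those numbers, as a quick sketch; it uses only the figures above plus the 50 units from my original post:

```python
# Back-of-the-envelope price/performance from the figures above.
# Assumes the 50-node cluster described in the original post.
budget = 23_000          # USD
cores = 700
memory_tb = 1.6
nodes = 50

print(f"${budget / cores:.2f} per core")                  # ~$32.86 per core
print(f"{cores / nodes:.0f} cores per node")               # 14 cores per node
print(f"{memory_tb * 1024 / nodes:.0f} GB RAM per node")   # ~33 GB per node
print(f"${budget / nodes:.0f} per node")                   # ~$460 per node
```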

      • LAKnerd@alien.topB

        I just happen to have a spreadsheet that covers compute density for all models from Dell, HPE, and Supermicro…

        Your cost/density sweet spot is going to be the 2U/4-node platforms from Dell or Supermicro that use the Xeon E5-2600 v4 or Scalable v2 CPUs. There’s a wide selection of this stuff on eBay, so it’s definitely available. At 16 cores/CPU, two CPUs per node, and four nodes per chassis, that’s 128 cores per 2U unit, so you’ll need 6 units.

        For the Dell FX2s with 4 x FC630 nodes, 256GB of memory per node, and 32 cores per node, I spec’d one out at $2,400 before storage and GPU.
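
        Roughly, here is how that math works out, as a sketch using only the numbers quoted in this thread (the per-chassis price is the $2,400 figure above, before storage and GPU):

```python
# Rough density/cost math from the figures in this thread (before storage/GPU).
import math

target_cores = 700                      # OP's stated target
cores_per_node = 32                     # FC630 node as spec'd above (2 x 16 cores)
nodes_per_chassis = 4                   # an FX2s holds 4 half-width nodes
cores_per_chassis = cores_per_node * nodes_per_chassis   # 128 cores per 2U

chassis_needed = math.ceil(target_cores / cores_per_chassis)   # 6 chassis
price_per_chassis = 2_400               # the $2,400 figure quoted above

print(f"{chassis_needed} x 2U chassis = {chassis_needed * 2}U, "
      f"{chassis_needed * cores_per_chassis} cores, "
      f"~${chassis_needed * price_per_chassis:,} before storage/GPU")
# -> 6 x 2U chassis = 12U, 768 cores, ~$14,400 before storage/GPU
```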

    • StartupTim@alien.topOPB

      Dell PowerEdge FX2s with 4 x FC630

      From a cost-to-performance perspective, I don’t believe the solution you mentioned would be very attractive. For example, what I am looking to build would be 700 pretty fast physical CPU cores (4.4GHz) for $23k, with more cores/speed available for an incremental price increase. I haven’t found any server solution, used or new, that can compare to that in raw processing power.

  • 100GHz@alien.topB

    Is the primary air intake from the front of the box?

    At 50 boxes, even if they only dissipate, say, 20W each, you are still looking at 1kW of hot air that needs to be removed.

    That being said, I’d stack them in a manner where airflow is the primary concern.

    If airflow is front to back in that box, stack them tight like a brick wall, leave no space around them, and put enough fans at the back to pull all that heat out.

    If the box expects air from the sides too, then I guess you’ll end up spacing them out and adding even more fans to pull the less efficiently cooled air out.

    You want to measure the temps there as you are building/running this.

    These are consumer boxes too, so you really want to buy a fire extinguisher as well; they are very cheap nowadays.
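
    As a rough sanity check on fan sizing, here is a quick sketch; the ~1kW total from above and a 10°C allowed temperature rise are assumed numbers, not measurements:

```python
# Rough airflow needed to carry away the heat at a given air temperature rise.
# Assumptions: ~1 kW total dissipation (50 x 20 W as above), air density
# ~1.2 kg/m^3 and specific heat ~1005 J/(kg*K) at room conditions.
heat_w = 50 * 20          # total heat load, watts
delta_t = 10              # allowed air temperature rise, degrees C
rho, cp = 1.2, 1005       # kg/m^3, J/(kg*K)

flow_m3s = heat_w / (rho * cp * delta_t)      # volumetric flow, m^3/s
flow_cfm = flow_m3s * 2118.88                 # 1 m^3/s ~= 2118.88 CFM

print(f"~{flow_m3s * 3600:.0f} m^3/h (~{flow_cfm:.0f} CFM) at a {delta_t} C rise")
# -> ~299 m^3/h (~176 CFM) at a 10 C rise
```

    For context, that is on the order of two or three typical 120mm case fans, assuming an unobstructed front-to-back path.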

  • updown_side_by_side@alien.topB

    50 units is a lot. I would consider not putting them in a standard rack and instead building a custom rack out of aluminum extrusion profiles, such as 2020, 3030, or 4040.

    • StartupTim@alien.topOPB

      building a custom rack out of aluminum extrusion profiles, such as 2020, 3030, or 4040.

      Could you expand on what you mean by 2020, 3030, 4040, etc.? I’d definitely like to hear more, thanks!

  • cruzaderNO@alien.topB

    Strange setup for AI, given the specs/chips usually found in NUCs/minis.

    But unless you plan to strip the PCBs out of their cases and that sort of thing, they will not be happy running packed that densely.

    If you’ve got a healthy budget, I’d expect you to look towards hardware made for racks and this use case.

    • StartupTim@alien.topOPB

      I’ve already done the testing and hardware validation, so we’re beyond that point. Simply put, the mini PC hardware just has better price/performance than any server counterpart I’ve seen or tested so far.

      All my testing shows that, from a price and performance perspective, this would be the route to go. So at this point, I’m just looking for the rackmount solution.

      • cruzaderNO@alien.topB

        So at this point, I’m just looking for the rackmount solution.

        And multiple options have been posted, so problem solved then, I guess.