This is an abstract curiosity. Let’s say I want to use an old laptop to run an LLM. I assume I would still need pytorch, transformers, etc. What is the absolute minimum system configuration required to avoid overhead such as schedulers, kernel threads, virtual memory, etc.? Are there options to expose the bare metal and use a networked machine to manage the overhead? Maybe a way to connect the extra machine as if it were an extra CPU socket or NUMA node? Basically, I want to turn an entire system into a dedicated AI compute module.

  • InvertedParallax · 1 year ago

    Actually, simple CPU pinning might be enough if you’re seeing a lot of thrash, but most ML frameworks use something like OpenMP to handle that automatically.
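
    For example, here’s a minimal sketch of what pinning could look like on Linux, assuming a 4-core machine (the core IDs and thread counts are placeholders to adjust for your hardware, and the OpenMP variables have to be set before torch is imported):

    ```python
    import os

    # Pin this process (and its threads) to cores 0-3 so the kernel
    # scheduler stops migrating inference threads between cores.
    os.sched_setaffinity(0, {0, 1, 2, 3})

    # One OpenMP thread per pinned core, each bound in place.
    # These must be set before importing torch/numpy to take effect.
    os.environ["OMP_NUM_THREADS"] = "4"
    os.environ["OMP_PROC_BIND"] = "true"

    import torch
    torch.set_num_threads(4)
    ```

    You can get the same effect from the shell with `taskset -c 0-3 python infer.py`, or with `numactl --cpunodebind=0 --membind=0` if the box actually has multiple NUMA nodes.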