• floofloof@lemmy.ca · 9 months ago

    CUDA is NVIDIA's system for programming GPUs (Graphics Processing Units), which can run far more computations in parallel than ordinary CPU code. In particular, it's widely used in AI programming for machine learning. Because CUDA has become a de facto standard, NVIDIA has quite a hold on this industry right now and can price its graphics cards very high. Intel and AMD also make powerful GPUs that tend to be cheaper than NVIDIA's, but they don't natively support CUDA, which is proprietary to NVIDIA.

    A translation layer is a piece of software that interprets CUDA commands and translates them into commands for the underlying platform, such as an AMD graphics card. So translation layers allow people to run CUDA software, such as machine learning software, on non-NVIDIA systems. NVIDIA has just changed its licence to prohibit this, so anyone running CUDA software has to do it on natively CUDA-capable hardware, which means NVIDIA hardware.
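
    To give a rough idea of what that looks like, here's a minimal CUDA C++ sketch (illustrative only, the names and sizes are made up): the add kernel below runs once per array element across thousands of GPU threads at the same time, which is the kind of parallelism a CPU can't match. A translation layer's job is to intercept the memory-allocation and kernel-launch calls here and map them onto another vendor's GPU API instead.

        #include <cstdio>
        #include <cuda_runtime.h>

        // Kernel: each GPU thread handles one element of the arrays.
        __global__ void add(const float* a, const float* b, float* c, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) c[i] = a[i] + b[i];
        }

        int main() {
            const int n = 1 << 20;            // a million floats
            size_t bytes = n * sizeof(float);

            float *a, *b, *c;
            cudaMallocManaged(&a, bytes);     // unified memory, visible to CPU and GPU
            cudaMallocManaged(&b, bytes);
            cudaMallocManaged(&c, bytes);
            for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

            // Launch enough 256-thread blocks to cover all n elements in parallel.
            add<<<(n + 255) / 256, 256>>>(a, b, c, n);
            cudaDeviceSynchronize();

            printf("c[0] = %f\n", c[0]);      // should print 3.000000
            cudaFree(a); cudaFree(b); cudaFree(c);
            return 0;
        }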

    • Varyk@sh.itjust.works · 9 months ago

      Thank you, these are really great entry-level answers, so now I can understand what the heck is going on.