I have experience running servers, but I'd like to know if it's possible. I just need a private LLM roughly on par with GPT-3.5.

  • MasterNerd · 4 months ago

    Look into Ollama. It shouldn't be an issue if you stick to 7B-parameter models.
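
    To make that concrete, here's a minimal sketch of getting a local 7B model running with Ollama. This assumes a Linux box and uses `mistral` (a 7B model) as an example tag; any similarly sized model from the Ollama library works the same way.

```shell
# Install Ollama (Linux install script from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a 7B model and chat with it locally
ollama pull mistral
ollama run mistral
```

    Ollama also runs a local HTTP API (by default on port 11434), so other services on the same server can query the model without any cloud dependency.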

    • TheBigBrother@lemmy.worldOP · 4 months ago

      Yeah, I did see something related to what you mentioned and I was quite interested. What about quantized models?

      • entropicdrift@lemmy.sdf.org · 4 months ago

        A quantized model with more parameters is generally better than a floating-point model with fewer. If you can squeeze a 14B-parameter model down to 4-bit integer quantization, it'll still generally outperform a 7B-parameter model at 16-bit floating point.
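
        The memory trade-off behind that claim is simple arithmetic: weight storage is roughly parameters × bits per weight. A back-of-the-envelope sketch (this ignores KV cache and runtime overhead, and real quantization formats add a little per-block metadata, so treat these as lower bounds):

```python
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes: params * bits / 8 bits-per-byte / 1e9."""
    return n_params * bits_per_weight / 8 / 1e9

# 7B model at 16-bit floating point vs. 14B model at 4-bit integer quantization
fp16_7b = weight_memory_gb(7e9, 16)    # ~14 GB
int4_14b = weight_memory_gb(14e9, 4)   # ~7 GB

print(f"7B  @ fp16: {fp16_7b:.1f} GB")
print(f"14B @ int4: {int4_14b:.1f} GB")
```

        So the quantized 14B model needs about half the memory of the fp16 7B one while usually answering better, which is why quantization matters so much for consumer hardware.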

        • TheBigBrother@lemmy.worldOP · 4 months ago

          Interesting information, mate. I'm reading up on the subject, thx for the help 👍👍

      • MasterNerd · 4 months ago

        Honestly, I don't have any experience with them, so I can't help you there.