https://github.com/oobabooga/text-generation-webui/commit/9f40032d32165773337e6a6c60de39d3f3beb77d

ExLlama is an extremely optimized GPTQ backend for LLaMA models. It features much lower VRAM usage and much higher speeds due to not relying on unoptimized transformers code.

It is a highly optimized model loader for GPTQ models. It's an alternative to options like AutoGPTQ or GPTQ-for-LLaMA, and it provides faster text generation speeds.
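
For anyone who wants to see what this looks like outside the webui, here is a rough sketch of loading a GPTQ model with the standalone ExLlama library, modeled on the example scripts in its repository (the model folder is hypothetical, and exact class or attribute names may differ between versions):

```python
# Rough sketch based on ExLlama's example scripts; treat names as illustrative.
import glob, os

from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

model_dir = "models/llama-30b-4bit-gptq"          # hypothetical model folder

config = ExLlamaConfig(os.path.join(model_dir, "config.json"))               # base LLaMA config
config.model_path = glob.glob(os.path.join(model_dir, "*.safetensors"))[0]   # GPTQ weights

model = ExLlama(config)                           # load the quantized weights onto the GPU
cache = ExLlamaCache(model)                       # KV cache used during generation
tokenizer = ExLlamaTokenizer(os.path.join(model_dir, "tokenizer.model"))

generator = ExLlamaGenerator(model, tokenizer, cache)
generator.settings.temperature = 0.7

print(generator.generate_simple("Once upon a time,", max_new_tokens=128))
```

Inside text-generation-webui itself, the same backend is simply selected as the ExLlama loader for a GPTQ model once this update is in.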

With this update, anyone running GGML models exclusively might find some interesting results by switching over to a GPTQ-quantized model and testing the changes. I haven't had a chance to test it myself yet, but I will post some of my own benchmarks and results if I find the time.

I for one am excited to see the efficiency battles begin. Getting compute requirements down is going to be one of the most important hurdles to overcome.

  • Blaed@lemmy.world (OP)

    A comment from a Reddit user (Fuzzlewhumper) regarding these changes:

    What would take me 2-3 minutes of wait time for a GGML 30B model takes 6-8 seconds pause followed by super fast text from the model - 6-8 tokens a second at least. Faster than I normally type. Yup, had it describe the characters, big old paragraph, 7.41 tokens on my 2015 machine with 32gb memory, I7-6700, and a couple cheap 3060 RTX cards. SCORE.

    I would be curious to see if the efficiency change is that drastic. I will do my best to include my findings in the larger model benchmark post I am piecing together.
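
    A quick way to measure raw generation speed, continuing the loading sketch from the post above (crude and illustrative; it assumes the same `generator` object and ignores prompt-processing time):

    ```python
    # Crude tokens/sec check; assumes `generator` from the earlier sketch.
    import time

    prompt = "Describe the main characters in one long paragraph."
    new_tokens = 200

    start = time.time()
    output = generator.generate_simple(prompt, max_new_tokens=new_tokens)
    elapsed = time.time() - start

    # Assumes the full budget of new tokens was actually generated.
    print(f"~{new_tokens / elapsed:.2f} tokens/sec")
    ```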

  • ArkyonVeil@lemmy.world

    I’m actually playing around with ExLlama. IIRC it works with pretty much every model, and it can be a real game changer, especially for long conversations, code, or stories.

    Unfortunately, there is still the unavoidable problem of the context length burning VRAM like no tomorrow (some rough numbers below). You either get a decent AI with the attention span of a goldfish, or an idiot AI that can remember three times as much stuff as before.

    Handy, progress, but ultimately there is still ground to cover.
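
    As a rough illustration of why longer context eats VRAM so quickly, here is a back-of-the-envelope estimate (assuming an fp16 KV cache and LLaMA-30B-like dimensions; real numbers depend on the model and the backend):

    ```python
    # Back-of-the-envelope KV-cache size, not exact ExLlama numbers.
    n_layers = 60        # LLaMA-30B-ish
    hidden = 6656
    bytes_fp16 = 2
    kv_per_token = 2 * n_layers * hidden * bytes_fp16   # K and V for every layer

    for ctx in (2048, 4096, 8192):
        print(f"{ctx:5d} tokens -> ~{kv_per_token * ctx / 1024**3:.1f} GiB of KV cache")
    # 2048 -> ~3.0 GiB, 4096 -> ~6.1 GiB, 8192 -> ~12.2 GiB
    ```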

    • Blaed@lemmy.world (OP)

      I keep hearing about this ExLlama! I really have to try it. Glad to hear it’s going well for you.

      I think it’s only a matter of time until context length is no longer an issue. I’m curious to see how RWKV develops; its theoretically unbounded context length is interesting.

      I hope they make some major breakthroughs. I like the idea of a super-massive RNN, but a transformer with infinite context length could be a game changer for both architectures.
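
      The appeal, as I understand it, is that an RNN-style model like RWKV carries a fixed-size state instead of a KV cache that grows with every token. A toy contrast (purely illustrative, not RWKV’s actual recurrence):

      ```python
      # Toy contrast only: transformer-style memory grows with tokens seen,
      # RNN-style state does not.
      import numpy as np

      d = 8                       # tiny hidden size for illustration
      rnn_state = np.zeros(d)     # fixed-size state, O(1) memory
      kv_cache = []               # transformer-style cache, O(n) memory

      for token_embedding in np.random.randn(1000, d):
          rnn_state = np.tanh(0.9 * rnn_state + token_embedding)   # stays size d
          kv_cache.append(token_embedding)                         # grows per token

      print(rnn_state.shape)      # (8,)  -> unchanged after 1000 tokens
      print(len(kv_cache))        # 1000  -> one entry per token
      ```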

      • ArkyonVeil@lemmy.world

        It would be absolutely awesome. Infinite context length would make handling models so much easier. I could be lazy and, instead of creating a LoRA, just use an entire book’s style as a reference right there in the prompt.

        For programmers: just dump in the entire codebase, or the documentation.

        Of course, all this is only possible if VRAM is less of a bottleneck than it currently is, and if the model can reliably reference information across an arbitrarily large context. (There’s not much use in a huge context if performance degrades, the model loses its marbles, or it forgets key pieces of information along the way.)

        • Blaed@lemmy.world (OP)

          I’m with you there. I love how Mosaic just fed the entire Great Gatsby to StoryWriter. This is the sort of context length I need in my life. Would make my projects so much easier. I don’t think we’re too far from having it on consumer hardware.

          You should check out my latest post, which ironically addresses parts of your first comment. You still need a lot of VRAM, but 6000+ tokens of context is now possible with ExLlama.

          It’s crazy to see how fast these developments are happening!
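
          For reference, the longer contexts mentioned above are set up through ExLlama’s config before the model is loaded; something along these lines (the attribute names and the SuperHOT-style model folder are assumptions on my part, so double-check against the repo):

          ```python
          # Hypothetical extended-context setup; attribute names are from memory
          # of the ExLlama repo and may differ between versions.
          from model import ExLlama, ExLlamaConfig   # as in the earlier sketch

          config = ExLlamaConfig("models/llama-30b-superhot-gptq/config.json")
          config.model_path = "models/llama-30b-superhot-gptq/model.safetensors"
          config.max_seq_len = 6144         # allow ~6000+ tokens of context
          config.compress_pos_emb = 3.0     # RoPE scaling for SuperHOT-style models
          model = ExLlama(config)           # VRAM use still scales with max_seq_len
          ```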