First post in this brand new community! Welcome.

Ghostpad is an interface built on top of a light fork of KoboldAI. You can check out the project repo for more information.

I’ll be sharing updates here as they happen, as well as in the #ghostpad channel of the KoboldAI Discord.

This weekend there were a few important backend changes:

  1. Support for Exllamav2. After manually installing exllamav2 into the Kobold environment, it should be detected and available in your list of model backends. I’m not auto-installing it because building it on Windows can be quite a project; my experience on Linux has been smooth. I hope to make this a more automated process once the build process is simplified.

On Linux, the process should be as simple as cd’ing into your Kobold directory and entering:

```sh
git clone https://github.com/turboderp/exllamav2.git
sh commandline.sh
pip install ./exllamav2
```
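For reference, detecting an optionally installed backend like this typically comes down to probing for the module. A minimal stdlib-only sketch of the idea (the function name and backend list are my own, not Ghostpad’s actual detection code):

```python
import importlib.util

def backend_available(module_name: str) -> bool:
    """Return True if the named backend package is importable.

    Mirrors the idea of listing Exllamav2 only once it has been
    installed into the environment (a sketch, not Ghostpad's code).
    """
    return importlib.util.find_spec(module_name) is not None

# "exllamav2" will only appear once you've run the install steps above.
backends = [m for m in ("transformers", "exllamav2") if backend_available(m)]
```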
  2. Support for AutoAWQ. If you follow the latest Llama model releases on Hugging Face, you’ve probably noticed a huge number of releases in this format. AutoAWQ is now an auto-installed dependency of koboldai-ghostpad and should not require any additional steps to use. There is a “fuse layers” option which can greatly improve performance, but I’ve encountered random errors when using it, so I recommend leaving it off for now.
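If you want to experiment with the fuse-layers toggle anyway, AutoAWQ exposes it as the `fuse_layers` keyword of `AutoAWQForCausalLM.from_quantized`. A small sketch of keeping it off by default (the wrapper name and structure are my own, not Ghostpad’s code):

```python
def awq_load_kwargs(fuse_layers: bool = False) -> dict:
    """Keyword arguments for AutoAWQForCausalLM.from_quantized.

    fuse_layers defaults to False here: fused layers can greatly
    improve performance but have produced random errors in testing.
    """
    return {"fuse_layers": fuse_layers}

# Usage (requires autoawq installed and an AWQ model available):
# from awq import AutoAWQForCausalLM
# model = AutoAWQForCausalLM.from_quantized(model_path, **awq_load_kwargs())
```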

  3. Python 3.10 bump. Official KoboldAI is still on Python 3.8, but I’m making the leap so that I can support the type-hinting features used in AutoAWQ. It’s possible that this may lead to some unexpected issues, but it’s been stable for me so far.

I don’t want to make any promises, but one of the features I’m most interested in implementing next is a combination of speech-to-text and text-to-speech, allowing you to have audio-only conversations with your AI.