I switched from llamacpp to koboldcpp. Koboldcpp is really fast for me because it can use the GPU. The problem is that I'm having a hard time getting it to generate long enough outputs.
For example, with a prompt like "Write an essay about the history of the moon. It needs to be at least 500 words," the same model gives me an output that's actually that long on llamacpp, but koboldcpp never gives me more than about 70 words per response. Pressing enter to make the AI continue writing, or asking it to continue, doesn't work as well in my koboldcpp setup as it does on llamacpp. I've set the tokens to generate to 512, which is the highest the setting goes, and the context tokens to 4096.
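One thing I've been meaning to try is bypassing the web UI and calling the backend directly, to check whether the short outputs come from the UI settings or from the model itself. Here's a rough sketch of how that could look (it assumes koboldcpp's KoboldAI-compatible `/api/v1/generate` endpoint on the default local port 5001; your port may differ):

```python
import requests  # pip install requests

# Sketch: hit koboldcpp's KoboldAI-compatible API directly, skipping the
# web UI entirely. Assumes the default local port 5001.
resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": ("Write an essay about the history of the moon. "
                   "It needs to be at least 500 words.\n"),
        "max_context_length": 4096,  # same context size as in the UI
        "max_length": 512,           # tokens to generate in one call
    },
    timeout=600,
)
print(resp.json()["results"][0]["text"])
```

If this also comes back at ~70 words, the model is stopping early on its own rather than being cut off by the settings.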
What else can I do to try to get longer responses?
I think I might be onto something that contributes to the problem. The built-in "KoboldGPT chat" option puts some example exchanges into the context memory, and those example responses aren't very long. I think the model sees them and uses them as a guideline for how much to say, which results in shorter answers.
If I use the "new chat" option instead of "KoboldGPT chat", nothing is put in the context at all: no prompt and no memory. This way, when I tell it to write 500 words of crap, it doesn't quite write that much, but it's a lot better than before. Pressing enter to make it generate more text also works more often this way.
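Since pressing enter basically just asks the backend to keep generating from the text so far, the same thing can be scripted against the API. This is only a sketch (it assumes the same default endpoint as above, and that feeding the output back in as the next prompt matches what the UI does on enter):

```python
import requests

API = "http://localhost:5001/api/v1/generate"  # assumed default koboldcpp endpoint

# Emulate "pressing enter to continue": keep feeding the text generated so
# far back in as the prompt until the output reaches the target word count.
text = ("Write an essay about the history of the moon. "
        "It needs to be at least 500 words.\n\n")
while len(text.split()) < 500:
    resp = requests.post(
        API,
        json={"prompt": text, "max_context_length": 4096, "max_length": 512},
        timeout=600,
    )
    chunk = resp.json()["results"][0]["text"]
    if not chunk.strip():  # the model stopped producing text; give up
        break
    text += chunk

print(text)
```

Since nothing but the prompt is in context here, it's the scripted equivalent of the "new chat" setup, with no short example chats to imitate.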