nayminlwin@lemmy.ml to Linux@lemmy.ml · 11 months ago — Any experience with teaching kids Linux?
webghost0101@sopuli.xyz · 11 months ago: Can you tell me something about what card you used to run what LLM? What is its performance? There is so little out there about this.
ProperlyProperTea@lemmy.ml · 11 months ago: I have an RX6800XT and I use KoboldCPP to run models I download off of Huggingface. I'm not sure how many tokens per second it generates, probably about 10? If you want to try it yourself, here's a link to the GitHub page: https://github.com/LostRuins/koboldcpp
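
Once KoboldCPP is running with a model loaded, it serves a KoboldAI-compatible HTTP API locally, so you can script against it instead of using the web UI. Here's a minimal Python sketch, assuming the default address and port (http://localhost:5001 — check the launch output for the actual URL on your setup):

```python
import json
import urllib.request

# Assumed default KoboldCPP endpoint; change host/port if you launched it differently.
API_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Explain what a GGUF model file is in one sentence.",
    "max_length": 80,     # tokens to generate
    "temperature": 0.7,
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

# KoboldAI-style responses put the generated text under results[0]["text"].
print(result["results"][0]["text"])
```

Generation speed depends on the model size, quantization, and how many layers you offload to the GPU, so the ~10 tokens/s figure is just a rough ballpark for this card.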