• Pennomi
    English
    11 months ago

    What we haven’t hit yet is the point of diminishing returns for model efficiency. Small, locally run models are still progressing rapidly, which means we’re going to see improvements for the everyday person, not just for corporations with huge GPU clusters.

    That in turn lets more scientists with smaller budgets experiment on LLMs, increasing the chances of the next major innovation.