• redcalcium@c.calciumlabs.com
    1 year ago

    There is also a rumor that OpenAI has changed how the model runs: user input is fed into a smaller model first, and if the larger model agrees with the smaller model's initial result, the larger model continues the computation from where the smaller model left off, which supposedly cuts down GPU time.
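
    The setup described in the rumor resembles the publicly known "speculative decoding" idea, where a cheap draft model proposes tokens and a larger model only verifies them. The sketch below is a toy illustration of that general pattern, not OpenAI's actual pipeline: `small_model`, `large_model`, the vocabulary, and the acceptance rule are all made-up placeholders.

    ```python
    import random

    # Toy "models": each maps a token prefix to a next-token probability
    # distribution. Placeholders standing in for a small draft model and
    # a large verifier model; not real LLMs.
    VOCAB = ["the", "cat", "sat", "on", "mat", "."]

    def small_model(prefix):
        # Cheap draft model: near-uniform guess.
        return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

    def large_model(prefix):
        # Expensive model: strongly prefers continuing a fixed sentence.
        target = ["the", "cat", "sat", "on", "the", "mat", "."]
        preferred = target[len(prefix)] if len(prefix) < len(target) else "."
        return {tok: (0.9 if tok == preferred else 0.1 / (len(VOCAB) - 1))
                for tok in VOCAB}

    def sample(dist):
        toks, probs = zip(*dist.items())
        return random.choices(toks, weights=probs, k=1)[0]

    def speculative_generate(prompt, max_tokens=8, draft_len=3):
        """Draft a few tokens with the small model, then let the large
        model verify them; accepted tokens are kept, and the first
        rejected token is replaced by the large model's own choice."""
        out = list(prompt)
        while len(out) - len(prompt) < max_tokens:
            # 1. Small model drafts a short continuation cheaply.
            draft = []
            for _ in range(draft_len):
                draft.append(sample(small_model(out + draft)))
            # 2. Large model checks each drafted token in order.
            accepted = 0
            for i, tok in enumerate(draft):
                big_dist = large_model(out + draft[:i])
                # Simple acceptance rule for illustration: keep the token
                # if the large model also ranks it highest (real schemes
                # use probability ratios instead).
                if max(big_dist, key=big_dist.get) == tok:
                    accepted += 1
                else:
                    out.extend(draft[:accepted])
                    out.append(max(big_dist, key=big_dist.get))  # large model's fix
                    break
            else:
                out.extend(draft)  # whole draft accepted in one pass
        return out

    print(" ".join(speculative_generate(["the"])))
    ```

    The potential GPU savings come from the large model verifying several drafted tokens in one pass instead of generating each token itself, so a run of accepted drafts costs roughly one large-model step instead of several.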