InternetPirate@lemmy.fmhy.ml to Singularity | Artificial Intelligence (AI), Technology & Futurology@lemmy.fmhy.ml · edited, 1 year ago
Microsoft LongNet: One BILLION Tokens LLM — David Shapiro ~ AI (06.07.2023) (youtube.com)
Cross-posted to: models@lemmy.intai.tech
Martineski@lemmy.fmhy.ml · 1 year ago
"We could have AI models in a couple years that hold the entire internet in their context window." That's a really bold claim.
Behohippy@lemmy.world · 1 year ago
Also not sure how that would be helpful. If every prompt needs to rip through those tokens first, before predicting a response, it'll be stupid slow. Even now with llama.cpp, it's annoying when it pauses to do the context-window shuffle thing.
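A back-of-envelope calculation makes the point above concrete. The throughput number here is an assumption picked for illustration, not a benchmark: even if attention over the context were perfectly linear, just reading a billion tokens once before the first output token would take hours.

```python
# Assumed prefill throughput; real numbers vary wildly by hardware and model.
tokens = 1_000_000_000          # a billion-token context window
tokens_per_second = 50_000      # optimistic assumed prefill speed
seconds = tokens / tokens_per_second
print(f"{seconds / 3600:.1f} hours")  # prints "5.6 hours"
```

So a giant context window only helps if the model doesn't have to re-read all of it on every prompt.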
Martineski@lemmy.fmhy.ml · 1 year ago
Yeah, long-term memory where the AI can access only what it needs/wants is the way to go.
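The retrieval-style memory described above can be sketched in a few lines. This is an assumption about what such a system might look like (not LongNet's mechanism): store past messages as vectors and, per query, pull back only the top-k most similar ones instead of keeping everything in the context window. The bag-of-words "embedding" is a toy stand-in for a learned embedding model.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a learned model.
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class Memory:
    def __init__(self) -> None:
        self.items: list[tuple[str, Counter]] = []  # (text, embedding) pairs

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def recall(self, query: str, k: int = 2) -> list[str]:
        # Return only the k stored texts most similar to the query,
        # so the model's prompt stays small no matter how much is stored.
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mem = Memory()
mem.add("User prefers metric units.")
mem.add("The project deadline is Friday.")
mem.add("User's cat is named Pixel.")
print(mem.recall("what units does the user like?", k=1))
# prints ['User prefers metric units.']
```

The prompt then only carries the recalled snippets, which is what makes this cheaper than a giant context window.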
Luovahulluus@lemmy.world · 1 year ago (edited)
For now, I'd be happy with an AI that had access to and remembered the beginning of our conversation.