Lugh@futurology.today to Futurology@futurology.today · English · 1 year ago
Researchers outline an easy way for LLMs to erase copyright material from their training data, without completely retraining them. (arxiv.org)
SchizoDenji · 1 year ago
A lot of LLMs are already out there as open source for anyone to embed or fine-tune themselves. Whoever wants to shackle their own LLM and lose ground is free to do so, but they won't.