That’s why anything with human interaction is classed as “limited risk” and comes with transparency requirements: ChatGPT, for instance, needs to tell you that it’s an LLM.

Then again, humans are already plenty capable of generating massive volumes of convincing-sounding nonsense. Whether you hire ChatGPT or a morally flexible call centre doesn’t really make much of a difference.