- cross-posted to:
- technology@lemmit.online
ChatGPT has meltdown and starts sending alarming messages to users: AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them
Yes, but that only works if we can differentiate that data at a pretty big scale. The only way I can see it working at scale is by having metadata to declare whether something is AI-generated or not. But then we’re relying on self-reporting, so a lot of people have to get on board with it, and bad actors can poison the data anyway. Another way could be to hire humans to chatter about specific things you want to train it on, which could guarantee better data but be quite expensive. Only training on data from before LLMs will turn it into an old person pretty quickly, and it will be noticeable when it doesn’t know pop culture or modern slang.
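The self-reporting problem above can be made concrete with a minimal sketch. Everything here is invented for illustration: the `ai_generated` flag, the record shape, and the filter itself are assumptions, not any real dataset's schema.

```python
# Hypothetical sketch of filtering a training corpus by a
# self-reported "ai_generated" metadata flag.

def filter_human_authored(records):
    """Keep only records not flagged as AI-generated.

    Relies entirely on self-reporting: a record with no flag at all
    is treated as human-authored, which is exactly the loophole a
    bad actor (or an honest omission) exploits to poison the data.
    """
    return [r for r in records if not r.get("ai_generated", False)]


corpus = [
    {"text": "a human-written post", "ai_generated": False},
    {"text": "synthetic chatter", "ai_generated": True},
    {"text": "unlabeled post"},  # missing flag: assumed human, may be wrong
]

clean = filter_human_authored(corpus)
# the unlabeled record passes through alongside the genuinely human one
```

The unlabeled record slipping through is the whole point: the filter is only as good as the honesty and coverage of the labels.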
Pretty sure this is why they keep training it on books, movies, etc. - it’s already intended to make sense, so it doesn’t need to be curated.