Zed Clampet
Community Contributor
@BeardyHat @Pifanjr Zedllucination correction: I think I was specifically talking to you when I had my zedllucination. I said that I thought the reason for LLM hallucinations was a conflict between what they were designed to do and how they are being used, and that they were designed to run simulations.
Well, actually: earlier AI had been designed for simulations, but, according to CoPilot, the purpose of LLMs was to create an Artificial General Intelligence. Even as recently as a year or so ago, the AI folks believed that if they just kept feeding LLMs training data, the models would eventually become AGIs. All hope for that has been lost.
So why do hallucinations occur? Don't know. All I have is the traditional answer everyone knows: they don't know anything. They are just selecting the next language token based on the internal logic they developed during training. I have problems with this explanation, but the people who espouse it actually know what they're talking about, so I'll keep my theory to myself rather than switching on Auto-Idiot (TM) and typing away as usual.
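For anyone curious what "selecting the next token" actually looks like, here's a minimal sketch using the Hugging Face transformers library and the small gpt2 checkpoint (my choice purely for illustration; it has nothing to do with CoPilot or any particular product). The point is that the model only ever produces a score for every possible next token, and the decoder picks one. There's no fact lookup anywhere in the loop:

```python
# Minimal sketch of greedy next-token selection, assuming the
# Hugging Face "transformers" library and PyTorch are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every vocabulary token

# The model doesn't "know" the answer; it just scores continuations.
# Taking the highest-scoring token is greedy decoding.
next_id = torch.argmax(logits[0, -1]).item()
print(tokenizer.decode(next_id))  # prints the single most likely next token
```

Whether the printed token is true or hallucinated, the mechanism is identical, which is the traditional explanation in a nutshell.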