Exactly, and all of this is a simple matter of having multiple models trained on different instances of the entire public internet and determining whether their outputs contradict each other or a web search.
I wonder how they kept search engine results from contradicting the data found through a web search before LLMs became a thing?
No shit. Maybe they should just get rid of the extra bullshit generator and serve the sources directly, instead of piling more LLMs onto a problem that only exists because of them.