Hallucinations, like depression, are a multifaceted issue. Training data is only a piece of it. Quantized models lose precision, and overfitted models lean on memorization instead of generalizing from otherwise correct training data. Poorly structured prompts can also confuse a model at inference time.
I guess my point was that locking all that knowledge and troubleshooting behind chat interfaces, and obscuring it from search engines, makes the internet worse.