To limit hallucinations, you need a search engine, feed its results to your LLM to summarize, then have a second AI model judge the summarization to detect and reject hallucinations.
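Concretely, the pipeline looks something like this. A minimal sketch: `search_engine`, `summarizer_llm`, and `judge_llm` are hypothetical stand-ins for whatever search API and model endpoints you'd actually wire in, not any particular library.

```python
# Hypothetical search -> summarize -> judge pipeline.
# All three components below are placeholder stubs, not real APIs.

def search_engine(query: str) -> list[str]:
    """Hypothetical: return raw documents matching the query."""
    return [f"document about {query}"]  # placeholder results

def summarizer_llm(query: str, documents: list[str]) -> str:
    """Hypothetical: the first LLM summarizes the retrieved documents."""
    return f"Summary of {len(documents)} document(s) for '{query}'"

def judge_llm(summary: str, documents: list[str]) -> bool:
    """Hypothetical: a second model checks whether the summary is
    grounded in the documents; True means no hallucination detected."""
    return True  # placeholder verdict

def answer(query: str, max_retries: int = 2) -> str:
    """Search once, then summarize and judge; retry on rejection."""
    documents = search_engine(query)
    for _ in range(max_retries + 1):
        summary = summarizer_llm(query, documents)
        if judge_llm(summary, documents):
            return summary
    raise RuntimeError("Judge rejected every summary attempt")
```

Note the cost: every answer is one search call plus at least two model calls, and each rejection adds two more.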
So yeah, really inefficient for now. As if LLMs were stochastic parrots, inadequate for this task...
#OhWait