AI and fact-checking: when probability replaces evidence

Tal Hagin: 

Some models, like standard generative LLMs (for example, basic ChatGPT), rely solely on patterns learned during training to generate responses. Although people often use them to verify information, they do not access external sources in real time and cannot check facts; they produce whatever is statistically likely given their training data.
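
As a rough illustration of that point, here is a toy sketch in Python. The prompt, the candidate continuations, and the probabilities are all invented for the example; the only point is that a plain generative model picks what is likely, not what is true.

```python
import random

# Toy stand-in for a plain generative model (not any real API): it samples a
# continuation from learned probabilities and has no retrieval step and no way
# to check the claim it produces.
TOY_NEXT_TOKEN_PROBS = {
    # Invented numbers: the wrong answer is deliberately the most likely one.
    "The capital of Australia is": {"Sydney": 0.55, "Canberra": 0.40, "Melbourne": 0.05},
}

def generate(prompt: str) -> str:
    """Pick a continuation by probability alone; truth never enters the picture."""
    dist = TOY_NEXT_TOKEN_PROBS.get(prompt, {"[unknown]": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(generate("The capital of Australia is"))  # will often print "Sydney"
```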

Other systems, often called retrieval-augmented models (for example, Grok, Perplexity, Gemini), combine generation with live data retrieval. When asked a question, these models can fetch relevant documents, news posts, or web content, and then generate answers conditioned on that retrieved information. This allows them to provide citations and reference recent events. However, even retrieval-augmented systems do not independently verify the accuracy of their sources; they assume the retrieved material is reliable.
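
A minimal sketch of that flow, with a stubbed-out search step and a placeholder generator (the function names and example URLs are hypothetical, not any particular product's API), shows where verification is missing:

```python
# Hypothetical retrieval-augmented pipeline: fetch documents, paste them into
# the prompt, and generate an answer with citations.

def retrieve(query: str) -> list[dict]:
    """Stand-in for live search: returns documents as-is, with no accuracy check."""
    return [
        {"url": "https://example.com/post-1", "text": "Claim X happened yesterday."},
        {"url": "https://example.com/post-2", "text": "Officials deny claim X."},
    ]

def generate(prompt: str) -> str:
    """Placeholder for the language model; a real model would write the summary."""
    return "Answer conditioned on the retrieved snippets, with citations [1][2]."

def answer(question: str) -> str:
    docs = retrieve(question)  # fetch recent material
    context = "\n".join(
        f"[{i + 1}] {d['text']} ({d['url']})" for i, d in enumerate(docs)
    )
    # The retrieved text is trusted as-is: nothing here checks whether
    # either source is reliable before it shapes the answer.
    prompt = f"Question: {question}\nSources:\n{context}\nAnswer with citations:"
    return generate(prompt)

print(answer("Did claim X happen?"))
```

The pipeline cites whatever retrieve() returns, so a false post is referenced as readily as an accurate one.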

These systems appear alive, responsive, knowledgeable, and, perhaps most importantly, impartial. In a world of eroded trust, this feels refreshing. Users often treat AI outputs as neutral, objective, and infallible. As a result, large language models such as ChatGPT, Grok, and Gemini have effectively become the digital public's latest fact-checkers, responding instantly in confident, well-structured paragraphs that appear authoritative in a way journalists or fact-checkers rarely can.

