Google replied to the tweet directly, saying, “Bard is an early experiment based on Large Language Models and will make mistakes. It is not trained on Gmail data. -JQ”. Much of the follow-on media coverage ran with this response and dutifully “debunked” Bard’s claim that its training data included Gmail data. Few articles expressed skepticism about Google’s public denial, despite the fact that government agencies around the world have fined Google on numerous occasions specifically for making claims about its privacy practices that later proved misleading.

Given this context, the narrative that Bard’s claim was an open-and-shut case of AI hallucination is, at best, hasty and incomplete. A fuller investigation reveals (i) documented use of Gmail data in other Google AI models, which makes speculation about its use in Bard reasonable, and (ii) Google’s habitual use of artfully ambiguous language in its public representations about Bard’s data sources, language that never actually rules out the use of Gmail data in its training set.