“One of the dangers of ChatGPT and similar AIs is that, for now, they are wildly inaccurate when it comes to specific details”

Saul Costa:

They are constantly “hallucinating” alternative versions of reality and then presenting them to the user in a very convincing manner. It is an unfortunate limitation, but one that can be mitigated through careful use of the AIs.

Using these AIs effectively in educational settings depends largely on the learner’s ability to know which statements to trust and which to validate. In my experience, all quantifiable data should be regarded as inaccurate by default. Names should be verified when they are a crucial part of the narrative being explored. Concepts are the most trustworthy because they stem directly from what the AIs do best: finding the connections between things.

As odd as it may sound, in the experience I am about to describe, the inaccurate details ChatGPT provided do not matter. I intended to use the AI to learn how to approach answering my question by observing how it did so, not to get concrete answers. I disregarded the bulk of the details because they are simply stand-ins to be verified and updated later, when I go to apply what I have learned.

Here is how it went.