thinking about ai
I’m still not sure how to think about AI. While some aspects of it seem useful, I’m not sure I care about them. The few times I’ve tried it out on topics of interest to me, using both ChatGPT and Perplexity, it’s failed.
And there have also been failures on tests that I didn’t mean to run. Last week, during the Illinois-Northwestern football game, my sons and I were wondering whether a Northwestern receiver, Calvin Johnson, was related to the former Detroit Lions receiver of the same name (though he’s probably better remembered by his nickname, Megatron). My older son pulled out his phone and Googled. The Gemini answer, which appeared above the links, said he was Megatron’s son, but the very first line of one of the top links said
He may not be related to Megatron, but Northwestern will welcome this Calvin Johnson to Evanston with open arms.
More disturbing than obvious outright errors like that is the possibility that using AI will affect our ability to judge its value. I’m thinking of something that came up in a recent episode of The Talk Show, the one with Joanna Stern. About 53 minutes into the show, they start talking about how they both asked ChatGPT to make an image of what it thinks their life looks like. Joanna tried it twice, and you can see the images by following links in the show notes. Prominent in both images were representations of scouting.