You Have No Idea How Much We’re Using ChatGPT.

Owen Kichizo Terry

Look at any high school or college academic integrity policy, and you’ll find the same message: submit work that reflects your own thinking, or face discipline. A year ago, this was just about the most common-sense rule on Earth. Today, it’s laughably naive.

There’s a remarkable disconnect between how those with influence over education systems (teachers, professors, administrators) think students use generative AI on written work and how we actually use it. As a student, the assumption I’ve encountered among authority figures is that if an essay is written with the help of ChatGPT, there will be some sort of evidence: the software has a distinctive “voice,” it can’t make very complex arguments (yet), and there are programs that claim to detect AI output. This is a dangerous misconception. In reality, it’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own. Once this becomes clear, it follows that massive structural change is needed if our schools are going to keep training students to think critically.