I used to teach students, but now I catch ChatGPT cheats.

Troy Jollimore:

Less sophisticated AI-generated papers are easy to spot—if, again, you know what to look for. In terms of both syntax and tone, they all sound roughly the same. Very neutral, very bland. Regardless of the nature of the question, they address the issues at hand in the same highly methodical manner, first developing a systematic framework, then balancing competing considerations against each other to arrive at an overall judgment. Sometimes they will provide quotations, giving page numbers that, as often as not, do not seem to correspond to anything in the actual world. These, again, are the ones that are easiest to pick out. I have little doubt that there are others that are more sophisticated, and that some get past me.

To judge by the number of papers I read last semester that were clearly AI generated, a lot of students are enthusiastic about this latest innovation. It turns out, too, this enthusiasm is hardly dampened by, say, a clear statement in one’s syllabus prohibiting the use of AI. Or by frequent reminders of this policy, accompanied by heartfelt pleas that students author the work they submit. 

North American college instructors are accustomed to adversity. Our society has always manifested strongly anti-intellectual tendencies. (In the past, this was perhaps more true in the US than in Canada; I’m not sure how much, if at all, this remains the case.) One mainstream view holds that practical intelligence, or street smarts, is the more valuable kind. Those who emphasize theory, study, and scholarship are often viewed as marooned in ivory towers, with expertise that is mostly, if not entirely, spurious.

