ChatGPT Is Coming For Faculty Work

Ben Chrisinger:

Almost immediately after OpenAI released ChatGPT in late November, people began wondering what it would mean for teaching and learning. A widely read piece in The Atlantic, offering one of the first looks at the tool’s ability to produce high-quality writing, concluded that it would kill the student essay. Since then, academics everywhere have done their own experimenting with the technology and weighed in on what to do about it. Some have banned students from using it, while others have offered tips on how to create essay assignments that are AI-proof. Many have suggested that we embrace the technology and incorporate it into the classroom.

While we’ve been busy worrying about what ChatGPT could mean for students, we haven’t devoted nearly as much attention to what it could mean for academics themselves. And it could mean a lot. Critically, academics disagree on exactly how AI can and should be used. And with the rapidly improving technology at our doorstep, we have little time to deliberate.

Already, some researchers are using the technology. Even among the small sample of my own colleagues, I’ve learned that it is being used for daily tasks such as translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture.

Even this limited use is complicated. Different audiences — journal editors, grant panels, conference attendees, students — will have different expectations about originality for particular tasks. For example, while peer reviewers might accept translated statistical code, students might balk at AI-generated lecture slides.

But it’s in the realm of academic writing and research where ethical debates about transparency and fairness really come into play.