A recent study published in Scientific Reports suggests that ChatGPT's performance in responding to examination questions across disciplines such as computer science, politics, engineering, and psychology may match, or even surpass, that of the average university student. The study also found that nearly 75% of the students surveyed were willing to use ChatGPT for assistance with their assignments, even though many educators regard doing so as plagiarism.
To compare ChatGPT's performance on university assessments with that of students, Talal Rahwan and Yasir Zaki invited faculty members teaching various courses at New York University Abu Dhabi (NYUAD) to each provide ten assessment questions they had set, along with three student submissions per question.
ChatGPT was then asked to produce three sets of answers to the ten questions, which were assessed alongside the student-written answers by three graders who were unaware of each answer's source. The ChatGPT-generated answers achieved an average grade similar to or higher than the students' in 9 of 32 courses. Only in mathematics and economics courses did students consistently outperform ChatGPT. ChatGPT's advantage was most marked in the 'Introduction to Public Policy' course, where its average grade was 9.56 compared with 4.39 for students.