But some researchers have warned that generative A.I. tools are so new in schools that there is little evidence of concrete educational benefit — and significant concern about risk.
Chatbots can produce plausible-sounding misinformation, which could mislead students. A recent study by law school professors found that three popular A.I. tools made “significant” errors summarizing a law casebook and posed an “unacceptable risk of harm” to learning.
Outsourcing tasks like research and writing to A.I. chatbots may also hinder critical thinking, a recent study from Microsoft and Carnegie Mellon University found.
“I do think that there is a risk,” said Brad Smith, the president of Microsoft, noting that he frequently cited the critical-thinking study to employees. He added that more rigorous academic research on the effects of generative A.I. was needed. “The lesson of social media is don’t dismiss problems or concerns.”