The increasing availability of AI chatbots is creating concerns for educators. Credit: SeventyFour/Getty
Thanks to the rapid development and evolution of artificial-intelligence (AI) chatbots, students can generate seemingly insightful writing with the click of a button. Although some academics blame these tools for the death of the college essay, a poll of Nature readers suggests that the resulting essays are still easy to flag, and it’s possible to amend existing policies and assignments to address their use.
Nature’s non-scientific, online questionnaire ran from 8 to 21 December and drew 293 self-selected responses, two-thirds of which came from North America and Europe. Through survey questions and open-ended answers, respondents described their first encounters with chatbot essays and what they are doing to address possible misconduct (see ‘Academics weigh in on AI’).
One of the most advanced and accessible chatbots, ChatGPT, was launched last November by OpenAI in San Francisco, California. ChatGPT can mimic natural conversations in response to prompts, including requests for essays or even queries about debugging computer code. The survey affirms that many professors are now considering how students might use — or misuse — AI to complete assignments. Only 20% of respondents have encountered this behaviour in their courses — or witnessed it at their universities — but roughly half expressed concern over the increasing proficiency of chatbots, and expect to come across AI-generated essays in the next year (see ‘Chatbot concerns’).
Royce Novak, a historian at the University of St. Thomas in St Paul, Minnesota, says that relying on chatbots to generate coursework “really only became viable for students this semester”, and that he has since received a handful of suspected AI-generated papers in his classes. But without a university-sanctioned method for flagging chatbot essays, he is unsure how to raise cases with his institution’s ethics committee.
New concerns
Only 10% of respondents said that they or their universities have modified existing policies to address AI chatbots. This could be because many institutions already have sufficiently broad codes of conduct to cover chatbot misuse alongside other forms of misconduct.
Jon May, a professor of psychology at the University of Plymouth, UK, typically runs assignments through the plagiarism-detection program Turnitin before handing them over to student graders. In the past month or so, he has started slipping ChatGPT-generated essays into the pile to test whether his existing methods can detect the fakes. At present, they’re fairly easy to pick out, May says, but he warns that “we’re in an arms race with automation” as AI platforms get better at mimicking human writing. A number of automated tools for detecting chatbot-generated content have been developed, including the GPT-2 Output Detector. Even so, in one preprint posted last month¹, chatbot-written abstracts fooled both humans and software: an online plagiarism checker missed all of the generated abstracts, and the GPT-2 Output Detector and human readers each missed about one-third of them.
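For readers curious about what such automated screening looks like in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the publicly released RoBERTa checkpoint behind the GPT-2 Output Detector. The sample text, the model identifier as hosted on Hugging Face and the way the score is reported are illustrative only, and a high “Fake” score is a statistical signal, not evidence of misconduct.

```python
# A minimal sketch of automated screening, assuming the Hugging Face
# `transformers` library and the public RoBERTa checkpoint behind the
# GPT-2 Output Detector (model identifier as listed on Hugging Face).
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

essay = "Paste the submitted essay text here."  # placeholder input

# truncation=True keeps long essays within the model's 512-token limit.
result = detector(essay, truncation=True)[0]

# This checkpoint labels text "Real" or "Fake" with a confidence score;
# treat a high "Fake" score as a prompt for a conversation, not proof.
print(f"{result['label']}: {result['score']:.2f}")
```

As the preprint’s results suggest, detectors of this kind can miss a substantial fraction of generated text, so their output is best used alongside, not instead of, human judgement.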