INFORMATION TECHNOLOGY

New tool recognizes AI “ghostwriting”


The debut of the artificial intelligence (AI) chatbot ChatGPT has generated a lot of buzz around the world because of its powerful text-processing and conversational capabilities. Still, there are many clues that can help people tell machine-generated text from human writing.

American scientists have developed a tool that can identify AI-generated academic texts with more than 99% accuracy. The research was recently published in Cell Reports Physical Science.

“We strive to create an easy-to-use method that solves the problem of detecting AI writing. In this way, even high school students could build an AI detector for text,” said lead author Heather Desaire, a professor at the University of Kansas.

“At the moment, there are some obvious problems with AI writing,” Desaire said. “One of the biggest is that it assembles text from many sources without any kind of accuracy check.”

Although many AI text detectors are available online and perform quite well, they are not built specifically for academic writing. To fill this gap, the team set out to build a detection tool with better performance on academic prose. They focused on opinion articles: summaries written by scientists on specific research topics. The team selected 64 topics and created 128 ChatGPT-generated articles on those same topics to train the model. When they compared the articles, they found one telltale indicator of AI writing: predictability.
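The article does not spell out how this predictability is measured, but a minimal sketch of one way to quantify it, assuming sentence-length uniformity as a stand-in (the function names and the naive sentence splitter below are illustrative, not the authors' code), might look like this:

```python
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Split text into sentences (naively, on ., ! and ?) and count the words in each."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def length_variability(text: str) -> float:
    """Standard deviation of sentence lengths, a crude proxy for 'predictability'.
    Lower values mean sentences of more uniform length, i.e. more predictable structure."""
    lengths = sentence_lengths(text)
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

# Example usage: compare any human-written excerpt with a ChatGPT-generated one.
human_text = "..."  # placeholder: paste a human-written paragraph here
ai_text = "..."     # placeholder: paste a ChatGPT-generated paragraph here
print(length_variability(human_text), length_variability(ai_text))
```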

In contrast to artificial intelligence, human writing has a more complex paragraph structure: the number of sentences and the total word count vary from paragraph to paragraph, and sentence lengths are also uneven. Preferences in punctuation and vocabulary offer further clues. For example, scientists tend to use words like “however,” “but,” and “although,” while ChatGPT often uses “others” and “researchers” in its writing. Ultimately, the team settled on 20 such metrics for the new model.
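The full list of 20 metrics is not given in the article. The hedged sketch below shows how a handful of style features of this kind could be assembled into a vector and fed to an off-the-shelf classifier; the feature choices, the scikit-learn logistic-regression model, and the placeholder data are assumptions for illustration, not the study's actual pipeline:

```python
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative subset of the marker words mentioned in the article.
MARKER_WORDS = ["however", "but", "although", "others", "researchers"]

def style_features(text: str) -> list[float]:
    """Turn a document into a small stylometric feature vector:
    spread of paragraph and sentence lengths, punctuation rates, marker-word rates."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.lower().split()
    n_words = max(len(words), 1)

    para_lengths = [len(p.split()) for p in paragraphs] or [0]
    sent_lengths = [len(s.split()) for s in sentences] or [0]

    features = [
        float(np.std(para_lengths)),   # variability of paragraph length
        float(np.std(sent_lengths)),   # variability of sentence length
        text.count(";") / n_words,     # semicolon rate
        text.count("(") / n_words,     # parenthesis rate
    ]
    features += [words.count(w) / n_words for w in MARKER_WORDS]
    return features

# Placeholder training data: in practice, the labeled human and ChatGPT articles.
texts = ["A human-written perspective article...", "A ChatGPT-generated article..."]
labels = [1, 0]  # 1 = human, 0 = AI

X = np.array([style_features(t) for t in texts])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))  # predicted class for each document
```

A toy two-document example like this obviously cannot reproduce the reported accuracy; the point is only the general shape of a features-plus-classifier approach.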

In testing, the model was 100% accurate at distinguishing human from AI authors across entire opinion articles, and 92% accurate for individual paragraphs, far surpassing the AI text detectors already on the market.

Next, the researchers plan to determine the model's scope of application. They want to test it on larger datasets and on other types of academic writing, and as AI chatbots continue to evolve, they also want to know whether the model can keep up.

“When people hear about this study, they probably first think, ‘Can I use it to tell whether my student actually wrote a paper?’” Desaire said. Although the model is highly skilled at distinguishing AI from human authors, she says it was not designed to help educators detect AI-generated student papers. Still, she notes that others can easily replicate the approach to build models for their own purposes. (Source: China Science News, Feng Lifei)


American scientists have designed a tool to identify clues of ChatGPT “ghostwriting”. Photo by Heather Desaire

Related paper information: http://doi.org/10.1016/j.xcrp.2023.101426


