ChatGPT, an AI language model created by OpenAI, garnered over a million users within five days of its release in November 2022. However, its popularity among college students has led to concerns about a possible increase in academic dishonesty.
AI generators like ChatGPT are rapidly gaining popularity among college students because they are often free and produce work faster than the average student. The current version of the software draws on internet data collected through 2021 and can generate entire papers or answer in-depth questions in minutes.
Robert Colter, UW professor of Philosophy and Religious Studies, highlights the need for academic policies to guide the use of AI technologies and questions where the line should be drawn.
“The lines are easily blurred, but academic policies remain a guide for the changing times,” said Colter. “If we do allow technology, where does the line stop? Is Grammarly or even spellcheck in Word considered AI?”
“A lot of people want to talk about academic honesty policies. Some versions of UW cheating policies require that whatever you turn in be your own work; it would be a violation of that to use AI.”
While ChatGPT has quickly become one of the most well-known AI systems, multiple forms of artificial intelligence are available to the public today, including image and voice generators.
Many professors are frustrated with the rise in cheating since the introduction of AI tools that simulate human-like conversations. They worry that these tools might act as shortcuts and hinder students from learning the necessary skills for their fields.
“If I am putting out a product generated by my potential replacement [AI], you aren’t able to demonstrate your capabilities in your career,” said Jonathan Brant, UW professor of Civil and Architectural Engineering.
“The way I view all of those different technologies, while they are great tools, they also hinder you from learning,” said Brant.
Despite AI’s efficiency, some readers can still detect work not written by a human. In a side-by-side comparison of a paragraph written by a person and one generated by ChatGPT, two of the three professors interviewed correctly identified which was which. They attributed the difference to the software’s impersonal style of communication.
Because not all professors can distinguish generated papers from student writing, measures to detect cheating are being put in place. Universities across the United States, including the University of Wyoming, have formed committees to make leaders aware of these AI systems.
While AI may have potential future applications in education, some instructors stress the importance of students investing time and effort into learning for their long-term success.
Katie Li-Oakey, a professor of Chemical Engineering, believes that ChatGPT is not yet useful for solving complex problems, such as those within her discipline.
“I currently do not think ChatGPT will be useful to solve complex engineering problems. In my discipline, many problems need to be solved using empirical charts and equations. ChatGPT would need to have many charts and equations inputted before it could be useful for us at some level,” said Li-Oakey.