The Arrival of AI College’s Second Year

When ChatGPT made its debut, it caused quite a stir among the faculty at SUNY Buffalo. Kelly Ahuna, the university’s director of academic integrity, received numerous frantic emails from professors. One English professor even contemplated retirement after watching an essay unfold on-screen that, despite its errors and weird transitions, still deserved a B-minus. The fear of undetectable AI plagiarism loomed over the campus, and Ahuna found herself guiding the faculty through their existential crisis regarding artificial intelligence.

The first year of AI college was marked by chaos and mistrust. Educational institutions, known for their slow pace, failed to provide clear guidance, leaving professors skeptical of grammatically proficient essays. Plagiarism detectors even flagged legitimate work as AI-generated. However, over the summer, some universities and colleges regrouped and decided to embrace AI at an institutional level, incorporating it into the curriculum and helping instructors adapt. Yet, the prevailing norm is still to leave individual educators to fend for themselves, with some believing they can ignore generative AI altogether.

Modernizing higher education is a monumental task. As someone who recently graduated from college, I experienced this resistance to technology firsthand. Before the pandemic, professors insisted on printed assignments, discouraging online submissions. Although ChatGPT was available for most of my senior year, the university administration sent out only one notice about it, urging faculty to understand its implications. Among students, however, ChatGPT was a topic of constant conversation. While I don’t know anyone who wrote an entire paper with it, people found other uses for the technology. Some sought practice-exam questions generated by ChatGPT, while others relied on it to explain complex philosophical concepts. It even provided advice on personal matters. Surprisingly, only one of my professors mentioned it, prohibiting its use for coding assignments and relying on the honor system.

As the second year of AI college approaches, some institutions are adopting a less technophobic approach. Public universities and community colleges with diverse student bodies are leading the way, viewing education as a means of social mobility. Arizona State University, for example, employs AI bots to offer feedback in an introductory writing course for remote learners. The University of Tennessee at Knoxville formed a task force to suggest ways to incorporate generative AI into classrooms, while the University of Florida launched a multimillion-dollar AI initiative. These schools aim to meet employers’ demand for AI-savvy graduates, reasoning that students should learn how to use AI rather than fear being replaced by it.

On the other hand, some universities lack a clear institutional stance on AI. Administrators are cautious about implementing policies that may quickly become outdated. This approach preserves academic autonomy and encourages experimentation, but it also leaves instructors, many of whom are still grappling with basic technology skills, at the forefront of a rapidly advancing field. In a poll by Educause, 40 percent of respondents weren’t aware of anyone taking responsibility for decisions regarding generative AI. University officials fear making unpopular choices. As Bryan Alexander of Georgetown University puts it, “A president or provost is thinking, Should I jump on this only to have it become the most unpopular thing in the world?”

Nevertheless, some academics are eager to integrate this alien technology into their classrooms. Ted Underwood from the University of Illinois believes that all students should learn the basics of AI ethics, comparing it to understanding democratic principles. Others see AI as a tool to make instruction more engaging. For instance, the University of Utah’s introductory writing course asks students to compare sonnets written by famous poets and ChatGPT, finding value in using an AI bot to generate purposely flawed poems.

However, there is a faction within academia that views generative AI as a threat. In an era of large language models, written assignments no longer serve as reliable indicators of a student’s understanding. Weekly reading responses and discussion posts lose their significance. Some instructors attempt countermeasures, such as using eye-tracking technology to detect potential cheating. Others hope to preserve the traditional educational landscape by outright prohibiting the use of AI tools. Bryn Mawr College, for instance, treats any use of AI tools as plagiarism. Philosophy professor Darren Hick of Furman University refuses to abandon take-home essays, arguing that in-person exams don’t allow enough time for contemplation and deep engagement with philosophical theories.

When discussing generative AI, many professors and administrators draw comparisons to previous waves of technological change. Wikipedia was initially plagued with inaccuracies, but it remains a valuable resource. Calculators didn’t eradicate the need to learn long division, just as microwaves didn’t replace gourmet meals. The most common analogy, though, centers on the internet. The web didn’t instantly bring about the nightmare scenarios that were predicted. Similarly, Charles Isbell of Georgia Tech is unconcerned about AI-enabled cheating, since errors in a ChatGPT-written essay would expose the deception. Moreover, the process of fact-checking AI-generated content can deepen students’ understanding of the subject matter. Yet just as the internet developed in unexpected ways, AI is likely to disrupt fundamental aspects of higher education. As Isbell notes, “It’s perfectly reasonable to hold in your head both thoughts. It’s not going to be the big, destructive force that we think it’s going to be anytime soon. Also, higher education will be completely unrecognizable in 15 years because of this technology. We just don’t really know how.”
