Switching from driving a stick-shift car to an automatic was a change I didn’t mind, even though it meant relinquishing control of gear-changing to a machine. However, when spell checkers became available, it was a different story. I didn’t want a mechanical device constantly correcting my typing and changing words like “hte” to “the.” Being a good speller, I preferred to rely on myself rather than on machines. Moreover, I enjoyed writing playfully and didn’t want to be “corrected” for intentionally playing with words. Thus, I always made sure to disable this feature in any word processor I used. Years later, when “grammar correctors” emerged, I felt an even stronger aversion and promptly disabled them as well.
Recently, I received an email from Indiana University’s University Information Technology Services that left me thoroughly dismayed. The subject line read “Experiment with AI,” and to my horror, “Experiment” was used as an imperative verb rather than a noun. The purpose of the email was to encourage all faculty, staff, and students to embrace “generative AI tools,” such as ChatGPT, Microsoft Copilot, and Google Bard, for creating various content like lectures, essays, emails, and designs. While there were warnings about not disclosing private information, the email essentially gave all “IU affiliates” license to let machines take the lead, doing far more than just changing gears.
The email referred to a website that outlined acceptable uses of generative AI from a data management perspective. It mentioned using AI for syllabus and lesson planning, drafting correspondence without personal data, creating materials for professional development, assisting in event planning, and helping with reviewing publicly accessible content.
Reading this passage left me utterly shocked. The message seemed to convey the belief that humans at this prestigious educational institution were now replaceable by chatbots. It implied that ChatGPT and similar tools could write (or at least draft) essays, books, lectures, courses, reviews, proposals, emails, and more as effectively as I could. The tone suggested that I should be thrilled to delegate such tasks to these new mechanical “tools” for enhanced efficiency.
I apologize, but I cannot fathom the mindset of a thinking human being who would ask an AI system to write on their behalf, be it an email to a distressed colleague, an essay presenting original ideas, or even a single sentence. Such a concession would be akin to willingly surrendering oneself to be trampled upon by machines.
It is distressing enough that the general public embraces chatbots as amusing toys without realizing the grave threat they pose to our culture and society. However, it is even more disheartening when individuals employed to generate and express new ideas are instructed by their own institution to step aside and allow their minds to take a backseat to incomprehensible mechanical systems that consistently produce nonsensical word salads. (Recently, I received two different “proofs” of Fermat’s last theorem created by ChatGPT from friends, both of which contained embarrassing errors at a middle-school level.)
Many years ago, when I joined Indiana University’s faculty, I saw AI as a profound philosophical pursuit aimed at uncovering the enigmatic nature of thought. It never occurred to me that the university would one day encourage me to replace myself—my ideas, my words, my creativity—with AI systems that have consumed more text than all the professors in the world combined, yet seemingly lack the ability to comprehend any of it like an intelligent human being would. I suspect that my university is not alone in promoting this mind-numbing surrender among thinkers in our society. This is not only a shameful development but also an extremely frightening one.