AI has been at the center of national conversation since the launch of ChatGPT in late 2022. The promise of large language models (LLMs) like ChatGPT became clear immediately: an almost frictionless path to increased productivity and innovation. Everyone from tech-savvy coders to my grandparents used it ravenously.
But LLMs are not without serious faults. At their heart lies a fundamental problem: their “knowledge” is only as good as their human-generated input. Or, more precisely, given that AI corporations have cannibalized the internet as we know it, large language models are only as good as the internet that raised them. Not unlike a child, who learns and absorbs lessons and values from the adults around them, LLMs are “learning” from the most craven, misinformed, inhumane “parent” on earth. Moreover, LLMs reprocess and regurgitate the same information until it is no longer factual or relevant. The threat of corporations like Palantir promising to scrub the internet for information on people of interest for the highest bidder, along with AI’s blatant failure to produce accurate answers to prompts, will shape how we interact with the digital world. But the most pressing issue for Batesies and students at every level of education is over-reliance on these products. For all the truly terrible ramifications of grades and our modern systems for putting value on students (an issue that deserves attention of its own), AI offers only a brief substitute for doing the work, at the cost of real engagement.
Students, faculty, administrators, and teachers across the country and around the world are all struggling to figure out how education can continue without the threat of AI cheating looming over their heads.
There is no uniform response to generative AI at schools like this one, nor should there be. The goal of a liberal arts education is to allow experimentation and discovery in the context of pursuing truth. But the path to truth is often long and treacherous, and AI seems to offer a shortcut. So, how should AI be used by Bates students?
I asked other first-year students how they use AI, and I got a variety of answers.
Peter Morris ‘29 told me he used AI to help him prepare for tests. “I think that it’s pretty good for practice problems.”
Jacob Sutherland ‘29 told me he has used AI in the past to summarize long readings in a pinch: “I used it a ton to summarize readings. Like, I’d get 20-40 page readings… Throw it in ChatGPT, read the summary.”
More interestingly, though, I was met with general disdain for the subject. When I asked my friends at a Commons table, they responded with disappointment; one said that AI has no place in higher education. Another said that it stifles creativity and should never be mixed with art or literature.
The Bates mission statement says, “Bates educates the whole person through creative and rigorous scholarship.”
Is the use of LLMs creative and rigorous? I would argue no. So does that mean AI is incompatible with Bates College?
The easy answer is yes: unless specifically instructed otherwise, we do not use AI at all. But that feels too simple and unrealistic.
Cornell University lays out questions for students to use when consulting LLMs. These questions address the harms and benefits of AI use and engage students in their own thinking about the work the LLMs have done for them. This, I think, is the best model. Instead of me, some random freshman, telling you how to use AI, the school asks you questions: questions that produce creative and rigorous thought and answers.
Who should write these questions? We should. A panel of students, faculty, staff, and IT specialists should draft a list of questions to help us examine our AI use. Now is the time to act as a wider community. Let’s set ourselves and future Bates students up for success in an increasingly AI-driven world.
