Faculty Members React to ChatGPT and AI Software


Credit: Katherine Merisotis ’23

On Nov. 30, 2022, OpenAI announced its new product, ChatGPT, which sparked debate among students, educators, parents and the New York Times about the future of education. ChatGPT is a novel chatbot trained to interact with users and respond to their questions. 

According to its developers, ChatGPT is trained to request more information when a prompt is unclear, admit shortcomings in its answers, correct biased premises and reject inappropriate requests. The tool is remarkably versatile: it can generate poetry, song lyrics and short essays, and summarize information, among many other tasks. In an academic context, some colleges are worried about its potential to be used to write essays and papers.  

However, the chatbot has a number of shortcomings. According to the developers, ChatGPT sometimes writes misleading or incorrect responses, may overuse certain phrases or be excessively wordy, and may answer ambiguous questions incorrectly. One of the most discussed issues is the chatbot’s potential to produce biased or harmful content. According to Bloomberg News, this is because AI like ChatGPT relies on statistical associations among words and phrases, which can result in the reproduction of biased ideas. To combat this issue, the developers use OpenAI’s Moderation API to block certain kinds of requests. 

According to its mission statement, OpenAI is an artificial intelligence research and development company aiming to “ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” Previously, it has released products like DALL·E 2, an AI that creates images and art from users’ written descriptions, and Codex, a system that translates natural language into code. 

This technological development has not been missed by Bates faculty. The faculty met on Jan. 23 for a preliminary meeting to hear concerns and questions from professors about ChatGPT. Associate Dean of Faculty Donald Dearborn said that faculty are just beginning to “grapple individually with what this means for how they teach and how they do their work.”

According to Dearborn, faculty reaction is mixed. While there is concern about academic integrity, many are optimistic about its potential for use in the classroom. 

Professor of Psychology Todd Kahan has found ChatGPT useful as a preliminary tool to draft JavaScript code. “The code it generated needed some editing, but it helped me get started and if students encounter similar roadblocks in my lab-based classes, I wouldn’t have problems if they wanted to use this technology to generate computer code,” Kahan said.

That said, he’s found that “while [ChatGPT] can generate grammatically correct prose, the content is often repetitive and uninteresting. It’s possible that this aspect of ChatGPT will improve over time, but this will only make it more important to critically evaluate the content generated, as it may not always be accurate.”

This finding was echoed by Professor of Philosophy Susan Stark, whose personal experiments with ChatGPT have returned basic discussions of topics that are oversimplified and contain “very little meaningful analysis.” 

“Like a computer or a calculator, ChatGPT can be used as a tool to overcome writer’s block, to help brainstorm ideas or approaches to a project,” Stark said. “If it is used, it might be used early in a project, where the student then takes up the bot’s suggestions and decides what to do with them in their own writing, speaking, and thinking.”

Dean Dearborn doesn’t expect that Bates will be “prescriptive institutionally” about the use of ChatGPT in the classroom. Instead, he anticipates a wide range of approaches. 

“At one end, there’s prohibiting the use of it and developing or using tools to detect the use of it like any other plagiarism or academic integrity issue,” Dearborn said. “At the other end [there is] developing instructive ways to use ChatGPT as a learning tool.” 

Regardless of approach, Stark believes that substituting AI for students’ own work will not give them the benefits that come from learning to write. 

“An important part of both high school education, and a Bates education, is to promote students learning to read empathetically and critically, write and speak clearly, persuasively, and effectively, and think well,” Stark said. “A chat bot cannot think for anyone else; it is not even clear that the bots can think for themselves. So I believe that education will always be needed so that students can read, write, speak, and think well.”

According to Dearborn, it will be important to convey these benefits, along with other learning outcomes, to students. 

“The Center for Inclusive Teaching and Learning is a great resource for thinking about how and why assignments are structured in particular ways, and what student outcomes are best fostered by different approaches,” he said.

The concern about academic integrity has held significant space in conversations about the use of ChatGPT in higher education. In early January, a senior at Princeton University created GPTZero, a tool that can be used to identify when a text was written by ChatGPT. The tool bases its analysis on the “perplexity” and “burstiness” of the text. Essentially, if a text varies widely in sentence length and complexity, and GPTZero is “perplexed” by the text (that is, finds it hard to predict), it is unlikely to be AI-generated. 
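For readers curious about the “burstiness” idea, the sketch below is a toy illustration, not GPTZero’s actual method: it uses the standard deviation of sentence lengths as a crude stand-in for variation, so uniformly sized sentences (more machine-like) score lower than varied ones.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Crude proxy for 'burstiness': how much sentence length varies.

    Splits the text into sentences and returns the standard deviation
    of their word counts. This is an illustrative simplification only.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Uniform sentence lengths score lower than varied ones.
flat = "The cat sat down. The dog sat down. The bird sat down."
varied = "Wait. The storm rolled in fast over the bay that night. We ran."
print(burstiness(flat) < burstiness(varied))  # True
```

A real detector would also need the “perplexity” half: a language model scoring how predictable each word is, which is well beyond this sketch.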

Even with all of the limitations of ChatGPT, Kahan concedes that there may be take-home assignments where it could be used. “But, I create these assignments to help students learn the course material so using ChatGPT on low-stakes assignments might later hinder performance on in-class quizzes and exams,” he said.

According to Dearborn, as with any tool that can be used to cheat, talking with students about what is right and wrong is essential to preventing the inclination to cheat in the first place. 

“We’re in the business of educating a whole person,” he said. “And, for students, for young people, that includes supporting them as they continue to grow about who they are as people, and that includes their sense of moral compass.”


The following essay was generated using the artificial intelligence chatbot ChatGPT. This essay was not edited for clarity or length. To generate this essay I asked the chatbot “how can college professors best combat the use of ChatGPT for cheating? 300 words.” This was the output: 

College professors can take a number of steps to combat the use of ChatGPT for cheating in their classrooms. One of the most effective ways to do this is by using plagiarism detection software. These tools can scan student work for signs of plagiarism, including text that has been generated by ChatGPT. By regularly using plagiarism detection software, professors can quickly identify and address instances of cheating.

Another way to combat the use of ChatGPT for cheating is by implementing more rigorous and varied assessment methods. For example, instead of relying solely on written essays, professors can incorporate other types of assessments, such as oral presentations, group projects, or problem-solving exercises. These types of assessments can be more difficult to cheat on and can provide a more comprehensive understanding of a student’s knowledge and abilities.

Additionally, professors can also encourage critical thinking and originality by providing clear instructions, guidelines and rubrics for the assignments and exams. Having clear instructions and guidelines can make it harder for students to cheat, as they will have a clear understanding of what is expected of them. Professors can also require students to include citations and references in their work, which can help to identify plagiarism.

Another approach professors can use is to establish a culture of academic integrity within their classrooms. This can involve setting clear expectations and consequences for cheating, encouraging open communication between students and professors, and promoting a sense of community and responsibility among students.

In addition to these steps, professors can also stay informed about the latest technologies and techniques used for cheating, such as ChatGPT and other AI-powered tools. By staying informed and aware of these technologies, professors can be better equipped to detect and prevent cheating in their classrooms.

Finally, professors can work with their institutions to develop policies and procedures for addressing cheating, including the use of ChatGPT. This can include working with academic integrity officers, technology support staff, and other experts to develop strategies for detecting and preventing cheating.

In conclusion, preventing cheating in college classrooms is essential for maintaining academic integrity. Professors can take a variety of steps to combat the use of ChatGPT for cheating, including using plagiarism detection software, implementing varied assessment methods, promoting critical thinking, fostering a culture of academic integrity, staying informed about cheating technologies, and working with their institutions to develop policies and procedures for addressing cheating.