Watching ChatGPT in action is mesmerizing: the cursor moves forward without pause or backspace, producing essays that would get you a good grade in many college classes.
The debate around generative AI in higher education has understandably focused on the risk that students will use it to cheat: the technology makes it all too easy to take shortcuts that undermine learning.
This renewed concern about academic integrity is a welcome development. But the prevailing definition of academic integrity as a bulwark against classroom chaos is too limited. Academic integrity policies are too often built around "what not to do," such as cheating, fabricating, plagiarizing, or collaborating without permission. This approach fixates on a set of rules rather than on instruction that teaches students to approach assignments honestly and sincerely.
Discussions about AI deserve a deeper and more critical understanding of academic integrity. The impulse to portray academic integrity as something that must be defended gives too much power to the trope of dishonesty. Academic integrity is not about closing the door to unwanted intruders and protecting what’s inside. Instead, especially in the era of generative AI, academic integrity must be seen as something we consistently aspire to. Acting with academic integrity means asking questions, trusting your instincts, and taking risks. It takes courage.
Academic integrity, as defined by the International Center for Academic Integrity, is a commitment to six core values (honesty, trust, fairness, respect, responsibility, and courage) that enable the academic community to put ideals into action. In the classroom, academic integrity often comes down to attribution: getting students to do their own work and not claim someone else's work as their own.
Long before ChatGPT, many faculty had put up guardrails around academic honesty, aiming to prevent students from plagiarizing others' ideas and punishing them if they crossed the line. Today, faculty deploy detection tools to sniff out AI-written text, but the technology has proven unreliable and biased. The measures put in place to protect academic honesty have created an adversarial relationship between faculty and students. We see nothing wrong with faculty not wanting students to claim others' ideas as their own. The challenge, then, is to create a classroom culture that encourages honesty rather than merely policing dishonesty.
College campuses should be places where ideas are generated and exchanged for the common good. Integrity reflects the character of teachers and students and the impact their knowledge has on real lives, and academic integrity shapes how we live out that ideal within our community. Treating academic honesty as a teaching opportunity can support students rather than punish them, and can foster collaborative rather than adversarial relationships between teachers and students.
Academic integrity in this new AI era starts with defining critical AI literacy. Instructors need to help students understand what generative AI is and how, why, and when (or when not) to use it. Students need to be skeptical and understand the potential harms associated with AI. In the tradition of critical pedagogy (teaching methods that encourage students to think critically and question the information they receive), students need to learn how issues of democracy, social justice, and power are intertwined with the use and deployment of AI. Instead of interrogating students’ use of AI, let’s interrogate the AI and its creators.
These new forms of technological literacy should reflect the six values of academic integrity. These values are not neutral, so faculty should also be sensitive to the racial, ethnic, and community values that students bring to class. Faculty should name these values at the beginning of each semester and use them as a rubric for coursework. If students want to use AI to write their papers, they should be honest about it and be held accountable to the values the class has established. As bell hooks suggests, a proactive pedagogy, in which decisions rest on underlying values and objectives rather than a rigid set of rules, can guide uncomfortable conversations with students about inappropriate uses of generative AI.
Instructors need to create classroom environments that encourage critical inquiry. Students may violate norms of academic honesty because they don't have time to complete an assignment, because they don't yet fully understand the material, or because they fear failure. But it's okay to fail; success often grows out of learning from failure. Instructors should design authentic, meaningful assignments that make room for failure and build in flexibility.
When classroom activities involve generative AI tools, students must show evidence of their own work and give appropriate credit to the creators of the work they draw on. Engaging honestly and fairly with others and applying rules and policies consistently demonstrate fairness. Standing up for one's beliefs, taking risks, and being willing to fail demonstrate courage.
Whether or not students use generative AI, the ultimate goal is the same: faculty should prepare scholars who generate new knowledge. The purpose of academia is for scholars to generate new ideas and let those ideas shine with the tools available to them. Faculty who actively embrace the values of academic integrity can deploy AI in human-centered, collaborative ways, opening the door to new knowledge, new scholarship, and new ways of solving problems honestly, truthfully, and fairly.
Antonio Byrd is an assistant professor of English at the University of Missouri-Kansas City. Sean Michael Morris is vice president of academic affairs at the online learning company Course Hero.
