A strange phenomenon is occurring in our classrooms. Artificial intelligence (AI) is on the syllabus, yet conversations about AI in the classroom often go something like this: “Please don’t use it. At least try not to use it. If you do, please cite it.”
There is quite a bit of debate (well-founded, I think) on this topic. But if the rapid development of AI and its rapid penetration into so many aspects of our lives suggests anything, it’s that we need to rethink these conversations.
Full disclosure: I’m a skeptic and a bit of a dinosaur when it comes to new technology. So I’m sympathetic and, frankly, not surprised that we focus primarily on the pitfalls of AI in higher education.
There are concerns that AI will hinder the very skills higher education is designed to develop. And given that most students using AI ask ChatGPT to write their essays, answer quiz questions, or create discussion posts, this concern is almost certainly warranted.
But when we admonish students this way, we are really just persuading them not to cheat, since most other forms of cheating hinder them just as much. And realistically, I’m not sure our energy is best spent trying to convince students not to cheat.
Another concern is that AI isn’t very good. These tools cite false sources, “hallucinate” information, and are not particularly insightful by human standards. On a good day, they may show you things you hadn’t seen before, but they won’t come up with original ideas.
But like any tool or technology, AI can be good or bad depending on what you ask of it. We need to use AI skillfully, thoughtfully, and ethically. And in the classroom, that starts with honest and thorough conversations about its educational potential.
For example, large language models (LLMs) can serve as the reader students don’t always have access to while writing. They can provide feedback at varying levels of specificity and point out potentially confusing passages in a draft, an important consideration that students often overlook.
And really, that’s just scratching the surface. We already know that AI can be used to generate practice tests, teach English to speakers of other languages, and assist with reading comprehension. Further developments will surely come; it’s only a matter of time.
Still, I can imagine some people saying there are dangers in relying too heavily on AI cognitively. But if we’re all at risk of losing our minds to AI, it will be because we avoided it like the plague until it was too late to develop the habit of using it well.
How can we incorporate AI into our assignments? How can it help us teach? How can it help us learn? What does it tell us about human intellectual strengths and weaknesses? In my experience, these questions are rarely asked in classroom discussions about AI.
But I’m ready to ask them. You should too.
Riley Martinez is a copy editor at Beacon. You can contact him at: martinri24@up.edu.
Is there anything you want to say about this? We are committed to showcasing a variety of perspectives and would love to hear from you. Have your say at The Beacon.
