
Why Teachers Should Talk to Students Before Accusing Them of Using AI to Cheat

When schools first encountered AI tools like ChatGPT and Gemini producing well-written essays and reports, one concern quickly took center stage: cheating. Some educators reacted by returning to traditional methods—requiring students to complete assignments with pen and paper. But is that really the best way forward?

Michael Rubin, principal of Uxbridge High School in Massachusetts, doesn't think so. He believes that students need to learn how to navigate AI responsibly, rather than avoiding it altogether.

"You might be given a car that can go 150 miles an hour, but that doesn't mean you should drive that fast. It's not about the risk of getting caught; it's about knowing how to use the technology appropriately," Rubin explained during an Education Week K-12 Essentials Forum on AI in schools.

AI as a Learning Tool, Not a Shortcut

AI shouldn't be used to replace student work, but it can serve as a helpful assistant, acting as a brainstorming partner or tutor—especially for students who don't have access to other support. Rubin shared a personal example: His daughter recently asked him for help with a history assignment.

"She has me to go to, and some kids don't. AI chatbots can sometimes be the great equalizer when it comes to academic support," Rubin said.

However, he emphasized that he didn't do the work for his daughter—and AI shouldn't do the work for students either.

A Better Approach to AI Detection

Uxbridge High School uses a tool that helps teachers understand how students created their assignments. It tracks document history, making it possible to see if a student copied and pasted large sections—potentially from an AI-generated source. If a teacher suspects a student has misused AI, they don't immediately resort to punishment. Instead, they have a conversation about the proper use of AI, sometimes allowing students to redo the assignment.

"It's not just about giving a zero and moving on," Rubin explained.

The Problem With AI Detection Tools

These discussions are especially important because AI plagiarism detection tools are often inaccurate, according to Amelia Vance, president of the Public Interest Privacy Center. Speaking at the same forum, Vance pointed out that AI detection tools have been shown to wrongly flag work, particularly for students of color and those whose first language is not English. Even tools like the one used at Uxbridge—while more effective than AI detectors that simply scan for AI-like writing—aren't foolproof.

"At this point, there isn't an AI tool that can accurately detect when writing is crafted by generative AI," Vance said. "We know that several companies claim to do this, but it doesn't work."

Instead of assuming a detection tool is always right, educators should focus on understanding how students interact with their assignments.

Why Talking to Students Matters

Rubin's approach—discussing AI usage with students rather than immediately punishing them—is key to creating a fair and informed learning environment. If a student admits to using AI inappropriately, teachers should make it clear that it's not acceptable. But relying solely on AI detection tools for proof isn't the answer.

"Avoid ever assuming the machine is right," Vance advised.

By fostering conversations about ethical AI use, educators can guide students toward responsible and beneficial ways to incorporate AI into their learning—without letting it do the work for them.
