When AI Says Go Ahead …

October 06, 2025

In FEATURES

A significant concern with AI in education is its potential to compromise academic integrity. But the solution isn’t to ban AI. It’s to ensure educators have a voice in how it’s designed for classroom use.

Here’s what happened.

Doing their due diligence on a potential academic integrity issue, a postsecondary faculty member and an administrator set out to replicate the suspected use of AI. Earlier that week, a student had submitted a Java midterm exam that raised suspicion. The assignment had explicitly stated that the use of AI or any external tools to create the code was strictly forbidden. Rather than speculating, they went straight to the suspected source.

The instructor and administrator told ChatGPT that they were working on an exam and asked it to create a simple, entry-level Java program that would address the topics covered in class. ChatGPT provided a clean, fully functional solution without hesitation. From there, they asked the AI to make the code less perfect so that it would not raise suspicion.

ChatGPT complied, offering a version with minor inefficiencies and beginner-style logic, calling it “B-grade” code. This version was intentionally designed to avoid optimization, making it appear less like an AI-generated product.

Then they escalated the request further by pasting in the actual instructions for a Java midterm exam, which included an explicit statement that AI or any external help was strictly forbidden.

Despite this clear warning, ChatGPT proceeded to write complete Java code that fulfilled all the requirements of the assignment. It adjusted the output formatting and even added common beginner errors. In that moment, we realized that AI was not acting like a neutral tool. It was acting like an accomplice, or even an enabler.

A call to action

This experience revealed a significant gap, not just in how students use AI, but in how AI responds to them. AI is not increasing the frequency of cheating (Spector, 2023); rather, it serves as one more tool for students who see it as a shortcut through learning material and assignments. Yet educators are expected to manage AI ethically and responsibly, and the tool, as it stands, lacks a moral compass. This article is not a call to ban AI. It is a call to refine it. Three refinements stand out:

  • Teacher Mode: AI should detect academic environments and pivot into a supportive, instructional stance. As ChatGPT suggested, this includes questions like, “Can you explain what you have tried so far?” and “Would you like to go through this step by step, so you understand the solution?”
  • Policy-Aware Filters: If an assignment includes phrases like “AI use is strictly prohibited,” the system should immediately respond with, “This assignment explicitly prohibits AI use. I can help you review concepts but cannot provide a direct solution.”
  • Discipline-Specific Scaffolding: AI should not treat a math equation, a Java assignment, and a history essay the same way. For coding tasks, it could prompt students to explain the logic. For essays, it could guide students through outlining and revision.

These solutions, if implemented, could transform AI from an unintentional accomplice into a responsible co-educator. The answer lies in raising the barrier to cheating, not in raising the alarm. Simply rephrasing the task or adding minor friction points, such as asking clarifying questions, could deter a would-be cheater.
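To make the policy-aware filter idea concrete, even a crude keyword check could supply the "minor friction" described above before any heavier classification runs. Here is a minimal sketch in Python; the function names, pattern lists, and canned response are our own illustration, not any vendor's actual API:

```python
import re

# Illustrative prohibition vocabulary; a production system would need a
# far richer classifier than keyword matching.
TOOL_PATTERN = re.compile(r"\bai\b|\bexternal (tools?|help)\b")
PROHIBITION_PATTERN = re.compile(r"\b(prohibited|forbidden|not (allowed|permitted))\b")

def assignment_prohibits_ai(assignment_text: str) -> bool:
    """Crude check: the prompt mentions AI (or external tools)
    alongside a prohibition word."""
    text = assignment_text.lower()
    return bool(TOOL_PATTERN.search(text)) and bool(PROHIBITION_PATTERN.search(text))

def policy_aware_reply(assignment_text: str) -> str:
    """Refuse and redirect when the assignment forbids AI;
    otherwise allow normal assistance to proceed."""
    if assignment_prohibits_ai(assignment_text):
        return ("This assignment explicitly prohibits AI use. I can help you "
                "review concepts but cannot provide a direct solution.")
    return "OK to assist."

# The midterm instructions described earlier would trip the filter.
exam = ("Java midterm. The use of AI or any external tools "
        "to create the code is strictly forbidden.")
print(policy_aware_reply(exam))
```

Even this toy version shows the design point: the check runs on the pasted assignment text itself, so the explicit prohibition a student copies into the chat becomes the very signal that triggers the refusal.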

Reclaiming the narrative

CTE educators have always stood at the intersection of hands-on learning and real-world application. Our field is no stranger to disruptive technologies, but we have never ceded our responsibility to guide, contextualize and teach. AI is not going away. However, if we do not take ownership of its role in our classrooms, it will be defined for us by students, by technologists, or worse, by its algorithmic eagerness to please. We propose that OpenAI and other developers collaborate with educators to co-design systems that uphold academic integrity while still letting students benefit from AI as a learning tool. This enhancement is a market opportunity, yes, but more importantly, it is a moral one.


Brandon Hensley, Ph.D., is the dean of CTE at McDowell Technical Community College.

Michelle E. Bartlett, Ph.D., is an assistant professor in community college leadership at Old Dominion University.

Bill Teale is a web designer and a member of the web technologies faculty at McDowell Technical Community College.

Jay Perry, M.F.A., is faculty senate president and a member of the graphic design faculty at McDowell Technical Community College.

Read more in Techniques.
