A teacher’s guide to addressing AI cheating with the TRUST framework

Stop AI cheating with the TRUST framework. A practical guide for teachers on creating authentic assignments and fostering ethical AI use in the classroom.

Colton Taylor

Jul 10, 2025

AI cheating incidents in universities have jumped from 1.6 to 7.5 per 1,000 students in just two years. You've probably noticed the signs: suspiciously polished assignments, discussion posts with eerily similar phrasing, and confused questions from students about what constitutes acceptable AI use. This isn't just about students using ChatGPT for homework research anymore. It’s a fundamental shift in how academic work gets done.

The reality is that students are already using AI tools daily, and blanket bans often drive that use underground while creating equity gaps. The solution is to teach students how to use AI as a learning partner rather than a shortcut, while designing learning experiences that naturally discourage academic dishonesty. And the TRUST framework is designed to do exactly that.

The TRUST framework: Your roadmap to ethical AI use in the classroom

The TRUST framework provides five interconnected pillars that work together to prevent AI cheating while enhancing the learning environment. The acronym stands for:

  • Transparency

  • Real-world tasks 

  • Universal design

  • Social construction 

  • Trial and error

The TRUST framework succeeds because it addresses the root causes of AI cheating rather than just symptoms:

  • Unclear expectations → Transparency: Research consistently shows that academic misconduct decreases when expectations are clear, collaboratively developed, and consistently communicated.

  • Disconnected assignments → Real-world tasks: Authentic assessment reduces cheating by creating personal investment and requiring contextual knowledge that AI cannot provide.

  • Limited learning pathways → Universal design: When students can succeed through their strengths, they're less likely to seek shortcuts through their weaknesses.

  • Isolated work → Social construction: Peer accountability and collaborative learning create natural deterrents to academic dishonesty while building critical thinking skills.

  • Perfectionism pressure → Trial and error: Process-focused assessment reduces the high-stakes pressure that drives students toward guaranteed AI solutions.

Transparency: Clear expectations prevent shortcuts

The problem: When students don't know where the line falls between acceptable and unacceptable AI use, they create their own rules, often leading to academic dishonesty. This ambiguity contributes directly to the increase in AI-assisted misconduct you're seeing in classrooms.

The solution: Create crystal-clear policies and open communication about AI use that eliminates guesswork and creates shared understanding.

Quick implementation:

  • Add an "AI-Use Disclosure" line to assignments: "If you consult an AI tool, list the tool and describe how it helped you in one sentence." This simple addition normalizes honest disclosure and shows students you value transparency over secret shortcuts.

  • Include a concise AI policy in your syllabus that takes just ten minutes to write but creates a consistent reference point all term: "You may consult generative AI tools for brainstorming, outlining, and grammar feedback. Any text that is copy-pasted from an AI tool, even if edited, must be cited in an appendix. Using AI to generate entire answers or to bypass the learning process is prohibited and will be treated as academic dishonesty."

  • Co-create classroom norms with your students through a five-minute think-pair-share: "Where do you think AI can help you learn, and where might it hurt your growth?" Collect responses, add the best ideas to your policy, and have students sign off. This collaboration increases buy-in and mirrors university honor codes.

Advanced transparency strategies:

  • Use Google Docs or Microsoft 365 version history to track writing progress, and tell students that you review these logs. A thirty-second scan can reveal sudden blocks of polished text that often signal AI insertion.

  • Implement regular check-ins where students explain their writing process, particularly for assignments that show dramatic improvements in quality or style.

  • Create anonymous feedback loops where students can ask questions about AI use without fear of judgment.

Why it works: Transparency eliminates the gray areas that lead to academic dishonesty. When students understand exactly what's expected and feel comfortable asking questions, they're far more likely to make ethical AI usage choices. Clear expectations, especially those developed collaboratively, significantly reduce academic misconduct.

Real-world tasks: Authentic learning reduces cheating appeal

The problem: Generic assignments invite generic AI responses. When students can't see the connection between their work and real-world applications, they're more likely to view assignments as obstacles to overcome rather than learning opportunities to embrace.

The solution: Design authentic assignments that require personal engagement, critical thinking, and real-world application where AI shortcuts fall short.

Quick implementation:

  • Transform traditional essays into community-based projects that involve real stakeholders. Instead of "write about poverty," have students interview local social workers and analyze specific community challenges.

  • Create assignments that require students to conduct fieldwork, interview people, or solve local problems that demand personal insight and local knowledge.

  • Use project templates that organize community-based work efficiently, making complex authentic assignments manageable for busy teachers.

Advanced real-world strategies:

  • Partner with local organizations to create assignments where student work contributes to actual community needs.

  • Design reflection components that require students to connect their personal experiences with academic concepts.

  • Implement micro-rubrics that focus on unique insights and personal connections rather than generic analysis.

Why it works: Real-world tasks demand students' own voices, perspectives, and experiences. When students see meaningful connections to their lives and communities, they invest more authentically in the work. These assignments naturally produce unique outputs that generic AI responses can't match, making academic dishonesty both more difficult and less appealing.

Universal design: Multiple pathways increase engagement

The problem: One-size-fits-all assignments can push struggling students toward AI shortcuts when they feel unable to succeed through traditional methods. Students with different learning styles, language backgrounds, or accessibility needs may turn to AI not to cheat, but because they can't demonstrate their knowledge through limited formats.

The solution: Offer multiple ways for students to demonstrate their learning while maintaining consistent academic rigor.

Quick implementation:

  • Transform written reflections into choice-based assignments that all address the same learning objectives: students can submit an audio diary, create a visual timeline, build an annotated presentation, or write a traditional essay.

  • Develop one master rubric focusing on transferable skills like analysis, accuracy, and clarity, regardless of format. This approach lets you score diverse products quickly while maintaining consistent expectations.

  • Create labeled submission folders or assignments in your LMS (Video, Audio, Visual, Written) so that different file types organize automatically.

Advanced universal design strategies:

  • Provide the same guiding questions for all format options so cognitive demand stays consistent across choices.

  • Connect tasks to real audiences when possible, having students create content for community members, younger students, or professional networks.

  • Use platforms that can collect diverse media types in one dashboard with time-stamped feedback capabilities.

Why it works: When students can showcase their strengths and learning preferences, they're more invested in authentic work. The personal connection and sense of competence reduce the appeal of generic AI responses. Additionally, diverse formats make it easier to spot AI-generated content, as students' individual voices and perspectives become more apparent across different media.

Social construction: Peer learning creates accountability

The problem: Isolated, take-home assignments make it easier for students to use AI undetected. When work happens in isolation, there's less natural accountability and fewer opportunities for authentic intellectual exchange.

The solution: Build community learning experiences with built-in peer accountability that make authentic engagement more rewarding than shortcuts.

Quick implementation:

  • Implement regular peer feedback rounds with a tight structure: 10 minutes to exchange drafts, 5 minutes for focused feedback using guiding questions ("What's clear?" "What needs evidence?"), and 2 minutes for writers to note revision goals.

  • Rotate clear roles (writer, reviewer, recorder) each session to ensure participation, finishing with one-minute exit tickets where everyone notes the most helpful comment they gave or received.

  • Grade both the product and the peer review process, making meaningful participation part of the assignment rather than an optional extra.

Advanced social construction strategies:

  • Create reusable templates in your LMS discussion board with prompts, role charts, timing cues, and rubric links for constructive feedback.

  • Implement workshop days where students swap drafts and give targeted feedback using simple rubrics that focus on revision quality and implementation of peer suggestions.

  • Use social accountability strategically. When students see each other's work unfold over time, unusual text patterns or sudden quality jumps become more visible.

Why it works: Social learning creates natural accountability while building critical thinking skills. When students know their peers will see their work process, they're more likely to engage authentically. Collaborative environments also make AI-generated content more obvious, as it often lacks the personal voice and contextual understanding that emerges from peer interaction.

Trial and error: Process focus reduces pressure for perfect products

The problem: High-stakes assignments create pressure that drives students toward AI shortcuts. When everything depends on a single perfect final product, the temptation to use AI for guaranteed quality becomes overwhelming.

The solution: Build iteration, revision, and learning from mistakes into your assessment design, celebrating growth over perfection.

Quick implementation:

  • Break large assignments into visible milestones: outline → first draft → revision → final product, each with brief "process notes" explaining changes and growth.

  • Require low-stakes practice opportunities before high-stakes assessments: pair rough drafts with auto-graded quizzes that provide immediate feedback on understanding.

  • Dedicate rubric points to visible growth: quality of self-reflection, evidence of incorporated feedback, or clarity in explaining revision choices.

Advanced trial and error strategies:

  • Capture quick evidence of authentic thinking through two-minute screen-recorded think-alouds or snapshots of handwritten brainstorming.

  • Use streamlined workflows with staggered deadlines (proposal → draft → final) and LMS settings that require submission before viewing feedback.

  • Hold brief conferences for any submission that changes dramatically between drafts, using these conversations as learning opportunities rather than interrogations.

Why it works: When the learning process matters as much as the final product, AI shortcuts lose their appeal. Students focus on growth, revision, and genuine understanding rather than just getting the right answer. This approach also provides multiple checkpoints where authentic learning becomes visible, making AI-generated content easier to identify while supporting legitimate student development.

The complete TRUST implementation timeline

For systematic change, implement all five components over a semester:

  • Weeks 1-2: Establish transparency with clear AI policies, student input on classroom norms, and consistent disclosure expectations.

  • Weeks 3-4: Redesign one major assignment to include real-world connections, community partnerships, or authentic problem-solving components.

  • Weeks 5-6: Introduce choice in how students demonstrate learning, providing multiple formats while maintaining consistent academic rigor.

  • Weeks 7-9: Build peer review and social construction into major projects, creating accountability through collaborative learning.

  • Weeks 10-12: Focus assessment on process and growth, celebrating revision, reflection, and learning from mistakes.

This timeline allows gradual implementation while building student buy-in and comfort with new expectations.

Using AI detection tools within TRUST

AI detectors can support the TRUST framework, but shouldn't replace human judgment or meaningful pedagogy. Even the best tools have significant limitations. For instance, OpenAI's now-retired detector only caught 26% of actual AI use while incorrectly flagging 9% of authentic student writing as AI-generated.
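
To put those error rates in perspective, consider a hypothetical class of 100 submissions in which 10 involve undisclosed AI use. A detector with a 26% detection rate would catch roughly 3 of the 10 AI-assisted papers (0.26 × 10 ≈ 2.6), while a 9% false-positive rate would flag about 8 honest students (0.09 × 90 ≈ 8.1). Under these assumptions, most flagged papers belong to students who did nothing wrong, which is why a detector score should open a conversation, never close one.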

If you do use AI detectors, here’s how to make them work for you:

  • Use detectors as a first filter, never as a final judgment.

  • Combine detection reports with version history analysis, student conferences about writing processes, and comparison with previous work samples.

  • Be transparent about which tools you use and their limitations. 

  • Keep detailed records when investigating potential AI use: risk scores, dates, key excerpts, version history evidence, and conversation summaries.

When talking with students about potential AI use, maintain an investigative rather than accusatory tone. Ask them to walk through their drafting process, open document history together, or explain how they verified information if they used AI tools. These conversations often reveal legitimate learning processes while providing opportunities to reinforce ethical AI use.

From prevention to partnership: Using classroom AI the right way

The goal of the TRUST framework isn't to prevent students from ever using AI. It's to transform AI from a cheating threat into a learning partner. When students understand expectations, engage with meaningful tasks, have multiple ways to succeed, learn from peers, and see mistakes as growth opportunities, they naturally choose authentic engagement over shortcuts.

This transformation requires the right tools and support systems. Platforms like SchoolAI provide invaluable resources for guiding students toward ethical AI use, offering features like transparent usage tracking, structured AI literacy lessons, and collaborative spaces that support authentic learning. With SchoolAI's Mission Control, you can review student AI interactions, model positive AI use, and maintain the visibility that makes the TRUST framework effective. Sign up today to discover how SchoolAI can help you build a classroom where students learn to use AI ethically while developing the critical thinking skills that will serve them throughout their lives.

Key takeaways

  • The TRUST framework (Transparency, Real-world tasks, Universal design, Social construction, Trial and error) systematically addresses root causes of AI cheating while enhancing learning experiences.

  • Clear expectations and collaborative policy development prevent the ambiguity that leads to academic dishonesty.

  • Authentic, personally meaningful assignments naturally discourage AI shortcuts by requiring unique perspectives and contextual knowledge.

  • Process-focused assessment reduces pressure for perfect AI-generated products while celebrating genuine learning and growth.

  • AI detection tools should supplement, not replace, thoughtful pedagogy and human judgment.
