Transparent, teacher-controlled AI
You set the rules, see every interaction, and stay in control. Our AI explains its decisions, follows your instructions, and evolves based on educator feedback.
Understanding how our AI makes decisions
SchoolAI believes in transparency when it comes to AI decision-making. Every response generated by our platform includes multiple inspectable guardrails, and we prompt the AI to provide detailed explanations of its reasoning process.
When teachers or administrators want to dig deeper, we can provide detailed information about the prompting and context that led to any specific response. Additionally, every piece of feedback from teachers and students is reviewed and used to improve our prompts, with updates happening at least 3 times per week.
Teachers stay in the driver's seat
Teachers stay firmly in control with SchoolAI. When creating Spaces, educators have complete control over AI instructions, activity procedures, student progression conditions, and the types of insights they want to surface. Our prompting consistently reinforces the need to follow teacher instructions. Nearly every processing node in our system specifically calls out this requirement.
Beyond basic control, teachers receive immediate alerts when concerning situations arise. Our critical alert system flags inappropriate content, sensitive topics, or other situations requiring teacher intervention with red alerts for immediate response. Mission Control also provides real-time insights, flagging students who aren't grasping concepts or need more of a challenge so teachers can step in exactly when needed.
Continuous improvement through school partnership
Our protocols aren't static, they evolve constantly based on real classroom needs. We update prompts weekly based on direct observations, identified issues, and statistical analysis of user feedback. Every time a teacher or student grades a response, that feedback becomes part of our improvement process.
Schools play a crucial role in this evolution. The feedback mechanisms built into our platform allow educators to directly influence how we improve our AI responses. Additionally, Space configuration itself serves as a protocol that teachers control entirely. Coming soon, school systems will even be able to set custom instructions to guide how SchoolAI handles specific situations unique to their organization.
Building fairness into every interaction
Creating an unbiased, equitable learning environment starts at the foundation. The major AI providers we work with employ large safety teams dedicated to reducing bias in their models. We build upon this foundation with additional layers of protection.
We evaluate every response through multiple lenses: academic rigor, individual student needs, and equity for diverse populations. Using established educational frameworks, our system ensures responses are educationally sound while remaining accessible and fair to all learners. We run adversarial evaluations on every response before it reaches students, actively working to eliminate bias and ensure fairness.
Respecting community values while protecting students
SchoolAI's AI is designed to be inclusive of all perspectives and viewpoints, allowing teachers and districts to guide outputs toward their community's expectations. However, we maintain two non-negotiable stances: the paramount importance of human life and maintaining appropriate AI-student interactions.
All our prompting is oriented toward preserving life and preventing harm. If a student expresses concerning thoughts, teachers are immediately alerted through our critical alert system. Similarly, we ensure AI remains an educational tutor, never becoming overly personal or inappropriate. When handling polarizing topics, our AI presents nuanced, objective viewpoints without taking partisan stances, while always erring on the side of kindness and inclusivity.
Clear AI identity and transparency
Students using SchoolAI always know they're interacting with AI. Our systems are prompted to be clear about what Dot is—an AI machine that serves as a helpful tutor. Even during role-playing activities (like historical simulations), we maintain strict conditions ensuring students understand they're interacting with AI playing a role, not a human or the actual historical figure.
Protecting student wellbeing
SchoolAI takes a clear stance against validating harmful self-talk or concerning thinking patterns. Our system is specifically prompted to respond with empathy while maintaining the importance of life and wellbeing. Any concerning self-talk or harmful patterns trigger immediate teacher alerts, ensuring appropriate adult intervention.
Fostering genuine learning and not giving answers
SchoolAI is built to be an educational tutor, not a search engine. Extensive prompting ensures the AI coaches students through learning processes rather than simply providing answers. Our team frequently and specifically tests whether the AI can be tricked into giving direct answers, and consistently finds that it refuses, instead guiding students step-by-step through their learning journey.
This commitment to genuine learning outcomes is fundamental to everything we build. We're focused on helping students truly understand and grow, not just complete assignments.
Building inclusive, culturally responsive learning
Creating inclusive content isn't just our goal, it's a commitment shared by all our AI partners. The major AI providers we work with employ thousands of talented people working daily on inclusivity and cultural responsiveness. We only partner with providers who demonstrate strong commitment to these values.
On top of these foundational protections, we add our own prompting and guardrails to ensure equitable responses. This is an iterative process, and we're constantly improving based on feedback and new insights from the education community.
Rigorous safety testing through
red-teaming
We actively conduct adversarial testing to find and fix potential vulnerabilities. When we discover situations where the AI might generate inappropriate responses, we immediately update our prompts to handle these scenarios appropriately. This includes ensuring the AI directs students to trusted adults for sensitive topics unless teachers have explicitly included those topics in their instructions.
We're currently hiring a dedicated red team to conduct comprehensive adversarial testing across all aspects of our platform—from information security to content safety. These tests will examine everything from prompt injection vulnerabilities to self-harm prevention, and we plan to publish our findings publicly, showing our testing methods, issues discovered, and fixes implemented.
Expert guidance shapes our approach
Educator feedback is continuously incorporated into our system through our comprehensive feedback loops. We regularly consult with legal counsel to ensure appropriate responses and maintain compliance with educational privacy laws including COPPA. This ongoing legal consultation helps ensure we're meeting the highest standards for student data protection and privacy.
Accountability and responsibility
When concerns arise about AI responses, we have clear ownership and accountability structures in place. Dedicated team members are notified of every response grading and oversee statistical analysis of all user feedback. Our prompt engineers are directly responsible for improving outputs based on both user feedback and internal simulations.
This isn't just about fixing problems, it's about continuous improvement. We maintain clear ownership paths for enhancing our AI's performance, ensuring that feedback leads to meaningful changes that benefit all users.

We’re committed to transparency
Trusted by over 450,000+ educators worldwide














