Dr. Rob Wessman, Vice President of Ethics, Safety, and Learning Innovation at SchoolAI

In April 2026, a coalition of 260 children's advocates and researchers called for a five-year pause on generative AI in PreK-12 schools. They named five tests any AI product should meet before it belongs in a classroom.
We disagree with the pause. We agree with the tests. Most AI products in classrooms today cannot meet them. Districts should not accept any that cannot. Here is how SchoolAI answers each.
1. Learning outcomes, not cognitive offloading
The deepest concern about AI in education is that it does the thinking for students. We share that concern, and we built against it. Dot, our AI learning assistant, is structured around the warm demander stance: it pushes students back to their own reasoning rather than producing finished answers.
We have two years of externally validated evidence that this works. In a study of 13,882 student-AI conversations across 82 teachers in Utah's Jordan School District (55,000 students, 68 schools), critical thinking rose 28% between October 2023 and October 2025. Conversations at the highest levels of Bloom's Taxonomy (analysis, evaluation, and creation) more than doubled. Gains held across every subject and grade level studied. The study received ESSA Tier 3 certification following external review by Instructure.
The most important finding for districts is what drove those gains. Teachers who actively designed learning experiences on the platform showed significant improvement; teachers with minimal usage did not. AI did not raise the cognitive bar by itself. Teachers raised it, with AI as the instrument. That is the model of AI in education we believe in.
2. Safer than the realistic alternative
No technology in a child's life is absolutely safe. Not paper books, not pencils, not classroom pets. The right question is comparative: does supervised, teacher-mediated classroom AI make students safer than the unsupervised consumer AI they already use at home? We believe the answer is yes, by a wide margin.
The chatbot harms that have rightly alarmed the public have happened with consumer products outside school oversight, where no adult sees the conversation. Inside SchoolAI, the architecture is the safeguard. Our Critical Alerts system flags concerning conversations to school staff in real time. A four-tier escalation process governs how incidents are handled, with named human responsibility at every level. Content is age-banded for K–7 and 8–12. Image generation is gated, with guards against the "educational reframing" loophole, in which a prohibited request is dressed up in an instructional pretext.
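To make the shape of this architecture concrete, here is a minimal sketch of tiered alert routing and age-banding. Everything in it is illustrative: the tier names, categories, severity scale, and function names are our hypothetical stand-ins, not SchoolAI's actual implementation. The one property it does faithfully reflect is the design principle above: every tier names a human responder.

```python
from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    # Hypothetical four-tier ladder; real tier definitions are product-internal.
    MONITOR = 1     # logged for later teacher review
    TEACHER = 2     # real-time alert to the supervising teacher
    COUNSELOR = 3   # escalated to school counseling staff
    ADMIN = 4       # escalated to administrators / district protocols

@dataclass
class Alert:
    student_id: str
    category: str   # e.g. "self_harm", "bullying", "policy"
    severity: int   # 1 (low) .. 4 (critical), assigned upstream

def route_alert(alert: Alert) -> Tier:
    """Map a flagged conversation to a responsible human.

    No branch resolves an incident without an adult seeing it;
    an algorithm routes, a person decides.
    """
    if alert.category == "self_harm" or alert.severity >= 4:
        return Tier.ADMIN
    if alert.severity == 3:
        return Tier.COUNSELOR
    if alert.severity == 2:
        return Tier.TEACHER
    return Tier.MONITOR

def age_band(grade: int) -> str:
    """Content age-banding as described in the text: K-7 vs 8-12 (K as 0)."""
    return "K-7" if grade <= 7 else "8-12"
```

The point of the sketch is the invariant, not the thresholds: supervised classroom AI can guarantee that every concerning conversation reaches a named adult, which unsupervised consumer AI structurally cannot.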
These are not features bolted onto a chatbot. They are the conditions under which we believe AI should ever interact with a child. Banning supervised, instrumented AI from schools does not make students safer. It pushes them toward the consumer AI in their pockets, which has none of the same protections, and where the harms keep happening.
3. Anti-cheating by design
Removing supervised AI from classrooms does not reduce student AI use. It moves it home: unsupervised, unmediated, with no teacher in the loop. That is the worst of both worlds for academic integrity.
SchoolAI is designed against this from the ground up. Spaces give teachers full real-time visibility into every student conversation: who is working, what they are asking, where they are stuck, where they are shortcutting. Educators set the scope, the context, the rules, and the resources of every session. And Dot scaffolds thinking rather than producing homework, pushing students toward their own reasoning instead of handing them a finished answer.
The cheating concern in education is real, and it predates AI. What is new is that for the first time, schools can put student AI use under direct teacher observation rather than driving it underground. That is a feature of supervised classroom AI no consumer product offers, and no AI ban can deliver.
The path to academic integrity in the AI era is not less supervised AI. It is more student AI use happening where a teacher can see it.
4. Privacy, equity, and the things we cannot bluff
The four areas in this test (privacy, civil rights, ethics, and climate) are also the easiest places in AI marketing to make claims that do not survive contact with reality. We try to claim only what we can defend.
Privacy is native to who we are. Our compliance posture is independently verifiable at trust.schoolai.com, covering COPPA, FERPA, UK GDPR, Australian ST4S, South Korean PIPA, Canadian MFIPPA, and the EU AI Act's August 2026 transparency obligations. We work to use less student data, not more, and to be auditable about how we use what we do.
On equity, our IEP drafting tool embodies, in product form, a commitment to the students public education most often underserves: faster IDEA-compliant work for the educators serving neurodivergent students. Our work on bias and fair representation is an active program, not a completed one, aimed at keeping stereotyped content out of instruction.
Climate is where we owe the most honesty. We are an application-layer company and do not train the underlying models, so the largest portion of AI's environmental cost is not within our direct control. Within our control are the design choices: pedagogy that scaffolds focused student work rather than rewarding the sprawling sessions consumer AI optimizes for. The result is less inference per learning outcome, with pedagogy and efficiency pulling in the same direction. We will meet credible disclosure standards before they are required of us.
5. Built around teachers, by architecture
This is the test we hold most strongly, and the principle the product is built around. Most AI in education is consumer AI in a school skin: a chatbot students talk to. SchoolAI is the inverse: a teacher-facing platform with student-facing surfaces, where the educator orchestrates, monitors, and intervenes.
Spaces put teachers in real-time view of student work. Our tools do not produce student-facing content or replace educator judgment. Critical Alerts route concerning conversations to school staff so a human, not an algorithm, decides what happens next. The pedagogical reasoning behind Dot's responses is surfaced to teachers, not hidden, so educators can see the instructional logic and override it when their professional judgment says otherwise.
This architecture is the reason a SchoolAI classroom is a teacher's classroom. The platform does not run in parallel to instruction. It sits inside it. For the most vulnerable students, including neurodivergent students, at-risk students, and students of low socioeconomic status, a teacher in the loop is non-negotiable. That is how we built it. That is the only way we believe it should be built.
The AI children experience in schools should meet a higher standard than the AI in their pockets. Those children also deserve to graduate ready for a workforce where AI is a baseline expectation, not unfamiliar with the tools their work will require. That is the standard we build to, and the standard we would welcome regulation holding every vendor to.
The two-year critical thinking study referenced above, "SchoolAI Makes Students Think," is available at schoolai.com/research, along with our broader evidence portfolio.
