
The Missing Piece in Your District's AI Strategy: Governance

Rich Henderson


Rich Henderson, VP of AI Solutions at Panorama, is an influential K-12 education technology expert who draws on decades of experience and a special education background to drive global solutions. Rich guides districts through digital transformation, particularly the responsible, ethical integration of Artificial Intelligence (AI), providing insights on sustainable AI strategies and cultivating essential AI literacy for staff and students worldwide.


AI is on the agenda in nearly every district leadership meeting right now. The tools flooding classrooms promise to save teachers time, help schools move faster, and do more with less. Adoption is rising: recent surveys show a growing share of teachers are already using AI in some form to support planning, lesson creation, and communication.

So why aren’t student outcomes improving?

In the push for speed and broader AI use, something important is getting overlooked: quality.

AI is remarkably good at producing work that looks polished on the surface. But polished doesn’t always mean effective. A lesson plan can appear complete, yet miss the rigor of the standard it’s meant to teach. An intervention plan can sound clear and structured, yet fail to address what a student actually needs.

There is an AI quality problem in K-12 right now, and it’s more urgent than most leaders realize.

AI Is Very Good at Looking Right—and That Creates Risks 

AI doesn’t struggle to produce something that looks coherent. Give it a prompt, and it will return something structured, logical, and confident—output that reads like something an educator would produce.

But that’s not the same as being right, useful, or aligned with education context and purpose. For example:

  • Writing feedback that sounds supportive but gives a student nothing actionable
  • An IEP that looks thorough but doesn't meet state compliance requirements
  • Career guidance that sounds personalized but has no connection to what's actually available at a student's school

It all looks usable. But in many cases, low-quality AI can introduce legal and compliance risks, or leave gaps in the support students are supposed to receive.

Part of the problem is context. Generic AI tools don’t know your students, your standards, or your district priorities. Without that context, even strong-looking outputs can miss the mark in ways that aren’t immediately obvious.

And when that happens at scale, it becomes a system-wide problem. Districts lose control of coherence, rigor, and compliance. Inconsistent, low-quality AI across an entire district creates variability in how students receive support, and it can actually move outcomes backwards.

High-quality, actionable AI requires the ability to bring in student data, district materials, and local context—safely and securely, within a walled environment with education controls that protect student privacy.

This is a Governance Problem—One That's Solvable 

There is amazing potential for AI in schools. AI can support IEP development, streamline communication, surface insights about student success, and serve a range of other use cases, including making it easier for teachers to deliver timely, personalized feedback. The challenge is that there’s a wide gap between high-quality and low-quality AI, and that gap creates a governance problem. When your district rolls out AI, how do you ensure it actually meets a high bar for quality?

Schools already know how to manage this kind of challenge. When it comes to curriculum, instruction, or school safety, leaders don’t settle for surface-level effort. They set clear standards, evaluate against them, and hold systems accountable over time.

AI needs the same approach.

That starts with asking different questions. Not just “Are educators using this?” but:

  • What problems are we trying to solve?
  • What does good look like in our context?
  • How will we know if this is actually working?

It also means ensuring the AI tools brought into schools have the right context, safeguards, and data protections in place to produce outputs that are not just fast, but genuinely useful.

The Standard Has to Change 

Hot take: right now, adoption is moving faster than judgment. Tools are getting into classrooms without a shared understanding of what good looks like, or how to tell high-quality output from low-quality output.

In my opinion, this is a leadership responsibility. District and state leaders are already accountable for curriculum quality, instructional standards, and student support. AI needs the same level of oversight and thoughtful approach. We should be asking:

  • What problems is our district actually trying to solve with AI?
  • What does high quality output look like in those moments?
  • How will we know if the tools we’ve adopted are actually delivering on that?

The systems that get this right won’t be the ones that use the most AI. They’ll be the ones that define what high quality looks like first and then scale toward it with intention. That’s what it means to govern AI quality.

And it’s the big question every district leader should be asking right now: Are we steering AI in a way that actually improves outcomes for students, or are we just trying to move fast?
