It was all over my LinkedIn feed not too long ago.

A report by the Brookings Institution calling for “a new direction for students in an AI world” …

… and an NPR article about the report that cherry-picked one line to use as its headline: “the risks of AI in schools outweigh its benefits.”

I actually read this report and noticed that its findings are very similar to what so many of us have been saying about the impact of AI in education.

Of course, if you don’t read the report and only focus on a few attention-grabbing lines, you’d come to a certain conclusion …

AI is ruining education.

After going through the report, I have lots of questions. I still have optimism that AI can be powerful … but I also have my fair share of concerns.

Below, in today’s main section, let’s look at it and see what we can learn.

PS: I created a NotebookLM notebook with the whole 219-page report so you can ask it questions and listen to audio/video summaries of it.

In this week’s newsletter:

  • 🎙️ Book me for keynotes and teacher workshops

  • 📚 New AI resources this week

  • 📢 Your voice: AI academic integrity issues

  • 🗳 Poll: Do risks outweigh benefits?

  • 📊 Do AI risks outweigh benefits?

🎙️ Book me for keynotes and teacher workshops

I’m booking events for this summer and back to school right now!

I may be teaching part-time in the classroom, but doing keynote speeches and teacher workshops is still my main jam!

I’m fully booked for the rest of the 2025-2026 school year. But I have an opening or two in June — and we still have some slots available for back-to-school convocations in August!

There’s so much we could do …

  • ❤️ My keynote, “The Art and Science of Memorable Teaching,” is fun (we sing!) and helps teachers remember what they love about teaching.

  • ⚡ My keynote, “The Attention Switch,” is about student engagement and relevant learning. (We do a YouTube-style unboxing video together!)

  • 👓 My keynote, “Tomorrow Glasses,” helps teachers understand AI, how it can help them, and how to prepare students for the future — today.

  • 💻 My hands-on blended learning workshops are interactive and get teachers using tech tools in meaningful ways to spark learning.

Teachers love my presentations because they’re practical and relevant — and because of my perspective as a practicing teacher.

Interested? Want details, availability, or a quote? Email Melanie ([email protected])!

📚 New AI resources this week

1️⃣ AI in K–12: A New Year Reality Check for School Leaders (via Education Week) — An opinion piece urging educators to anchor AI adoption in core educational goals and avoid being swept up by hype without thoughtful planning.

2️⃣ Peering Into the Future: Look for These K-12 Education Trends in 2026 (via EdSurge) — AI’s expanding role — from classroom tools to policy focus — is featured among the key trends shaping schools this year.

3️⃣ What can U.S. schools learn about AI education from their Chinese counterparts? (via K12 Dive) — International perspectives on rapidly scaling AI literacy in schools offer insights for U.S. districts seeking to build robust AI curriculum and practice.

📢 Your voice: AI academic integrity issues

Last week’s poll: What would you address FIRST with AI academic integrity issues?

🟨⬜️⬜️⬜️⬜️⬜️ Stopping student misuse (7)
🟩🟩🟩🟩🟩🟩 Modeling appropriate use (38)
🟨🟨⬜️⬜️⬜️⬜️ Student-influenced norms and expectations (17)
🟨🟨⬜️⬜️⬜️⬜️ Discussing AI/human balance (14)
🟨🟨🟨⬜️⬜️⬜️ Well designed lessons that use AI responsibly (23)
⬜️⬜️⬜️⬜️⬜️⬜️ Other ... (2)

Modeling appropriate use: You'll never stop students from cheating. They've been doing that since the dawn of time. But if you show students how to use AI appropriately and hold them (and yourself!) to that standard, then that's what they'll do. Yes, you'll always have that one student who cheats, but you'll have the rest of the class using it appropriately. — Sam S.

Modeling appropriate use: Students and staff need to see how to best use AI instead of just tossing them in the pool and hoping they can swim! — D. Douglas

Other: Character Development. Integrity is a matter of the heart, so reach and teach to the heart ❤️ above all else, even over usability, norms, expectations, AI/human balance, and lessons. Carefully consider the user group to whom you're addressing and then use pathos, logos, and ethos in a variety of ways to reach all learning styles. Make integrity relative to all and in all circumstances of life. — R. Joslyn

Student-influenced norms and expectations: I strongly believe that engaging students in discussions about responsible AI use—specifically, when and how to use it—is essential to ensuring they become truly AI literate while also becoming AI capable. — B. Mischnick

Well designed lessons that use AI responsibly: I think this answer is the start, and modeling appropriate use is part of this. When we design lessons that use AI responsibly, we are modeling appropriate use. […] We can't just tell students to use AI appropriately and then not teach them how or what that looks like in our class. — Crystal Blais, Tech Integrator

What would you like to read in AI for Admins?

What’s a topic you’d like to see covered here? Hit REPLY to this email and let me know.

Have you done anything you’d like to share with the AI for Admins community? Hit REPLY and let me know.

Would you like to write a guest post to support and equip AI for Admins readers? Hit REPLY and let me know.

🗳 Poll: Do risks outweigh benefits?

Instructions:

  1. Please vote on this week’s poll. It just takes a click!

  2. Optional: Explain your vote / provide context / add details in a comment afterward.

  3. Optional: Include your name in your comment so I can credit you if I use your response. (I’ll try to pull names from email addresses. If you don’t want me to do that, please say so.)

What do you think: do AI's risks outweigh the benefits?

Afterward, explain your answer in a comment!


📊 Do AI risks outweigh benefits?

A recent report has interesting findings regarding AI in education. Image: Google Nano Banana

I usually value and respect NPR’s journalism (especially as a college journalism graduate myself).

I see where the NPR article was going, but if you dive into the actual Brookings Institution report (all 219 pages of it), you start to see something a little more optimistic, with more practical advice.

Get answers in this shared NotebookLM notebook

As I do these days with any big report I want to understand, I put it in NotebookLM and started asking questions.

You can open the notebook yourself and ask questions — as well as check out the audio overview, video overview, and infographic I created in it.

Is this report really all about risks?

The NPR article I mentioned does go back and forth between pros and cons, but it leads with this comment:

It found that using AI in education can "undermine children's foundational development" and that "the damages it has already caused are daunting," though "fixable."

In addition to reviewing NotebookLM’s summaries, I read through a good deal of the report with my own human eyes, and here’s my conclusion …

It’s saying a lot of the things that so many of us are saying. There are positives and negatives to the way AI is being used right now, as well as positive and negative paths forward.

Here’s the NotebookLM infographic summary of the whole report …

A NotebookLM infographic summary of the report.

Notice that it’s a set of scales, balanced between AI-enriched learning and AI-diminished learning. I think this illustration better captures what the report aims to communicate.

Below, I share some of my observations as I scrolled through the report.

Is technology really the culprit?

Early in the Brookings Institution report, the authors call out edtech in general as a culprit. Claims like these always make me want to dig deeper and see what the data actually shows.

Here’s what the report said …

“However, decades of investment and implementation have demonstrated that technology’s educational benefits have been mixed at best (UNESCO 2023). Multiple rigorous cross-national studies have shown that education systems investing heavily in technology do not necessarily experience improved teaching and learning outcomes (OECD 2015; West 2023).”

  • Note: The West article they reference is a UNESCO report about edtech and COVID-19 school closures. It seems unfair to evaluate the effectiveness of edtech based on a time when we had to teach with it full-time, in an emergency capacity, with no training.

Here’s what’s missing from lots of these “edtech doesn’t work” claims: they very rarely look closely at how the technology is being used. Is it being used as a replacement for paper worksheets? To administer standardized tests? To fill in digital crossword puzzles? Or is it being used in ways that promote critical thinking, active learning, and collaboration?

The 3 P’s framework for responsible AI integration

The report offers three pillars for action:

  • Prosper: transforming teaching and learning experiences so that children and youth can thrive in an education system where AI is omnipresent.

  • Prepare: building the knowledge, capacity, and structures needed for students, educators […] to integrate AI ethically, effectively, and humanely.

  • Protect: developing and implementing safeguards on AI for student privacy, safety, emotional well-being, and cognitive and social development.

Context: Students are increasingly accessing AI outside of school

… and, the report says, families continue to be on the front lines.

Source: Brookings Institution report

“Little support is provided in terms of programs, outreach, or AI literacy for families and community organizations who care for children outside of school, yet the majority of families we interviewed want this support,” the report said.

Here’s a student quote from the report: “A lot of schools, including mine, blocked ChatGPT, but people will go on their phones and use Snapchat AI or Meta AI. People use Gemini, they use DeepSeek. Really, there are so many AIs that even if a school bans them, people are still going to be able to access it.”

Source: Brookings Institution report

AI helps when we get the pedagogy right

I found this productive and helpful: a vision for what it looks like when we get it right.

“Research indicates that technology contributes most effectively to educational improvement when embedded within carefully designed and implemented strategies (Hardman et al. 2019). This requires several conditions.”

  1. Tools and platforms must be designed ethically and responsibly, grounded in the learning sciences.

  2. Schools and families must work in partnership to ensure that children’s AI use supports—not harms—their learning and development.

  3. AI tools must support human relationships, including the teacher-student relationship, using sound pedagogical practices designed to augment, rather than substitute student learning.

  4. Finally, educators and students must remain aware of both the benefits and harms these technologies present.

When these conditions align, AI has the potential to meaningfully enhance educational outcomes while minimizing risks to learners.

The report states that “AI can enrich learning if well-designed and anchored in sound pedagogy.”

It also points to teacher productivity, stating: “AI can optimize teacher time for greater focus on students.” This lines up with what I’ve been saying for years in my teacher workshops. If we use AI to shorten the amount of time we spend on drudgery and things that don’t require our humanity, we can focus it on things that maximize our humanity — including time spent with students. In that way, it’s helping us to use what makes humans special in a more effective way.

Other benefits listed by the report include:

  • improving equity by addressing gaps and expanding access

  • improving student learning (especially reading and writing)

  • tailoring learning to student needs

  • supporting neurodivergent students and students with disabilities

  • advancing assessment

One concern: Cognitive offloading

One section in the report dives into the concept of “cognitive offloading”: handing off to an AI tool the valuable thinking that drives learning.

It states: “As AI tools continuously improve, they become increasingly seductive to use, creating what amounts to an existential danger to learning itself. Research reveals a strong positive correlation (r = +0.72) between excessive AI tool use and cognitive offloading, with younger participants who relied heavily on AI tools scoring lower on critical thinking skills than their older counterparts (Gerlich 2025).”

  • Note: I have questions about this Gerlich study. On a questionnaire, participants self-reported how much they used AI on a variety of tasks and how often they looked things up (rather than figuring them out themselves). Cognitive offloading was measured by participants’ answers to those questions, and those self-reported answers were then correlated. That doesn’t seem like a very precise way to measure cognitive offloading to me.

This section of the report highlights a huge concern: cognitive offloading can create the appearance that students are improving their learning when, in reality, they’re not.

“For many students in this study, AI demonstrably improves their work and grades. It provides seemingly correct answers, simplifies and accelerates completion of tasks that students perceive as difficult, and enables them to fulfill what many view as education’s transactional nature—completing assignments for grades. Given this positive feedback loop and their developmental stage, many teenage students lack the executive functioning, metacognition, and self-regulation skills to recognize that learning involves friction and effort and that cognitive offloading poses both immediate and long-term developmental risks.”

Here’s something else this section doesn’t cover: using AI in ways that support cognition instead of offloading it.

It’s still up to the teacher (and the school) to design learning effectively. When we show how these tools can support learning, and build them into lessons when appropriate, we model for students how to use AI to support their own learning.

What can we do? 12 recommendations

At the end of the report, the authors give 12 recommendations for all stakeholders in the AI-in-education dynamic. Below is a quick AI-generated list of those 12, each with a one-sentence summary.

Pillar 1: Prosper

Focuses on transforming teaching and learning experiences so children can thrive.

1. Shift educational experiences in school. Schools must move away from transactional task completion and intentionally identify when and how AI should be used to align with proven learning sciences.

2. Co-create educational AI tools with educators, students, parents, and communities. To ensure tools are fit for purpose, developers must deeply engage users—including those in marginalized communities—in the design process rather than relying on superficial consultation.

3. Use AI tools that teach, not tell. Technology companies should optimize their products for children’s developmental needs by prioritizing designs that encourage active learning over the passive consumption often found in general-purpose models.

4. Conduct research on children’s learning and development in an AI world. Policymakers and educators need rigorous, longitudinal data on how AI impacts students’ cognitive skills, affective states, and motivation to inform effective programming.

Pillar 2: Prepare

Focuses on building the knowledge, capacity, and structures for effective AI integration.

5. Promote holistic AI literacy for students, teachers, parents, and education leaders. Education systems should integrate ethical understanding, critical thinking, and practical skills across curricula to ensure all stakeholders can engage with AI responsibly.

6. Prepare teachers to teach with and through AI. Pre-service and in-service training programs must be transformed to equip educators with the skills to use AI ethically and the confidence to preserve authentic student learning.

7. Provide a clear vision for ethical AI use that centers human agency. Systems should establish policies that prioritize student agency and critical thinking, vetting tools against criteria that promote human reflection rather than automation.

8. Employ innovative financing strategies to close the AI divide. Governments and systems must use equity-focused policies and creative funding mechanisms to ensure marginalized communities and under-resourced schools are not left behind.

Pillar 3: Protect

Focuses on safeguarding student privacy, safety, and well-being.

9. Break the engagement addiction and design platforms that are centered around positive mental health for children and youth. Platform developers should minimize manipulative engagement features and shift their success metrics from time-on-device to utility and well-being.

10. Establish comprehensive regulatory frameworks for educational AI. Governments need flexible frameworks that align AI governance with student rights, embedding accountability and ethical design standards across development and deployment.

11. Procure technology that protects students’ privacy, safety, and security. Education systems should leverage their significant purchasing power to buy only those technologies that feature built-in protections against data misuse and safety risks.

12. Support families to manage children’s AI use at home. Schools and civil society must provide families with quality information and support to facilitate conversations about responsible AI use and mitigate overuse outside the classroom.

Conclusion: Work through the hype

When I saw discussions about this report on social media, my initial thought was that it was all about how AI was ruining education.

I’ve seen the negatives myself, but I’ve also seen positives — and a promise of something better if we learn how to use these technologies effectively.

After reading the NPR article, and then diving deep into the Brookings Institution report myself, I found much more optimism and practicality in the report.

If you’d like to dive into it yourself, I encourage you to open the NotebookLM notebook I shared with the PDF in it and start asking questions (and exploring the resources in the studio).

I hope you enjoy these resources — and I hope they support you in your work!

Please always feel free to share what’s working for you — or how we can improve this community.

Matt Miller
Host, AI for Admins
Educator, Author, Speaker, Podcaster
[email protected]
