
NOTE: Due to a technical glitch from my email management system provider, yesterday’s email wasn’t sent. Hopefully the issue is resolved and you’re receiving this now — on Thursday!
It’s finally done!
The manuscript for my new book, AI Literacy in Any Class, is in the hands of my graphic designer.
This week, I should get design samples for my approval. By the end of next week, I should have the final files to send along to the printer.
I’m hopeful that the book will be available for purchase by mid-March!

The cover of my upcoming book, AI Literacy in Any Class.
In the weeks leading up to the book’s launch, I’ll share some of the key concepts and what to expect.
But for now …
I’m giving a virtual keynote about AI and critical thinking tomorrow at Panoramic 2026, a virtual summit put on by Panorama Education. (More details below.) I’d love for you to join — or watch a replay if you can’t be there live!
In today’s main piece (below), I share some interesting findings from the OECD Digital Education Outlook (full PDF here) (my NotebookLM notebook here). Some takeaways really piqued my interest.
(Oh, and there’s a rant about what happens when we don’t look at research findings closely enough!)
In this week’s newsletter:
🎉 Join me at Panoramic 2026!
📚 New AI resources this week
📢 Your voice: AI + reading
🗳 Poll: Which one do you address first?
🧠 AI in education: The “mirage of false mastery,” “pedagogical intent” and more
🎉 Join me at Panoramic 2026!
I’m excited to share that I’ll be speaking at Panoramic 2026, Panorama Education's free virtual summit.
It’s happening TOMORROW — Thursday, February 26, 2026.
I’ll host a keynote session about AI and student critical thinking. (Did I mention it’s free?)
Panoramic is a full day of learning, inspiration, and practical ideas for teachers and leaders. You’ll hear from voices across education on how AI can support student learning, make teaching a little easier, and help create the kinds of classrooms where deeper thinking and creativity thrive.
And if you can’t make it live, every session is on-demand, so you can watch whenever it fits your schedule.
📚 New AI resources this week
1️⃣ Teachers Want ‘Guardrails and Guidance’ on AI Use, Experts Tell Congress (via EdWeek) — Newly released survey data shows that most teachers are using AI but want much stronger guidance and support from administrators and policymakers.
2️⃣ Most Teens Believe Their Peers Are Using AI to Cheat in School (via Washington Post) — Latest Pew Research findings on student perceptions of AI misuse and implications for academic integrity approaches.
3️⃣ AI Challenges Core Assumptions in Education (via Stanford HAI) — Insights from the recent AI+Education Summit on rethinking assessment, literacy, and instructional models in an AI-rich context.
📢 Your voice: AI + reading
Last week’s poll: I talked about using NotebookLM to learn new material. How should that balance with actually reading?
🟨🟨🟨🟨🟨⬜️ Learning through AI is still learning. (14)
🟨🟨🟨⬜️⬜️⬜️ Reading texts is still superior. (7)
🟩🟩🟩🟩🟩🟩 AI is OK for review, but start with reading. (15)
🟨🟨🟨🟨⬜️⬜️ It depends on the subject / grade level. (12)
⬜️⬜️⬜️⬜️⬜️⬜️ Other ... (0)
Start with reading: I'm a huge fan of NotebookLM! With that said, we need to be cognizant as educators about not circumventing the reading process. Students need to read and grapple with text first, on their own, then turn to NotebookLM. For older, more advanced students, perhaps the transition from text to AI happens quickly, but younger, more foundational learners will really need us to make them use their own brains before supplementing with NotebookLM. Educational value on BOTH sides of the learning experience to be sure. — M. Tornetto
It depends: Some subjects, like reading, language, and English, should require students to read. Subjects such as science, social studies, etc. lend themselves better toward using AI for research and understanding of content. — K. Hansen
Reading texts is superior: One needs to read for understanding. The knowledge gained from reading a summary is shallow and superficial at best. — A. Hewett
It depends: “While AI can assist, we must be careful not to let it replace the 'productive struggle' essential to learning. Once a foundation is built, however, AI becomes an invaluable thought partner for deeper exploration and complex skill-building, especially when a student's interests outpace traditional classroom resources.” — Rachel, Oklahoma
What would you like to read in AI for Admins?
What’s a topic you’d like to see covered here? Hit REPLY to this email and let me know.
Have you done anything you’d like to share with the AI for Admins community? Hit REPLY and let me know.
Would you like to write a guest post to support and equip AI for Admins readers? Hit REPLY and let me know.
🗳 Poll: Which one do you address first?
Instructions:
Please vote on this week’s poll. It just takes a click!
Optional: Explain your vote / provide context / add details in a comment afterward.
Optional: Include your name in your comment so I can credit you if I use your response. (I’ll try to pull names from email addresses. If you don’t want me to do that, please say so.)
Which concept from the OECD report would you address first?
🧠 AI in education: The “mirage of false mastery,” “pedagogical intent” and more

Some key takeaways from the OECD Digital Education Outlook 2026
As educational organizations continue to study the impact of AI in education, we keep adding to our growing knowledge base of best practices and research.
Today I’ll address a new report with some important takeaways. As always, with a report this long (247 pages!), I used NotebookLM to gather key takeaways.
Because it’s so long, I’m sure there are other important points … but below, I’ll share a few that caught my eye — and try to distill them down and keep them interesting.
ORGANIZATION: OECD (Organisation for Economic Co-operation and Development), based in Paris, with representatives from 38 democracies and market-based economies. Mission: To promote policies that improve the economic and social well-being of people worldwide, fostering prosperity, equality, and opportunity.
REPORT: The OECD Digital Education Outlook is the OECD’s flagship publication presenting its latest analysis of emerging digital technologies in education.
Concept 1: The Paradox of Empowerment

Source: OECD Digital Education Outlook; Infographic: NotebookLM
The “paradox of empowerment” is all about balancing the pros and the cons. (Now you have a new term for it!)
We hear the pros all the time — from the edtech companies, the tech giants, the talking heads. AI can save time, provide personalization, support instruction.
The big con addressed here? Teacher autonomy and skill atrophy.
Teacher autonomy: More and more decisions are made by AI instead of humans (by choice from leadership AND/OR by default when teachers don’t do their own human thinking).
Skill atrophy: When teachers don’t exercise their pedagogical muscles, those skills weaken — and their instruction suffers.
Concept 2: The Mirage of False Mastery

Source: OECD Digital Education Outlook; Infographic: NotebookLM
A study in the report highlights an interesting phenomenon …
Using AI can boost students’ immediate performance on an assignment, but it can harm their long-term retention (as measured on a test).
When students perform well on formative tasks (assignments) but struggle on summative tasks (tests), the report calls this “the mirage of false mastery.”
The Türkiye Math Experiment: This study involved 1,000 high school students practicing math. They were divided into three practice conditions:
Studying alone using traditional course notes and textbooks
Practicing with a standard, general-purpose LLM (“GPT Base”)
Practicing with an educational LLM (“GPT Tutor”) trained to support learning and withhold direct answers
The results? The GPT Base students scored 48% higher on practice exercises; the GPT Tutor students 127% higher.
But on a closed-book test, GPT Base students performed 17% worse. The GPT Tutor students performed about as well as the self-study group — a signal that their higher practice performance didn’t translate into retained learning.
The key factor the report points to? Cognitive offloading. (We’ve been talking about this for a while now.) When students let AI do the thinking, their thinking isn’t very good when they take a test. (Surprise!) This comes down to instructional design. We should use AI only in cases where it makes the most sense to support the learning we want to happen.
Side rant: What are research studies REALLY measuring?
Something about this particular study gets me all riled up …
There will be lots of education AI research. The questions we must ask: What are the conditions of the experiment? What are we measuring?
This OECD report makes broad “research-based” claims like:
For example, a field experiment in Türkiye found that while access to GPT-4 improved short-term performance – by 48% with the standard interface, and by 127% with a tutoring version designed to support learning – students performed 17% worse once access was removed, showing that generative AI can undermine learning unless explicitly designed to support skill acquisition.
I want details about the teaching, not just the technology!
So I looked up the journal article for this Türkiye Math Experiment. My question: What exactly were the students doing … and is it the kind of task where AI is actually really helpful?
Tell me about the teaching!
Here’s all I could find …
The study happened in the fall semester of the 2023-2024 academic year. (Fall 2023 … ChatGPT still wasn’t a year old.)
They conducted four 90-minute sessions for about 50 ninth-, tenth-, and eleventh-grade classes. In each class …
Teachers review a topic previously covered in class — “identical to a standard high school lecture.”
Students do assisted practice, solving questions on the topic covered in class. Teacher reviews correct answers with the whole class.
Students do unassisted evaluation — a closed-book, closed-laptop exam. Questions were very similar to those practiced during class.
During the practice session and unassisted evaluation, teachers didn’t interact with students.
The study goes on and on and on (as studies do) about how they built the GPT Base and the GPT Tutor that were supposed to support the students as they practiced.
They also went on and on and on about how they calculated the results and their interpretations of the data.
If you’ve taught human children in a real classroom, you might want to know the same thing as me here …
Tell me about the teaching!
How well did the teacher explain the concept?
Was the instruction well scaffolded?
Were students paying attention?
How strong is the relationship between the teacher and the students?
Has the teacher cultivated the “why?” of studying math — and are students bought in?
How much was really accomplished in four 90-minute sessions?
Were any of the days right before a break or during a holiday — or right before a big football game? (OK, probably not football in Türkiye, but you get my point.)
Also, tell me about the student practice …
How did students get support while they practiced? One part says it’s “assisted practice” while another says “Teachers did not interact with students during the second and third parts.”
When students practiced, how did teachers intervene when they didn’t understand?
What did the practice look like?
What was the feedback like?
Now, tell me about the AI that’s supporting the student. The paper shares the prompt for the GPT Tutor, and it raises questions, too …
“At first, you should provide the student with as little information as possible to help them solve the problem. If they still struggle, then you can provide them with more information.”
This is central to supporting a student as they learn a new concept! The prompt is incredibly vague, leaving lots of the important teaching work to an AI model that’s under-informed and under-trained for the exact kind of support the student needs.
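To make the contrast concrete, here’s a quick sketch of what a more explicit scaffolding design could look like. This is purely my own illustration — the level descriptions and the function below are NOT from the study’s actual GPT Tutor — but it shows the kind of specificity a tutor prompt could spell out instead of leaving “as little information as possible” to the model’s judgment:

```python
# Illustrative only — these scaffolding levels are my own invention,
# not the study's actual GPT Tutor prompt. The idea: escalate support
# explicitly instead of letting the model decide what "a little more
# information" means.
SCAFFOLD_LEVELS = [
    "Ask the student to restate the problem in their own words.",
    "Name the relevant concept or formula, without applying it.",
    "Work one analogous example, then ask the student to apply the same steps.",
    "Walk through the first step of THIS problem and ask the student to continue.",
]

def tutor_instruction(attempts_failed: int) -> str:
    """Pick a scaffolding instruction based on how many attempts have failed."""
    level = min(attempts_failed, len(SCAFFOLD_LEVELS) - 1)
    return SCAFFOLD_LEVELS[level]
```

Even a simple structure like this forces the designers (and researchers) to say what “support” actually means at each stage — which is exactly the teaching detail the paper leaves unspecified.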
(Let’s also remember this: The researchers built their “GPT Base” and “GPT Tutor” with GPT-4, the top AI model of the time. Today, ChatGPT 5.2 is the top model — several iterations better. We can’t use research with archaic AI models to inform instruction today and in the future.)
And yet, based on this study, the OECD report makes the sweeping claim that using AI can cause short-term gains but long-term declines.
All of this just further identifies a problem we’ve had with edtech for far too long …
Way too much time on the tech.
Way too little time on the pedagogy and learning.
It’s a bit like asking: If students sit in green chairs while learning instead of blue chairs, will they learn more in the green chairs?
I know, I know … I’m exaggerating a bit here (OK, a lot, but to make a point!) … but still, my main point stands.
If we’re going to learn when and if these technologies actually support learning …
the instructional design has to be at the center
the AI / technology must support specific goals of the instruction
OK, let’s do a few more quick points … and the next one finally starts to cover what I was just ranting about!
Concept 3: Pedagogical Intent
The report identifies that students can develop critical thinking, creativity, and collaboration with the use of generative AI as long as there’s “pedagogical intent.”
Use it as a cognitive scaffold instead of a replacement for student effort.
Some findings from the report related to this:
A UK study found that AI improved creative output and writing quality. Students who brainstormed with AI outperformed those working alone in both creativity and writing quality. One key point: Students did the writing themselves after getting inspiration from AI.
Interesting negative effect: Individual creativity improved, but collective creativity dropped. Taken together, the stories written by the independent group were more distinct from one another, while the AI-assisted group’s stories were more similar.
In one study, AI-generated counterarguments and rebuttals led students to higher levels of critical thinking awareness and collaborative tendency than a group using search engines.
These findings point back to that important concept — what are students doing? How is the learning designed?
(And also … how are we measuring learning and effectiveness?)
Concept 4: A “Tutor Copilot”

Source: OECD Digital Education Outlook; Infographic: NotebookLM
I thought this was a fascinating twist on the thing everyone’s saying about AI — that it’s a “24/7 tutor in students’ pockets.”
We don’t have to turn to AI to do all of the teaching.
Instead, we can use it as a “lifeline” (à la the game show Who Wants to Be a Millionaire?) when a teacher or a student struggles.
A program provided human tutors to students in underserved, low-income communities. They used an AI platform to support those human tutors when students struggled.
The platform was fine-tuned based on observations of how expert teachers provide feedback. Instead of interacting directly with the student, the AI interacts exclusively with the human tutor.
Tutors without the AI tended to rely on passive, solution-focused strategies, such as simply giving the student the direct answer. Tutors with the AI support were successfully nudged away from direct answers and toward expert scaffolding strategies, such as asking guiding questions and prompting students to explain their reasoning.
The “augmentation paradigm”: Another new term!
The human teacher takes in all of the context from learning and provides human judgment.
The AI processes data quickly (its strength!) and generates options.
The teacher applies the AI suggestions that make sense … in essence, “augmenting” their own human teaching abilities.
Concept 5: Collaborative Learning with AI

Source: OECD Digital Education Outlook; Infographic: NotebookLM
The report identified four helpful roles that AI can serve in collaborative learning:
Information Hub (Repository of Information): AI can act as a searchable repository that groups query to obtain the information needed to solve joint tasks, functioning similarly to a highly advanced web search.
Personalized Materials to Support Group Work: AI can monitor a group’s progress or results to generate tailored learning materials that deepen the collaborative process, or it can perform specific creative tasks on behalf of the group.
Teacher Feedback (Teacher or Facilitator): AI can be positioned “outside” the group to monitor interactions and provide scaffolding, feedback, and orchestration, much like a human teacher.
Peer Contributor (Artificial Group Member): Instead of guiding from the outside, AI can be integrated directly into the group as an active, participating peer.
Read the report
If you haven’t checked out the report, you can get the full text here.
There’s a quick summary with images and highlights here.
Here’s my NotebookLM notebook with the OECD report AND the Türkiye Math Experiment study. You can use the chat to answer questions — and you can view the infographics I’ve generated.
I hope you enjoy these resources — and I hope they support you in your work!
Please always feel free to share what’s working for you — or how we can improve this community.
Matt Miller
Host, AI for Admins
Educator, Author, Speaker, Podcaster
[email protected]


