🤖 How to combat pedagogical bias in AI
Steps we can take to keep learning moving forward

In today’s newsletter, we’re going to talk about pedagogical biases in teacher AI lesson planning apps. In fact, you can jump right in and …
But first, I have to tell you some exciting news …
I have a cover for my new AI literacy book: AI Literacy for Any Class!

The new “Ditch That Textbook yellow” cover!
UPDATE: I crossed the 20,000 word mark in my manuscript last week during fall break. My goal has been 25,000 words. I have four chapters left to write — which I plan to keep on the shorter side!
BACK COVER TEXT: The designer has created the back cover as well. Here’s the summary I asked him to use:
Artificial intelligence is already here — in our world, our classrooms, and our students’ pockets.
The question is: are they ready for it?
AI Literacy for Any Class is a practical guide for K–12 teachers who want to prepare students for an AI-shaped future without adding more to their plates. Author and educator Matt Miller shows how to weave AI literacy into everyday teaching through small conversations, “by-the-way” lessons, and classroom-ready examples for every grade and subject.
Equip your students — and strengthen your teaching — for a world where curiosity and critical thinking matter more than ever.
TIMELINE: I’m hoping to finish up the manuscript by the end of the month. If editing and design happen in November, I’m hopeful to have it published in December — just in time for the Ditch Summit! (More on that below.)
In this week’s newsletter:
📺 Ditch Summit, a free virtual conference for educators
📚 New AI resources this week
📢 Your voice: Why AI feedback is so bad
🗳 Poll: Identifying pedagogical biases
⚖️ About AI “pedagogical biases” and how to avoid them
📺 Ditch Summit, a free virtual conference for educators

Register for the Ditch Summit at DitchSummit.com
📣 Announcing: The Ditch Summit speaker lineup!
It's the 10th year of this free online conference for teachers.
Dates: Dec. 15, 2025, to Jan. 11, 2026.
FREE PD certificates, too!
More details coming soon …
📚 New AI resources this week
1️⃣ Microsoft announces new AI-powered teaching and learning (via Microsoft) — Includes Copilot at no additional cost for educators and students, Study & Learn (a Microsoft AI Agent), and more.
2️⃣ An Opinionated Guide to Using AI Right Now (via One Useful Thing) — Whenever Ethan Mollick updates this guide, it’s a must-read. It lays out his current preferences among the AI tools available today.
3️⃣ How People Use ChatGPT — This paper is an interesting look at how people are really using LLMs. Spoiler alert: Writing dominates work-related tasks. (Shocker.)
📢 Your voice: Why AI feedback is so bad
Last week’s poll: Why is AI writing feedback so bad?
🟨⬜️⬜️⬜️⬜️⬜️ The AI models aren't good enough (3)
🟨⬜️⬜️⬜️⬜️⬜️ The apps don't seem to be developed well (3)
🟨🟨🟨🟨⬜️⬜️ We aren't providing good prompts (7)
🟨🟨🟨🟨⬜️⬜️ Pedagogical bias is baked in (6)
🟩🟩🟩🟩🟩🟩 Something else ... (9)
I usually include snippets of longer responses, but this one by Tia Miller was so good I had to include the whole thing!
It's a little of all those things, but something else, too. I think it has to do with the human aspect.
A human reading the writing of another human can have human reactions and emotions. And human writing is messy and varied and unpredictable. What makes good writing good is not so easy to pin down. A computer that merely puts words in a row because that seems like the most likely answer cannot adequately react to the writing of a real, messy, varied, unpredictable human.
I also find that AI tends towards the excessively cheerful and positive. I once tried an AI tool that promised to learn my style of feedback. I gave feedback to a few papers, then it suggested feedback for the rest of them. It never did learn my style. It was way too effusive for one thing: "That is a terrific thesis!" When I'm thinking "it's an o.k. thesis for a first draft, but you can do better."
Me believing the writer can do better is something AI cannot do. I have access to daily contact with this human being. I see their successes, their struggles. I learn their personality. I hear their writing voice. I know what they are or are not capable of. I can take all of these cues and confidently say, "you can do better." AI can't do that.
Something else: I find that, too often, AI wants to please you and praise you (so you continue to use it), but that often makes it not really critical enough. — Jessica Nelson
Something else: Prompting is of course a major part of this, but a bigger issue is that most AI Models are designed to make the user happy. Yes, it gives feedback, but it tends to be either vague, out of context (prompt issue) or cookie cutter (development & prompt). Unfortunately though, the overall message is "it's great" because it's designed to do that, not because it is... from a fundamental standpoint, it's SO BAD because it removes the human from the loop! Teacher feedback on writing is (should be) a relationship builder between teacher and student around their growth. — A McCrumb
🗳 Poll: Identifying pedagogical biases
Instructions:
Please vote on this week’s poll. It just takes a click!
Optional: Explain your vote / provide context / add details in a comment afterward.
Optional: Include your name in your comment so I can credit you if I use your response. (I’ll try to pull names from email addresses. If you don’t want me to do that, please say so.)
Which pedagogical biases have you found in AI lesson plan apps?
⚖️ About AI “pedagogical biases” and how to avoid them

AI’s pedagogical biases think this is what class looks like. (Image: Gemini)
Teacher-centric classrooms have dominated the education landscape over the last century.
Constructivism and technology integration have helped push our thinking — and our practice — over the last 50 years to challenge the factory model. But that progress has only gone so far.
What’s the result?
Lots and lots and lots of data — all across the internet and in books and other information sources — talking about the stereotypes of the teacher-centric classroom.
The impact on AI models has been significant.
And we’re seeing it reflected in our AI lesson plan and teacher resource apps.
A report published in April — Pedagogical Biases in AI-Powered Educational Tools: The Case of Lesson Plan Generators — studied popular AI lesson planning tools and identified the “pedagogical biases” they carry.
Per the report: “They also carry pedagogical biases, which encompass implicit or explicit beliefs, assumptions, and preferences regarding how teaching and learning should be organized, delivered, and assessed.”
In short: the beliefs about education our AI assistants never state outright — but that show up in their results.
Those pedagogical biases include:
teacher-centered classrooms
limited opportunities for student choice
limited student goal-setting
limited meaningful dialogue
Today, I wanted to dive deeper into the findings.
Chat with NotebookLM about this report
I used NotebookLM to create some summaries of the report’s findings. You’ll find those NotebookLM summaries below in italics.
I’ll add my own observations and thoughts. You’ll find those in non-italicized regular text. (I also wrote the introduction myself.)
Also, you can chat with NotebookLM yourself to ask your own questions about this report! I created a share link you can use to open the notebook.
In addition to asking it questions, you can also find the following resources I created:
an Audio Overview (aka podcast)
a Video Overview (slides + voiceover)
a mind map of the report
Pedagogical biases identified in the paper
1. Bias Against Student Agency
The study found that AI-generated lesson plans scored poorly in fostering student agency. Student agency is defined as the capacity of students to set goals, reflect on their learning, and take purposeful action toward their growth and development, leading them to become active contributors to knowledge construction.
The bias against student agency manifested in the following ways:
Promotion of Teacher-Centered Classrooms: The lesson plans predominantly favored teacher-centered settings where educators maintained control over most teaching and learning activities.
Limited Student Choice and Goal-Setting: The plans offered limited opportunities for students to set goals, make choices, or take meaningful actions in their learning.
Absence of Key Agency Constructs: Important constructs of agency, such as goal-directed behavior, initiative, and shared authority, were largely absent in the generated content.
Restrictive Directives: Examples included specific directives like "Assign the worksheet or a set of problems and ask students to work quietly" or instructing that "Students will be divided into small groups".
Focus on Surface-Level Agency: While certain aspects of agency, such as interaction, opportunities to share ideas, and problem-solving, were sometimes represented, the overall framing remained restrictive.
Matt’s reflection: There’s definitely still a place for teacher-led instruction and discussion. But especially going forward — in a world where we have the world’s information at our fingertips and can get explanations from powerful AI models — we can’t and shouldn’t depend on this one-size-fits-all approach.
2. Bias Against Classroom Dialogue
The study also revealed a significant limitation in the AI-generated lesson plans regarding the emphasis on productive classroom dialogue. Dialogue, in contrast to monologic teaching, refers to structured classroom talk and interaction used to promote learning, engagement, and shared understanding, which is crucial for higher-order thinking.
The bias against dialogue manifested through the prevalence of restrictive, teacher-centric discourse structures:
Dominance of Rote Instruction: The most common type of talk delineated in the analysis was rote, characterized by structured repetition of key facts and concepts.
Emphasis on Instruction/Exposition: Following rote learning, instruction/exposition was highly prevalent, where educators maintain a directive role, conveying knowledge and providing explanations to the class, groups, or individuals.
Limited Discussion Scope: Examples of discussion were limited, typically involving students working in pairs or small groups only to answer teacher-led questions, with minimal opportunities for student-generated inquiries or extended, sustained conversations.
Absence of True Dialogue: The essential dialogue phase, necessary for deepening understanding and fostering reflection, was largely absent. For instance, lesson plans often halted group discussions without providing the necessary scaffolding or guiding questions.
Matt’s reflection: This is the good stuff! I’ve seen arguments in certain teacher circles against discussion, but it’s a great place for students to sort through ideas, to see what others think, to experiment with new ideas. And in a world where students communicate so much with their thumbs and tiny screens in their pockets, any opportunity for them to develop authentic human communication skills is really important.
3. Other General Pedagogical Biases
More broadly, the sources note that AI educational technologies tend to carry biases that reflect dominant societal norms and institutional priorities. These general pedagogical biases include:
Reinforcement of Outdated Practices: AI systems may exhibit biases toward outdated pedagogical practices that do not align with contemporary educational research.
Favoring Efficiency over Pedagogical Diversity: Technology often prioritizes efficiency, favoring tools that automate rote tasks rather than fostering creativity or deviation from dominant educational norms.
Cognitive-Centric Framing: Traditional educational technologies have often adopted cognitive-centric frameworks that emphasize cognitive outcomes and prioritize information delivery over critical thinking.
Repetitive Patterns: AI-generated lesson plans frequently follow repetitive, teacher-centered patterns.
What you can do to battle pedagogical bias
Matt’s reflection: Well, first of all, I hate that we should even have to battle against it. These traditional views of education — teacher-centric, with little student agency — are still deeply embedded in schools. We’re already battling those biases face-to-face every day.
But now, when we use AI lesson plan generators, those biases are reinforced even further. It would be easy for a teacher to say, “This is how I was taught as a student. This is how I see teachers teaching in my school. And these AI teacher apps say this is how I should teach. Why would I believe otherwise?”
The report has suggestions for AI app developers. Based on conversations I’ve had with people who work for these K-12 AI teacher app companies, I know there’s a lot of belief in more modern, progressive approaches to education.
It’s hard when all of the data the AI models have been trained on tries to tell us otherwise. These app developers — some more than others — are trying to add extra layers of training on solid, sound pedagogy. Some are designing the user interface to explicitly include more modern teaching approaches.
I was glad to see that the authors of the report have suggestions for us on what we can do. I used NotebookLM to summarize them below (in italics) …
1. Mindful and Critical Engagement: Teachers must approach AI lesson plan generators with awareness of their limitations, recognizing that the generated content often follows repetitive, teacher-centered patterns and may exhibit biases toward outdated pedagogical practices.
2. Discerning Pedagogical Biases: Educators should be supported in actively discerning pedagogical biases—such as the lack of student agency (limited student choice, goal-setting, or shared authority) and the lack of productive classroom dialogue (dominance of rote instruction or exposition)—in the AI outputs.
3. Applying Intentional Prompt Engineering: Drawing on the successful experimental strategies, teachers can integrate contemporary pedagogical values directly into their prompts to guide the AI output:
Promoting Student Agency: Prompt the AI to include elements that foster agency, such as promoting self and peer assessment, enabling students to take on diverse roles, and offering more resources and choices (e.g., selection of presentation formats or research topics).
Enhancing Classroom Dialogue: Prompt the AI to integrate a dedicated dialogue phase that prioritizes knowledge building through collaboration. This includes requesting that teachers pose diverse questions to activate prior knowledge, encourage peer idea-sharing, and prompt critical thinking beyond textbook answers.
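If you want to bake those two suggestions into a reusable prompt instead of retyping them every time, here’s a minimal sketch in Python. Everything here — the function name, the wording of the guardrails — is my own illustration, not language from the report. You’d paste the resulting prompt into whatever AI lesson-planning tool you use.

```python
# A sketch of "intentional prompt engineering": wrap a basic lesson-plan
# request with explicit student-agency and dialogue guardrails before
# handing it to an AI tool. The guidance text below is illustrative.

AGENCY_GUIDANCE = (
    "Include opportunities for student choice (e.g., selecting presentation "
    "formats or research topics), student goal-setting, and self/peer assessment."
)

DIALOGUE_GUIDANCE = (
    "Include a dedicated dialogue phase: diverse teacher questions that "
    "activate prior knowledge, structured peer idea-sharing, and guiding "
    "questions that push thinking beyond textbook answers."
)

def build_lesson_prompt(topic: str, grade: str) -> str:
    """Combine a basic request with explicit agency and dialogue guidance."""
    return (
        f"Create a lesson plan on {topic} for {grade}.\n"
        f"Student agency: {AGENCY_GUIDANCE}\n"
        f"Classroom dialogue: {DIALOGUE_GUIDANCE}"
    )

# Example: generate a prompt for a 7th-grade science lesson.
print(build_lesson_prompt("photosynthesis", "7th grade"))
```

The design choice here is the point: the pedagogical values live in one place, so every prompt you send carries them by default instead of depending on you remembering to ask.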
Awareness and action are key
Just knowing that these pedagogical biases exist is a good start.
Knowing what to ask for is also important — and a call for us to go back to the basics and ensure that we understand best practices for pedagogy and instruction.
It’s also important for us to keep human teachers in the loop. We have to ask: “Why might teachers mindlessly outsource lesson planning to AI in the first place?”
If it’s a time issue, let’s find whatever ways we can to give teachers the time that they need.
If it’s an understanding issue, let’s focus professional development on sound teaching practices and pedagogy so they know what makes for great learning.
It’s important to emphasize the proper role of AI lesson planning tools: a support, a guide, a suggester of ideas. In the end, it’s up to the teacher to craft the learning experience that students need.
I hope you enjoy these resources — and I hope they support you in your work!
Please always feel free to share what’s working for you — or how we can improve this community.
Matt Miller
Host, AI for Admins
Educator, Author, Speaker, Podcaster
[email protected]