🤖 Why is AI writing feedback so bad?
Plus: Cover of my new book and two events to check out

My new book has a face!
My designer took the cover brief I provided him (some details and a general summary of what the book is about) and designed a first draft of the cover!
I love it for several reasons — and also know that it won’t work.
See what you think (paired with my AI for Educators book cover) …

New (draft) book cover on left … first book cover on right.
I LOVE how alike they are … like two parts of an overall series. I love how he incorporated the design and style elements from my AI branding.
Here’s the problem …
People will look at the two covers and ask: What’s the difference between them?
Or they’ll see the new book and think: Oh, I read that book already.
(Side note: If the yellow colors look different, it’s just the rendering of the image. The color schemes will match.)
I’m thinking about asking for a YELLOW background for the new one — that same yellow/gold color I use in, well … just about everything. 😂
Hit reply and tell me what you think! (Also: What about that subtitle?)
Also … I’d love your feedback on feedback! In today’s newsletter, I share how I’m generally disappointed in AI edtech tools that provide writing feedback. I’d love to hear about your experience … why you think that is … and any solutions.
In this week’s newsletter:
📺 FREE EVENT: Curriculum x AI Summit 2025
📚 New AI resources this week
📢 Your voice: Lack of teacher oversight in Gemini
✍️ Update on my new book, AI Literacy in Any Class
🗳 Poll: Why is AI writing feedback so bad?
✍️ Why is AI writing feedback so bad?
📺 FREE EVENT: Curriculum x AI Summit 2025

This message is sponsored by Toddle.
I’m excited to share that I am speaking at the Curriculum x AI Summit 2025 – a free, virtual event designed for public school educators.
Attendees will learn how AI can help us strengthen assessment, support differentiation, and build more connected systems across classrooms, staff, and communities.
It’s happening October 27th through 30th, every day between 1:00 and 3:30 PM ET. You can see the full agenda and register here.
Key themes:
Build curriculum and assessments that foster creativity, connection, and critical thinking.
Align district systems with instructional priorities for safe and effective AI use.
Create coherent systems that support teachers, engage communities, and expand what works.
A PD certificate will be provided for attendance, and session recordings will be shared following the event.
📚 New AI resources this week
1️⃣ How to create and use Google Gemini Gems in the classroom (via Ditch That Textbook) — Custom Gems for teachers and students, and how they fit into Google Classroom.
2️⃣ FREE EVENT: The Everway EDU Summit — A virtual gathering connecting education leaders and dedicated educators to strategize, innovate, and lead the charge in advancing inclusive learning experiences for every student, everywhere.
3️⃣ Pupils fear AI is eroding their ability to study, research finds (via The Guardian) — One in four students say AI ‘makes it too easy’ for them to find answers
📢 Your voice: Lack of teacher oversight in Gemini
Last week’s poll: How do you feel about the lack of teacher oversight in students' Google Gemini?
🟩🟩🟩🟩🟩🟩 Teachers should have easy access to everything. (44)
🟨🟨🟨🟨⬜️⬜️ Gemini should at least provide teachers some insights. (34)
⬜️⬜️⬜️⬜️⬜️⬜️ That's the job of the content filter, not Gemini. (0)
⬜️⬜️⬜️⬜️⬜️⬜️ It's fine. We don't need to be surveilling students anyway. (0)
⬜️⬜️⬜️⬜️⬜️⬜️ Other ... (3)
Teachers should have easy access to everything: This is why I prefer a resource like MagicSchool where teachers have full oversight over student-AI chats. TikTok teaches students how to cheat with AI, not how to use it as a thinking partner, and teachers aren't equipped to teach AI literacy (at least not until they read your book:) — L. Crunk
Matt’s response: Exactly … we have a culture to shift! Social media is quietly whispering to students what AI means in education. If we want to use it for good, we have work to do to change the message.
Teachers should have easy access to everything: If Gemini will not show individual student chats, it needs to at least provide teachers with general insights, such as what learning students struggled with, to help educators determine next steps. Until Gemini does this, many educators will likely continue to use other AI tools like MagicSchool, Brisk Boost, or SchoolAI. — J. Boll
Matt’s response: Yep, I believe that’s called data-driven instruction. It seems silly to me for the app to collect data and insights that never get shared with the teacher to guide instruction.
Gemini should at least provide teachers some insights: Realistically, most teachers are not going to spend much time looking at any of the data. As much as I would, what's more important is getting insights into what students are using Gemini for and what that data could mean for how my classroom instruction looks. — A. MacLeod
Matt’s response: This is a good point. We feel that we should have access to all of the chat transcripts, but how often do we actually read them? Some overall broad insights would be a good actionable step. Teachers are already strapped for time as it is.
✍️ Update on my new book, AI Literacy in Any Class
I’m on fall break this week. I’m blocking off big chunks of time to keep working on the book!
Word count: 12,378 words
Goal: 20,000-25,000 words
Chapter I’m working on now: Analyze and critique
Summary: We can analyze and critique AI-generated content to learn about our content/curriculum in our classes — but also to improve our AI literacy.
Premise of the book: Our students need AI literacy to prepare them for the future. But we can’t just say: “Oh, that’s the tech teacher’s job.” We can integrate AI literacy lessons into any class — and it can actually support (and strengthen!) learning on the lesson of the day. This book provides concrete examples of how to build AI literacy AND strengthen instruction at the same time.
Publication goal: December 2025
🗳 Poll: Why is AI writing feedback so bad?
Instructions:
Please vote on this week’s poll. It just takes a click!
Optional: Explain your vote / provide context / add details in a comment afterward.
Optional: Include your name in your comment so I can credit you if I use your response. (I’ll try to pull names from email addresses. If you don’t want me to do that, please say so.)
Why is AI writing feedback so bad?
✍️ Why is AI writing feedback so bad?

Students and teachers may find AI feedback less than helpful. (Image: ChatGPT)
When ChatGPT was released and we started to see what AI large language models were really capable of, this was one of the promises for education …
Writing feedback.
AI that could help us grade papers. Provide feedback on essays. Give feedback directly to the students so they could use it immediately to improve their writing.
In my experience, today’s AI writing feedback has been, well …
Underwhelming.
I’ve found this to be the case with K-12 education-specific apps AND the big frontier LLMs like ChatGPT.
They’re OK. Good in spots.
But by and large, I just haven’t found anything that provides high-quality feedback.
I’ve been thinking about it, and I think there are a few reasons why.
(Would love to hear your experience/opinion … hit reply to this email or respond in the poll above!)
1. We don’t provide them enough context.
This isn’t everybody, and this isn’t the full answer.
But I’ve watched enough educators work with edtech products over the years to see how some will operate.
They’ll plug in some student work without a rubric, without details on what they’re looking for … basically uploading the document and hoping the edtech app will read their minds and provide the kind of feedback they want.
The idea that AI can save you time and do things for you can be true — if it has enough training and background information to do the job.
When we take the time to try and test it … then refine our inputs … we can get better results.
Again, this isn’t everybody, but I think it’s a good place to start.
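To make that concrete, here’s a rough sketch of what “enough context” can look like when you hand work to an AI model directly. It’s Python, using the OpenAI client as one example; the rubric, grade level, file name, and model name are placeholders I’ve made up, not a recipe from any specific edtech tool.

```python
# A rough sketch of a context-rich feedback request.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. The rubric, grade level,
# file name, and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

rubric = """\
Thesis: clear, arguable claim (4 pts)
Evidence: at least two cited sources (4 pts)
Organization: logical paragraph structure (2 pts)"""

with open("essay.txt") as f:
    student_essay = f.read()

prompt = f"""You are giving feedback on a 9th-grade persuasive essay.
Grade it against this rubric:
{rubric}

Give two or three specific, encouraging suggestions the student can
act on in a revision. Do not rewrite the essay for them.

Essay:
{student_essay}"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The code isn’t the point. The point is everything packed into that prompt: the rubric, the grade level, and exactly what kind of feedback you want back.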
2. The AI models provide the same answers over and over.
This one isn’t our fault, but the developers need to work on this — or maybe we need to incorporate it into our instructions.
I worked with one edtech tool (that shall remain nameless) that provided student writing feedback. Every time I would use it — or show it in a presentation — the feedback would give me the same suggestion …
“You need to provide more examples.”
Was it wrong? Not entirely. But examples aren’t the magic elixir that transforms poor writing into good writing.
Our classrooms aren’t an edtech demo.
In a demo, they can show you the coolest feature once and it looks impressive. But when you use it over and over again — dozens of times in a grading period, even — it needs to be adaptive and responsive to our needs.
When it gets stuck in a rut and gets repetitive, the students notice — and suddenly, it isn’t helpful anymore.
3. The pedagogical bias is baked into the product.
AI models operate based on the data they’re trained on.
When you scrape everything available on the web, is all of that data going to support research-based pedagogical practices?
Nope.
AI models train on lots of data that reinforce stereotypical views of education — lectures, worksheets, teacher-centric, etc. The “research-based pedagogical practices” crowd is pretty small on the internet, and the results show in the responses that AI models provide — especially to very general prompts.
In the paper — Pedagogical Biases in AI-Powered Educational Tools: The Case of Lesson Plan Generators — the authors lay out a claim:
Through analysis of 90 lesson plans from commercial lesson plan generators, we found that AI-generated content predominantly promotes teacher-centered classrooms with limited opportunities for student choice, goal-setting, and meaningful dialogue.
It’s like the old computer programming saying: garbage in, garbage out.
If we aren’t getting clear with our AI tools about what we want — what we aspire to — they won’t step up and suggest it.
(Spoiler alert: The authors suggest that better prompting can return better results. I’m planning on writing about this next week!)
4. We ask AI to play too big a role in providing feedback.
There’s an ad for an edtech product that has always rubbed me the wrong way. (Good product overall … but I don’t like this messaging.)
It says: “Hate grading papers? <Product name> can help.”
I think this underlines the bigger issue. Some teachers expect to be able to outsource writing feedback to AI.
Reality: That’s taking the human teacher out of the mix … and we still want (and need!) human teachers now (maybe more than ever).
Plus: When the human teacher isn’t reading and providing feedback, how do they know that the student is learning and progressing? We lose touch with the student’s work and where the student can grow.
So … what can we do?
Here are a few suggestions to improve the situation …
Solution #1: Inject your values and instructions wherever you can.
With AI feedback tools (and, really, any AI edtech apps), I’m always looking for the “customize instructions” box. Look for anyplace that offers a paragraph-length text field where you can tell it specifically what to do.
This is our place to give text directly to the AI model! Even if the app is asking for something specific in that text field, sometimes I’ll pack it full of extra details that I want included. For example: “Prioritize feedback on thesis and evidence. Limit suggestions to three. Use encouraging, student-friendly language.” (Think of it like sending a package full of directions and slipping in a little note with a secret message for the AI model.)
Solution #2: Use AI as a first line of feedback.
This goes back to “We ask AI to play too big a role in providing feedback” above.
Consider using AI for part of your feedback — and then filling in the gaps with your own human judgment.
This way, AI is doing what AI can do well. But as the teacher, you’re making sure that the feedback meets a certain standard.
It saves you some time and effort as a human teacher, but it still maintains your human judgment. (And saving time is probably the reason we asked AI to help provide feedback in the first place!)
Solution #3: Work in smaller chunks.
Let’s start with a little background on AI: context windows.
The context window is how much information an AI model can work with at once: your prompt, anything you attach, and its response. Context windows have grown a ton since the big release of ChatGPT almost three years ago. But they still have limitations.
Example: I’m working on my new book, AI Literacy in Any Class. When I finish my manuscript, I’m going to ask an AI assistant to help with some proofreading and copy editing.
What I won’t do: Give it the whole document and tell it to get to work.
What I will do: Give it a section of a chapter at a time.
Why?: AI responses are only so long … and the context windows are only so big.
Give it a big chunk of text? Its response won’t be very comprehensive. It’s trying to spread its response over that entire long document, so the feedback will be an inch deep and a mile wide.
Instead, if you’re able to go a section at a time, you can get more detailed responses.
(Although this takes more time and effort as well … so finding a happy balance will be key!)
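If you’re curious what that looks like under the hood, here’s a minimal sketch of a chunk-at-a-time workflow: split the manuscript into sections, then request feedback one section at a time. Same assumptions as the sketch above (OpenAI Python client), and the file name, section delimiter, model name, and prompt wording are all placeholders.

```python
# A minimal chunk-at-a-time feedback loop. Assumes the OpenAI Python
# client and an OPENAI_API_KEY environment variable; the file name,
# delimiter, and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

with open("manuscript.txt") as f:
    manuscript = f.read()

# Split the manuscript into sections -- here, on two consecutive
# blank lines between sections.
sections = [s.strip() for s in manuscript.split("\n\n\n") if s.strip()]

for i, section in enumerate(sections, start=1):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model
        messages=[{
            "role": "user",
            "content": "Proofread and copy edit this book section. "
                       "List specific fixes rather than rewriting it.\n\n"
                       + section,
        }],
    )
    print(f"--- Feedback on section {i} ---")
    print(response.choices[0].message.content)
```

Each request stays small, so each response can go deep on one section instead of skimming the whole manuscript.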
Let’s discuss this!
What are the other reasons why AI writing feedback isn’t great?
(Or do you have a different experience — that AI feedback has been pretty good?)
What can we do to make it better?
Hit reply to this email — or respond in the poll question above. I’ll share responses in next week’s newsletter!
I hope you enjoy these resources — and I hope they support you in your work!
Please always feel free to share what’s working for you — or how we can improve this community.
Matt Miller
Host, AI for Admins
Educator, Author, Speaker, Podcaster
[email protected]