🤖 AI literacies, part 1: Awareness
Helping students (and teachers) understand AI

We’re knee-deep in education conference season.
I just got back from the FETC Conference in Orlando, Florida (see my slides and some presentation videos here). Next week is the TCEA Conference in Austin, Texas.
There’s a ton of talk about AI at these conferences, and a big topic is AI literacies.
How do we help students (and teachers) understand AI, know how to use it effectively, and be responsible with it?
I’ve been working with Holly Clark, the author of The AI-Infused Classroom, to develop an AI literacies framework. We call it the ACE Framework — Awareness, Critique, Exploration. (I shared the framework in this newsletter two weeks ago.)

The ACE Framework by Matt Miller and Holly Clark
For the next three weeks, I’d like to unpack the ACE Framework and share some key concepts — and how they can be implemented in the classroom.
This is part 1: AI Awareness
(PS: If this resonates with you and you’re interested in an in-person or virtual ACE Framework workshop for your school/district/event, email my business manager Jeff Miller at [email protected] to get details, availability and a quote.)
Let’s start it off this week with awareness …
In this week’s newsletter:
🤔 What makes you successful? Share and win $200 Amazon gift card!
📺 Free Curipod webinar TODAY!
🗳 Poll: What do we need to understand about AI the most?
♠️ ACE Framework part 1: Awareness of AI
🛠 Provide our AI Teacher Toolkit to teachers for FREE
📚 New AI resources this week
🤔 What makes you successful? Share and win $200 Amazon gift card!
Take a quick survey and let Vivi know about the structures and supports that help you thrive in the classroom.
Your feedback will guide school leaders in creating environments where teachers can succeed. Plus, Vivi will share findings from the survey with the Ditch That Textbook community.
As a thank-you, you’ll be entered to win 1 of 5 $200 Amazon gift cards! Winners will be notified via email on February 12th—don’t miss out!
Note: Your responses will remain anonymous and contact information won’t be used for marketing purposes.
📺 Free Curipod webinar TODAY!
Curipod (curipod.com) will create custom, interactive presentation slides for you in seconds.
That’s impressive enough in itself.
But there’s a lot more to Curipod — and the interactive digital lessons you can create for your students are almost endless. (Plus they’re quick to create!)
In this webinar TODAY, I’ll join Curipod co-founder Jens Seip to tell you all about it — and how much you’re able to do on the free plan! (Spoiler alert: It’s a lot.)
📆 WHEN: TODAY at 6pm US Eastern / 3pm US Pacific
If you can’t make it live, go ahead and register so you can get the replay link and watch whenever you want!
🗳 Poll: What do we need to understand about AI the most?
Our previous question: What AI/tech issue concerns you the most? (And why?)
🟨⬜️⬜️⬜️⬜️⬜️ Wearing AI-enabled devices in class (9)
🟨⬜️⬜️⬜️⬜️⬜️ Looking up answers with AI large language models (6)
🟩🟩🟩🟩🟩🟩 Outsourcing the thinking to an AI model (29)
🟨🟨🟨⬜️⬜️⬜️ Creepy relationships with AI chatbots (16)
🟨🟨🟨🟨🟨⬜️ Being poorly equipped with skills that really matter (27)
Some of your responses:
Voted “Creepy relationships with AI chatbots”: The developing brain is not ready to evaluate the reliability of information or advice nor the possible consequences of using an LLM AI as a "friend" or source of guidance.
Voted “Outsourcing the thinking to an AI model”: I want teachers to use AI, show students how to use it, and be sure they are encouraging the thinking process of the individual students. Each student has something in their brain that can influence the world.
Voted “Looking up answers with AI LLMs”: I am concerned that our students do not like discomfort of not knowing so they avoid having to think deeply by instantly resorting to an AI for answers.
Voted “Being poorly equipped with skills that really matter”: My assumption with this issue is that it's the students who are poorly equipped with skills such as critical thinking, collaboration, relationships, communication, empathy, learning from failure, inference, etc. These are skills that really matter, and they will be needed even more as our lives are inundated with AI and AI technologies.
🗳 This week’s poll
This poll question connects very closely with the ACE Framework post below!
Instructions:
Please vote on this week’s poll. It just takes a click!
Optional: Explain your vote / provide context / add details in a comment afterward.
Optional: Include your name in your comment so I can credit you if I use your response.
What do students/teachers need to understand about AI the most?
♠️ ACE Framework part 1: Awareness of AI

The “Awareness” section of the ACE Framework
Artificial intelligence is top-of-mind for many of us now — especially in the last couple of years since ChatGPT was released to the world.
But that doesn’t mean that AI is anything new.
And it doesn’t mean that AI hasn’t already been around.
It was already having an impact on us in a variety of ways pre-ChatGPT …
We were already asking Siri and Alexa to do all sorts of stuff for us
Social media algorithms were determining what we saw (and didn’t)
GPS map apps gave us turn-by-turn instructions using AI
Netflix, Amazon and others used AI to make recommendations based on previous activity
Google and other search engines used AI to provide more relevant search results
AI tools screened resumes and analyzed candidates
Fraud and cybersecurity tools worked in the background to detect threats and unusual activity
The bottom line: AI is all around us. And it has been … even before ChatGPT.
The better we understand it, the better we’ll know how to act.
That’s the key to the “Awareness” section of the ACE Framework — and it’s not just for students. It’s just as relevant to teachers, admins, and others in the school setting.
Here are a few ways we can be more aware of AI and better understand it — and how it translates to schools:
1. AI learns about the world and humans through its dataset.
What to know: When AI developers create an AI model, they feed it tons and tons of data. ChatGPT’s original model was reportedly trained on about 300 billion words, drawn from sources like publicly available internet sites, academic journals, online books, Wikipedia, and Reddit.
When AI trains on data, it breaks the data down into tiny bits called tokens and identifies how they all connect. It works a lot like the brain’s architecture: nodes and the connections between them, with connections strengthening the more often they’re used.
So, the data that AI is trained on is just the start. The model doesn’t store and retrieve that data. It learns patterns from the data and uses those patterns when it interacts with us.
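To picture what “breaking data into tokens and identifying how they connect” might look like, here’s a toy sketch in Python. It’s a deliberate oversimplification: real LLMs use subword tokenizers (so a word like “unbelievable” may become several pieces) and learn connections across billions of examples, not simple word-pair counts.

```python
import re
from collections import Counter

def tokenize(text):
    """Toy tokenizer: lowercase words and punctuation.
    Real LLMs use subword tokenizers, not whole words."""
    return re.findall(r"[a-z']+|[.,!?]", text.lower())

def pair_counts(tokens):
    """Count how often each token follows another -- a crude stand-in
    for the statistical connections a model learns during training."""
    return Counter(zip(tokens, tokens[1:]))

corpus = "The cat sat on the mat. The cat sat on the rug."
tokens = tokenize(corpus)
counts = pair_counts(tokens)
print(counts[("the", "cat")])  # 'cat' follows 'the' twice in this tiny corpus
```

With a big enough pile of text, counts like these start to capture real structure in language — which is the intuition behind training on massive datasets.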
How it shows up in schools: Students don’t always know the difference between an AI large language model like ChatGPT and a search engine like Google.
It’s understandable. Google and ChatGPT don’t look much different. When you open the site or app, it’s a text box where you can type what you want.
But how they handle what you type? That’s very different …
Google indexes pages on the web and retrieves them, using an algorithm to deliver what it thinks are the most relevant pages (most likely to be what you want)
ChatGPT takes all of its training and makes its best statistical guess of the right answer
So, when students say “I looked it up online,” it matters where they looked it up — and how they use what they find.
Questions to ask:
What tool do you use to get answers — and why do you choose it?
How are the results different from a search engine (like Google) and a large language model (like ChatGPT)?
How could the tool that you choose provide you different results?
How could search engines potentially provide you inaccurate information? What about large language models?
How can you use either / both cautiously to minimize inaccurate information?
2. AI determines what to tell us based on statistical majorities.
What to know: We ask a large language model (LLM) like ChatGPT to do lots of things for us — answer questions, write emails, make lists, consider possibilities, give advice.
It’s all done in the background based on what the LLM has learned through its dataset and training.
When it creates a response, it is often giving us its best statistical guess for what the right answer is.
(In fact, on a smaller scale, that’s how it interacts with us in language, too. Natural language processing is why AI sounds so human: the model predicts language bit by bit, making its best statistical guess at what word comes next.)
Often, its best statistical guess is right. But sometimes it isn’t. And sometimes it’s a little skewed or imbalanced.
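Here’s a toy illustration of “best statistical guess of what word comes next”: a tiny predictor that just picks the most frequent follower of each word in a made-up corpus. Real models weigh far more context than one preceding word, so treat this as a sketch of the idea, not how ChatGPT actually works.

```python
from collections import Counter, defaultdict

# Tiny made-up training "dataset" -- real models train on billions of words.
corpus = ("the cat sat on the mat " * 3 + "the cat sat on the sofa ").split()

# Learn which words follow each word, and how often.
next_words = defaultdict(Counter)
for word, following in zip(corpus, corpus[1:]):
    next_words[word][following] += 1

def predict(word):
    """Best statistical guess: the most frequent next word."""
    return next_words[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- it follows 'the' most often in this corpus
```

Notice the skew: the predictor will never say “sofa” after “the,” even though it appeared in the training data — the majority answer wins. That’s the same dynamic that can make real models’ answers imbalanced.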
How it shows up in schools: Because AI can be inaccurate, biased, skewed or even just different from what we want to teach, it’s important to have human oversight. Some call it “the human in the loop.”
AI can “hallucinate,” making false statements that it claims are true. AI gets better about this every day, but it still happens — especially when we ask it detailed questions about something on which it doesn’t have very much data. It’s forced to guess, and sometimes it’s wrong. Sometimes, it digs its heels in and claims that it’s right — even when it’s wrong.
It’s important for teachers to have that oversight with students as well. When students have unfettered access to ChatGPT or other major commercial LLMs, the only “human in the loop” is a student. (And we know how vigilant students are about vetting their sources …)
That’s why I’m a fan of K-12 AI student chatbot tools like SchoolAI, MagicSchool, Brisk, Flint, and others. They provide human oversight with student/AI transcripts and AI analysis of student interactions.
And, on a larger scale, this is why we shouldn’t rely on AI models as our first, foundational source of information for students (at least, without teacher approval). We can’t turn over foundational learning to a guessing machine (at least not today’s guessing machines).
Questions to ask:
How do we know what we’re learning is right and accurate?
How are we staying vigilant about ensuring that responses from AI are accurate, fair, and balanced?
What sources should we use to double-check facts? And how do we know that those sources are reputable?
What triggers the “red flag” in your mind that something AI produced might be wrong? How do we further and better develop that skill?
3. AI developers make tons of adjustments on how AI interacts with us.
What to know: The ChatGPT you interact with today is different from the one that was first released in November 2022.
In fact, the current model behind ChatGPT (GPT-4o) has changed since it was originally released.
AI developers can adjust their models in lots of ways. One lever they can pull is settings like temperature — little virtual knobs that adjust the creativity, the tone, and the output of an AI model. They can also retrain and fine-tune the model itself, reshaping the billions of internal parameters it learned during training.
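One real example of such a knob is “temperature,” a sampling setting most AI models expose. This short sketch shows how turning it reshapes the probabilities a model assigns to its candidate next words — the scores here are made up for illustration.

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw model scores into probabilities.
    Low temperature -> the top choice dominates (more predictable);
    high temperature -> probabilities even out (more 'creative')."""
    scaled = [s / temperature for s in scores]
    total = sum(math.exp(s) for s in scaled)
    return [math.exp(s) / total for s in scaled]

# Hypothetical scores for three candidate next words.
scores = [3.0, 2.0, 1.0]
for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(scores, t)
    print(t, [round(p, 2) for p in probs])
```

At low temperature the model almost always picks its top guess; at high temperature it spreads its bets — one reason the same prompt can produce very different answers.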
The bottom line: Not all AI models are alike. They’ll provide you responses in different ways based on the data they’re trained on and how the developers have adjusted their outputs.
For example, of the four major AI models (ChatGPT, Google Gemini, Microsoft Copilot, Anthropic Claude), I’ve found Claude to be the most chatty … and Copilot to be the most concise.
How it shows up in schools: Not all AI-based tools are alike. Many of the K-12 AI products like MagicSchool, SchoolAI, Brisk, Flint, and others use a combination of major AI models to power their tools. If one of those tools uses one of OpenAI’s GPT models, its responses will look and feel and sound like ChatGPT.
These tools can provide additional layers of training and data to inform the model. For instance, MagicSchool’s teacher chatbot, Raina, has additional layers of training about pedagogy, classroom management, best practice, etc. But it’s still running one of the major AI models as its core engine.
Also, as educators, we can see the differences among major AI large language models. Give the same prompt to multiple AI tools and see how they respond differently. We might find that we like a certain AI model better than others for certain tasks.
Questions to ask:
How do different AI models respond differently?
How can we tell the differences among them?
How might certain ones be better for certain tasks?
In what ways are certain AI tools lacking (in ways that others might not be lacking)?
This is just the start!
Understanding how AI works can open our eyes and help us make smarter decisions about our AI use — and whether to use it or not.
I have sooooooo much more to say about this. (In fact, I did a quick brainstorm of major topics for this section and came up with five of them very quickly … but then realized after three that I had already written too much!)
(And if you’re curious, the fourth and fifth topics for this section were going to be …)
AI models are made to do certain things — but they might do other things for us, too.
Today’s AI is the most rudimentary AI our students will use.
Next week, we’ll dive into Part 2 of the ACE Framework: Critique.
🛠 Provide our AI Teacher Toolkit to teachers for FREE
We’ve created an AI Teacher Toolkit to support teachers in their understanding of AI and their use of it. It includes …
Links to 40 AI tools
Simple and efficient “By the Way” lessons
Ready-to-use lessons and units
Ready-to-edit prompt templates
AI tips for parents
Want to make this toolkit available to teachers in your school or district?
It’s super easy …
Just copy/paste this message into an email (in italics below) and send it out to teachers …
—
I’ve found this free AI Teacher Toolkit that has tons of tools, free lesson plans and ideas you can use to save time and help you teach.
It has 40 AI teacher tools to save you time and help you prepare.
It has pre-written prompts you can copy and paste into an AI assistant like ChatGPT.
It has lesson ideas — and even shows you how to support students in their understanding of AI without needing special training.
—
Can I just download the toolkit PDF and send it to teachers? I’d prefer that you not do that. When teachers download the toolkit themselves, they’ll be subscribed to my Ditch That Textbook email newsletter, which delivers practical teaching and tech ideas that support their planning and teaching. That way, I’ll be able to reach them twice a week with new ideas.
📚 New AI resources this week
1️⃣ Choosing AI Tools for Schools (via Digital Learning Podcast): Schools are having to make tough decisions about long-term contracts for AI tools. How can you tread carefully and be smart? We weigh in.
2️⃣ 4 Ways to Use LearnLM as a Professor: This professor shares how he’s using LearnLM, Google’s new learning-centric AI model, to support his work with students.
3️⃣ What to Know about DeepSeek and How It Is Upending AI (via The New York Times): DeepSeek, a Chinese AI start-up, is turning heads with its powerful AI model.
I hope you enjoy these resources — and I hope they support you in your work!
Please always feel free to share what’s working for you — or how we can improve this community.
Matt Miller
Host, AI for Admins
Educator, Author, Speaker, Podcaster
[email protected]