🤖 AI literacies, part 3: Exploration
What can you (and your students) do with AI?

I need your help.
In fact, I’m moving today’s poll up here because I so desperately need the help.
Today, I’m sharing the final installment of my three-part series on the ACE Framework I co-created with Holly Clark. This one is about “Exploration” with AI.
Here’s where I need your help …
I need to know what to do with all of this.
If this ACE Framework is helpful — and it’s worth creating resources that’ll help teachers and students dive deeper — what’s the best use of my time?
Do I create short videos about concepts from the ACE Framework — videos for students and/or teachers?
Do I create an online course that teachers can take to understand how to apply ACE to their work in the classroom?
Do I develop a full-day in-person workshop for schools and districts?
Do I run a series of online trainings that any teacher, school or district could participate in?
Do I write a book about AI literacies? (Holly and I have talked about it!)
Something I didn’t mention?
… or, should I focus my efforts elsewhere (NOT on AI literacies)?
The poll is below. PLEASE vote for your favorite — and in a comment, tell me anything else (other items in the list you like, something I didn’t mention, feedback on any of it).
In the AI for Admins community, YOU are the one on-site who’s helping guide decisions on AI … otherwise you wouldn’t be here.
I’d LOVE your quick feedback. (And if you have more to say than a quick comment, please hit reply to this email and let’s discuss.) Please and thank you!
What should Matt do with the ACE Framework? Choose your favorite, then add more + feedback in a comment.
In this week’s newsletter:
📺 WEBINAR REPLAY: AI Tutors in K-12 Education
🗳 Poll: The biggest AI threats
♠️ ACE Framework part 3: Exploring with AI
🎙️ My AI sessions from the FETC / TCEA expo halls
📺 WEBINAR REPLAY: AI Tutors in K-12 Education
What is the role — and what will be the role — of AI tutors in K-12 education?
In this webinar hosted by Toddle, I gave an overview of where we are today with AI tutors, shared what I like (and don’t like) about them, and talked about where it’s all headed.
The live webinar is complete, but you can go watch the whole replay right now — for FREE.
It’s hosted by Toddle, a comprehensive teaching and learning hub for schools. (You can learn more about Toddle in the webinar, but during my part, I’m mostly just talking about the role of AI tutors in K-12 education.)
🗳 Poll: The biggest AI threats
This week’s question: What's the biggest AI threat our students should be aware of?
🟨🟨⬜️⬜️⬜️⬜️ Data privacy (16)
🟩🟩🟩🟩🟩🟩 Inaccuracies / hallucinations (41)
🟨⬜️⬜️⬜️⬜️⬜️ Biases (14)
🟨🟨🟨⬜️⬜️⬜️ Barrier to critical thinking (22)
⬜️⬜️⬜️⬜️⬜️⬜️ Violation of intellectual property (2)
⬜️⬜️⬜️⬜️⬜️⬜️ Other (2)
Some of your responses:
Voted “Inaccuracies/hallucinations”: Unfortunately, there are many people in the world today that just read headlines or the first few lines of a story/article and don't delve any deeper. With AI being found at the top of most search engines and with people going to AI for answers more and more, they may be accepting fallacies as truths.
Voted “Biases”: If you're looking for a particular resource and don't recognize the bias that can eliminate an entire group of resources, it is a problem. A LOT of students and teachers don't think about that.
Voted “Data privacy”: I feel that if we teach students about the inaccuracies/hallucinations and how to find them, it will, in turn, help them develop their critical thinking skills. Still, if we can't keep our students' personal information safe then we shouldn't be using it in the first place.
Voted “Barrier to critical thinking”: If we don't want machine learning to replace human intelligence, then we can't have it do some of our thinking and just present or share that out without our own voice. If we aren't careful, it can stunt our own critical thinking.
♠️ ACE Framework part 3: Exploring with AI

The “Exploration” section of the ACE Framework
AI literacy. It’s a new set of skills students will need to thrive in an AI future.
If you’re a teacher and you’re not up to speed on AI, this might feel overwhelming to you.
“I already have so much to teach.”
“I’m not techy. I don’t know how to code.”
“How will I teach students about technology I don’t even understand?”
That’s why I teamed with Holly Clark to create an AI literacy framework — for teachers AND for students — called the ACE Framework.
AWARENESS: What you need to know about AI technology
CRITIQUE: How to use it ethically, responsibly, safely
EXPLORATION: How to create and do more with AI
I’ve been unpacking the ACE Framework week by week here in the AI for Admins newsletter.

Today, we dive into part 3: Exploration.
Exploration is all about exploring with AI — using it to co-create, to augment our human abilities, to push us to a little higher level of learning than we would reach without it … using it as a collaborator, a thought partner.
In essence — finding the sweet spot in the spectrum of the human doing the work and AI doing the work.

The “sweet spot” in the human/AI balance
If you’re reading this on your phone and the text is too small, the spectrum looks like this (yellow highlight is in the “sweet spot”):
Not taking full advantage of available tech
Adding to (but still maximizing) human ability
Co-creating (but still building human skill)
Reaching beyond humans’ current ability/grasp
AI overreach
If we want AI to support teaching and learning, I think we need to find meaningful ways that it falls in the yellow sweet spot.
Unfortunately, the discussion of AI in the classroom often falls into three realms:
“Here are all the ways you COULD use AI in the classroom.”
“All AI is going to do is make us stupid if it does all our thinking.”
“Students just use AI to cheat.”
To me, all of those discussions are living on the fringes and aren’t helpful.
In the ACE Framework, AI “Exploration” is about collaborating with AI, co-creating, managing it in real-life situations, enhancing learning, and solving problems with AI assistance.
In essence: responsible, real-world ways to use AI to develop as humans so we’re our best selves and don’t become obsolete.
So … what does that look like? Let’s dive into the sweet spot, section by section …
Adding to (but still maximizing) human ability
What it is: This type of classwork still mostly relies on human ability but adds a layer of AI support. (In fact, this is probably the place that teachers new to AI would feel safest taking their first steps.)
Why this works: It’s still relying mostly on human thinking and skill. It teaches students how AI can support their work in small ways. Students get feedback on their own work in small increments — which they can actually learn from if they’re paying attention. It also teaches them about the nature of AI and where they might and might not want to use it.
How it might look:
AI-generated feedback (e.g. writing comments from Brisk, AI whiteboard feedback from Snorkl)
AI chatbots that provide help in the moment it’s needed
Generating big-picture ideas that students will act on (e.g. writing topics)
Things to avoid:
Only providing AI feedback (teachers and students need to feel that they’re both carrying a similar load in the learning journey)
When AI chatbots do TOO much of the work
When AI suggests too many subtopics or details in the big-picture ideas
Co-creating (but still building human skill)
What it is: In this type of work, the student and an AI model are sharing an equal amount of the load in the completion of the task. When it’s designed well, the student is still doing significant thinking and developing skills — but is doing more (and progressing faster) than they would without the AI assistance.
Why this works: It’s like the calculator analogy we’ve heard so much in our discussions about AI in education. The calculator does manual calculations that don’t lead to deeper student understanding (but speed up the process). However, you can’t outsource thinking to a calculator the same way you can with an AI large language model, so using AI in this way isn’t as much of a no-brainer as it is with a calculator. (Example: some math teachers will tell you that the manual calculations help students better understand the big-picture math concepts they’re learning. It should be the same with AI.)
How it might look:
Creating a video for an assignment where student and AI equally share content creation and video editing (like Canva Magic Media)
Making a slide deck where AI suggests an outline but student customizes the content and adjusts for their writer’s voice
Student writes an essay with a writing coach chatbot (like SchoolAI Spaces, Brisk Boost, MagicSchool MagicStudent) giving advice along the way
Things to avoid:
Using AI to provide foundational understanding of new material (because of possible hallucinations; use vetted learning resources)
When AI eliminates struggles that lead to critical thinking, problem solving, and skill development (adjust lesson for less AI support)
When the AI chatbot gives too complex or too basic of support (adjust chatbot settings / instructions to suit student needs)
Reaching beyond humans’ current ability/grasp
What it is: This is where we’re using AI to help students do more than they currently can … pushing them outside of their comfort zone but not so far that they’re completely overwhelmed. In this case, AI is doing a good amount of the heavy lifting, but when the student is along for the journey, they’re learning by being stretched beyond their current limits. (Think of stretching a rubber band to make it more pliable.)
Why this works: This reaches back to Vygotsky’s zone of proximal development and linguist Stephen Krashen’s “i+1” theory.
If students learn at their current level of understanding, they don’t grow as quickly. (Call that “i+0” in Krashen’s theory: the student stays where they are, pushed 0 levels higher.)
If we push them too far too fast, they’ll become overwhelmed and shut down. (Call that “i+2” or “i+7” or “i+2,164” … so much that it’s not helpful.)
If they work to understand a little above where they already are — the zone of proximal development or “i+1” — they’re being challenged in just the right spot. I’ve heard this called the “Goldilocks zone” — not too much, not too little, jusssssssst riiiiiiiiight.
How it might look:
Having an AI model do a sophisticated, complex assignment as an example — then have students analyze it
Having an AI chatbot discuss a student’s work to suggest new ways to take it to another level, to provide dissenting and contrarian views, to poke holes and suggest improvements
Using AI to create exemplars that students can edit and modify (like my Frankenbot scaffolded writing activity template)
Things to avoid:
Overwhelm. When a student feels that they’re powerless, it’s game over — for the activity and, worse, for their identity as a student (if they feel this way over and over).
AI overreach. When you get the sense that the student isn’t getting anything from the exercise, it’s time to revisit.
Loss of humanity. Broadly speaking, if it “feels like” the human student isn’t validated and prioritized in the activity, it’s time to revisit, revise, or even scrap the whole idea.
Implementing the ACE Framework
Holly and I are still developing this whole idea — and ways to support students and teachers as they grow in these areas. But generally, here are some things I like about implementing it …
It fits into existing curriculum and lesson plans. We aren’t teaching about AI or computer science. We’re reflecting on our use of AI as a part of regular classroom learning. Done this way, any teacher of any grade level or content area can inject some AI literacy into many subjects and lessons.
It prepares students for the future. When they understand the nature of AI, the parts of AI where they should be cautious, and how it fits in their work, they’re better positioned to thrive no matter where the technology goes.
It doesn’t rely on teaching students technology that may become obsolete. We’re not teaching them how to use ChatGPT or how to create today’s AI systems. We’re developing them as people — and preparing them to think critically about using AI no matter where the technology goes.
What resonates most with you about the ACE Framework — or any of the points I’ve written about it? What would you add to it? What feedback do you have for it? Please hit reply to this email and let me know!
🎙️ My AI sessions from the FETC / TCEA expo halls

Presenting in the expo hall at TCEA in Austin, Texas
When I go to edtech conferences, I often do presentations in booths of edtech companies I really, really like. And these days, lots of them have to do with AI features — or products built on AI.
You can get all of my TCEA conference resources and FETC conference resources.
Here are some of my recent favorites — with full session video …
I hope you enjoy these resources — and I hope they support you in your work!
Please always feel free to share what’s working for you — or how we can improve this community.
Matt Miller
Host, AI for Admins
Educator, Author, Speaker, Podcaster
[email protected]