🤖 AI literacies, part 2: Critique

Key question: You can ... but should you?

Did you miss me?

Last week, I was at the TCEA Conference in Austin, Texas, sharing about practical ways to use edtech and AI to engage students and level up learning.

FYI: You can get all of my TCEA resources — slide decks, full session video (!) and more — on my TCEA resources page.

In our last newsletter, we started unpacking the ACE Framework — an AI literacies framework I’ve developed with fellow author/speaker Holly Clark.

  • Awareness

  • Critique

  • Exploration

In today’s newsletter, we’ll dive into Critique — how we can help students judge and evaluate AI and its outputs to be savvy about their AI use (or decisions not to use AI).

In this week’s newsletter:

  • 🚨 Sale on my book, AI for Educators

  • 🗳 Poll: The biggest AI threat

  • ♠️ ACE Framework part 2: Critique of AI

  • ✊🏻 My new AI protest: capitalizatioN typoS

🚨 Sale on my book, AI for Educators

The future is changing rapidly. AI is going to be a big part of it.

How do we prepare students for that future?

My book, AI for Educators, has a whole chapter about preparing students for an AI-integrated world — steps we can take now to start preparing students for their future.

⭐️⭐️⭐️⭐️⭐️ Chock full of ideas, it is a great resource for educators at all levels who are curious about AI and its potential impact on teaching and learning. Highly recommend! — Dr. Todd Schmidt via Amazon

Right now, Amazon is displaying (for me in the U.S.) an 18% off discount on the paperback …

Regular price $24.95 USD … sale price $20.41 USD!

🗳 Poll: The biggest AI threat

This week’s question: What do students/teachers need to understand about AI the most?

🟨🟨🟨⬜️⬜️⬜️ Where it gets its data (14)
🟩🟩🟩🟩🟩🟩 AI isn't the same as a search engine (25)
🟨⬜️⬜️⬜️⬜️⬜️ Not all AI tools are alike (8)
🟨🟨🟨⬜️⬜️⬜️ AI is everywhere (14)
🟨⬜️⬜️⬜️⬜️⬜️ Other ... (7)

Some of your responses:

  • Voted “AI isn’t the same as a search engine”: “Especially since Google now puts AI results at the top of a search, students and teachers need to realize that the AI results may or may not be accurate and unbiased.”

  • Voted “Where it gets its data”: “There's a reason why many AI user agreements require users to be 18. Teachers shouldn't be using those programs with students and should stick with AI tools that have a dedicated subsection for student users that doesn't mine those particular users for data.”

  • Voted “AI is everywhere”: “AI is everywhere, and pretending it doesn't exist or that students don't know about it/how to use it is unproductive. It is out in the world for good or for ill.”

  • Voted “Other”: “I think it is important that they know that generative AI is not generating brand new creative content, but it is just repurposing what already exists based on patterns.”

🗳 This week’s poll

Based on this week’s section of the ACE Framework — Critique — let’s do a little critiquing of our own!

Instructions:

  1. Please vote on this week’s poll. It just takes a click!

  2. Optional: Explain your vote / provide context / add details in a comment afterward.

  3. Optional: Include your name in your comment so I can credit you if I use your response.

What's the biggest AI threat our students should be aware of?


♠️ ACE Framework part 2: Critique of AI

The “Critique” section of the ACE Framework

Watch any sort of dystopian TV show or movie about artificial intelligence and you’ll inevitably hear a line like this …

“They were so busy seeing if they COULD … they never asked whether they SHOULD …”

And that’s at the heart of our relationship with AI. There’s so, so much you CAN do with AI. But should you?

Example: I wrote about Google’s failed messaging around its AI assistant, Gemini, during the Olympics. In an ad, it depicted a father using AI to draft a fan letter from his daughter to the Olympian she idolized.

AI CAN write that draft for your daughter. But SHOULD it? Or should you encourage her to write it on her own — from her human heart, showing why she personally adores her favorite athlete — instead of letting an AI model guess?

For us, and for the next generation of students, there will be lots of AI-related questions where the answer is, “No, we SHOULD NOT do this.”

That’s the heart of the “Critique” section of the ACE Framework, a three-part AI literacy framework I created alongside fellow author/speaker Holly Clark. I’m unpacking the framework week-by-week. Here’s part 1: Awareness.

As adults, we need to critique AI — ask critical questions about it — all the time to determine whether we SHOULD even though we CAN.

And our students will need to develop this skill as they navigate life and head toward adulthood and the workforce.

Here are some considerations regarding the “Critique” section of the ACE Framework:

1. AI is not neutral.

What to know: It’s understandable to think that AI is neutral. It’s trained on tons of data. So, we may think, it’s not on the fringes of any political beliefs. It’s not trying to spin the facts or sway you or rely on any wild, controversial information.

Some of that may be the case, but don’t let that distract you from reality …

AI is not neutral.

Because, in human life, there really is no such thing as “neutral.” There’s always a leaning. Always a bias. Always a shade of reality one way or another — a shade that’s closer to (or farther from) your own beliefs.

Sometimes, AI models are outright trained not to take sides on controversial issues. Example below: Google’s Gemini wouldn’t weigh in on which U.S. political party was best.

Google Gemini declined to discuss political parties.

Gemini’s AI model has an answer to that question — a statistical best guess based on its training and dataset. The developers at Google have just programmed it not to disclose that answer in the best interests of the company.

AI models are based heavily on statistical majorities, predicting what’s likely to be the best answer and providing it to you. But all of that statistical calculating is based on human data that includes human bias. And more information about one side. And a lack of data on certain things.

All of this causes AI not to be neutral. It extends beyond politics and gender and race and other hot-button topics. Some examples:

  • Coffee roasts (dark roasts might be more common in AI training data)

  • Book genres (thrillers and romances sell more than poetry and historical fiction)

  • Football offenses (the pass-heavy NFL might influence an AI model’s suggestions)

  • Vacation destinations (the popularity of beaches could overshadow other spots)

AI is not neutral.
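The statistical-majority idea above can be sketched with a toy next-word predictor. This is a deliberate oversimplification, not how real language models work, and the tiny “corpus” is invented for illustration: when the training data is skewed (more beach vacations than mountain ones), the “most likely” answer inherits that skew.

```python
from collections import Counter

# Hypothetical, deliberately skewed "training data": beaches appear
# twice as often as mountains. Real models train on vastly more text,
# but the same principle applies.
corpus = (
    "we vacation at the beach . we vacation at the beach . "
    "we vacation at the mountains ."
).split()

def predict_next(prompt_word, corpus):
    """Return the word that most often follows prompt_word in the corpus."""
    followers = Counter(
        nxt for word, nxt in zip(corpus, corpus[1:]) if word == prompt_word
    )
    return followers.most_common(1)[0][0]

print(predict_next("the", corpus))  # "beach" wins 2-to-1 over "mountains"
```

The predictor isn't “anti-mountain.” It simply reflects the proportions in its data, which is exactly why AI outputs can't be assumed neutral.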

How it shows up in schools: Ask a social studies teacher or a literature teacher and they’ll tell you that curriculum isn’t neutral either. (Especially in political wars in the United States right now.)

No matter what we teach … no matter what students learn … no matter what topics are covered … AI is going to make judgment calls. Content decisions. Omissions and inclusions of details and important points. And it’s all going to be based on the data in the dataset and how the model has been trained.

Critiquing the responses that AI gives us is a crucial skill. It’s one that our students will need to be their best human selves in the future. And it’s one we need as educators, too.

We can ask all sorts of questions to uncover the underlying assumptions, judgments and decisions made by AI models.

We can also serve as the “human in the loop,” constantly observing and adjusting and making our own decisions about how AI operates.

Questions to ask:

  • What are all of the possible viewpoints on this?

  • What judgments did AI make in its response?

  • What isn’t being considered? Who isn’t being considered?

  • If we had prompted AI differently, how would the result be different?

  • What is missing from this response — and what impact does it have?

2. Constantly evaluate AI systems and outputs.

What to know: Our critique of AI shouldn’t just stop with the response that AI gives us when we prompt it.

That’s just one of the smallest datapoints we can analyze with AI — the response.

(Actually, we can always go smaller … down to tokens — the building blocks of language that large language models understand and use. Tokens are words, parts of words, even punctuation marks.)
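A naive splitter can make the token idea concrete. This is only a rough sketch: real LLM tokenizers use learned subword schemes (like byte-pair encoding), so their actual token boundaries look different.

```python
import re

# Naive illustration of tokenization: split text into word chunks and
# punctuation marks. Real tokenizers learn subword units from data,
# so this only approximates the concept.
def toy_tokenize(text):
    return re.findall(r"\w+|[^\w\s]", text)

print(toy_tokenize("AI isn't neutral."))
# ['AI', 'isn', "'", 't', 'neutral', '.']
```

Notice how even a contraction breaks into several pieces — a hint of why models “see” text differently than we do.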

We can critique from a narrow perspective (how AI responds) all the way up to the broadest perspectives (the implications of AI on humanity and the future).

And we should.

Critiquing AI, from the narrow to the broad

Because if all we do is critique from a narrow perspective, we’re “lost in the weeds.” We see all the little details but we’re not thinking about the big picture. This can happen when our judgments don’t extend beyond AI in our personal lives and our day-to-day work.

And if all we do is think about the broad perspectives, we see a “30,000-foot view” of AI. We debate about AI in generalities but don’t get down to its applications in schools and in our world.

We need to be willing to zoom in AND zoom out.

How it shows up in schools: We can discuss AI-related (and non-AI-related) topics in classrooms and schools from a narrow perspective and a broad perspective.

Get in the weeds about AI — how it responded, why it was lacking, what it did really well.

But also take the 30,000-foot flyover — what this means for us as humans, what the world could look like if it goes well or poorly.

When AI enters the conversation in class discussion, take a quick detour to discuss how it applies to classwork and life.

But even when we’re not talking about AI, having these far-ranging conversations (critiquing little details and big-picture ideas) prepares students for the future.

When they start to see that there are lots of sides to issues — how the little details affect the big picture, and vice versa — they’re equipped with the thinking skills they’ll need to thrive in an AI future.

Questions to ask:

  • What are the implications of that choice of words — or decision made — by the AI model?

  • What patterns, consistencies, omissions are we commonly seeing in a certain AI model — and how does it compare with others?

  • How does AI impact my life, my work, my schoolwork in positive or negative ways?

  • How is my use of AI perceived by others — my classmates, my teacher, my boss — and is that fair or unfair?

  • If unchecked, how will the effects of AI impact our lives, our work, our humanity?

3. Know the issues.

What to know: OK, total honesty here. I got going on the first two sections and ran out of time, space, word count, etc. on the rest of this. It just goes to show how broad and deep the idea of critiquing AI is!

Here are some other topics I wanted to cover here but will just summarize below:

  • Inaccuracy: AI models produce inaccurate information — sometimes because they don’t have enough data in their dataset to generate a statistically reliable, accurate answer.

  • Data privacy: AI models thrive on data. The more data they have, the higher statistical probability they’re able to provide the response you’re looking for. Your data is valuable — your personal data, but also the data you share with AI models.

  • Humanity: We bring a LOT to the table that AI never really will. We have to critique what happens when our human touch is removed from the equation. We also should consider how we’re developing valuable human skills when we do classwork — and what happens when we outsource that to AI.

  • Misinformation and deception: When AI large language models create text for us, it’s just a big creative writing exercise. It can write from any perspective and use persuasive tactics to manipulate humans. Bad actors can use AI, so we should look at its use with an ethical eye.

  • Environmental impact: The computers, technology and infrastructure that run powerful AI models gobble up tons of energy and resources. We should be aware of their energy consumption (power, water, etc.) and their effect on our environment.

  • Intellectual property: AI models are trained on tons of data — data that belongs to someone that was likely used without their permission. We have to consider the impact that has on people who have worked to create that intellectual property that now powers these powerful AI models.

How it shows up in schools: There are lots of ways we can incorporate this into teaching and learning …

  • Inaccuracy: Encourage students to fact-check AI responses. Teach them the importance of developing their own foundational understanding so they’ll know when something doesn’t look right and can investigate.

  • Data privacy: Always ask, “Where is my data going? How is it being used? Who benefits?”

  • Humanity: Encourage students to reflect on where they’re going, what they’re trying to accomplish from school, and what skills they want to develop — and ask whether AI is helping them achieve that.

  • Misinformation and deception: Discuss when and how we should disclose our AI use — and how we would feel if someone manipulated us with AI.

  • Environmental impact: Bring up how much energy is used to do basic tasks with AI (e.g. prompting an LLM like ChatGPT, creating an AI image, etc.). Connect it to real-world implications (e.g. creating AI images is like turning on the water faucet and letting it run).

  • Intellectual property: Have discussions about the pros and cons of using people’s intellectual property to train AI models and ask the question: Is it fair? Is it OK? Why or why not?

Next up: Exploration

Next week, we move on to the third part of the ACE Framework — Exploration. This is where the rubber meets the road and we look at how AI can support our learning, our work, our thinking … our life.

✊🏻 My new AI protest: capitalizatioN typoS

Yesterday, I wrote on LinkedIn about how frustrated I was that AI is standing in the way of our human connection … that people are confusing “content” and “engagement” with real human connection.

In essence, they’re using AI to replace the authenticity of sharing the human experience with other humans.

I see it everywhere. Email communication. Email newsletters. Blog posts. Even in automated social media replies.

My newest way of fighting back: capitalization typos.

A quick excerpt:

“So I started a silent protest, mostly in emails -- leaving little typos and extra capitalS in words. (THe shift key on my laptop is a little overambitious and it doesn't let go easily ... which, honestly, lots of us humans don't do easily either.)

When I leave them there, you have NO doubt that this message was drawn directly from my own bloodstream. It's a little gift I want to give you -- a true, authentic part of myself.”

I don’t want to be a CONTENT creator. I want to be a CONNECTION creator.

(And I’d love to connect with you on LinkedIn … send me a request!)

I hope you enjoy these resources — and I hope they support you in your work!

Please always feel free to share what’s working for you — or how we can improve this community.

Matt Miller
Host, AI for Admins
Educator, Author, Speaker, Podcaster
[email protected]