🤖 AI frameworks to guide our practice

Getting the communication right regarding AI

As a former journalist, I find clear communication so very valuable … and so very rare.

When someone explains something very clearly AND succinctly, it’s impressive.

(Example: Two weeks ago at my church, my pastor described a holiday gift campaign as “ghost shopping” — you’re buying a gift for someone else to give away as their own, not a gift from you. It was PERFECT.)

When we explain complex and provocative topics clearly — like AI — it can help others understand. It can get them on board. And it can empower them to action.

I’m still trying to get my AI verbiage right.

This week, I found some new frameworks (messages? conceptual models?) that explained our relationship with AI better than anything I’d had before.

Maybe these will be helpful to you, too … and if you have something that works well for you, please hit “reply” and tell me what it is!

In this week’s newsletter:

  • 🗳 Poll: “Original work”

  • 💡 AI frameworks to guide our practice

  • 📖 AI guidance from a district in Oregon

  • 📚 New AI resources this week

🗳 Poll: “Original work”

Last week’s question: How concerned are you about students developing unhealthy relationships with AI?

I anticipated the general results … but I was in it more for the comments you would leave about this issue. Here’s the problem: I failed to turn on the “accept comments” option for this poll!

🤦‍♂️🤦‍♂️🤦‍♂️

Here are the results anyway …

🟨🟨🟨🟨⬜️⬜️ 5 (very concerned) (30)
🟩🟩🟩🟩🟩🟩 4 (concerned) (39)
🟨🟨🟨🟨⬜️⬜️ 3 (somewhat concerned) (27)
🟨⬜️⬜️⬜️⬜️⬜️ 2 (not very concerned) (8)
⬜️⬜️⬜️⬜️⬜️⬜️ 1 (not at all concerned) (0)

🗳 This week’s poll

This one isn’t related to today’s main article below, but I’m very curious about your responses. (And yes, I’m turning the ability to comment ON!)

Instructions:

  1. Please vote on this week’s poll. It just takes a click!

  2. Optional: Explain your vote / provide context / add details in a comment afterward.

  3. Optional: Include your name in your comment so I can credit you if I use your response.

How do you define "original work" in the age of AI?

Optional: Explain your response in a comment afterward.


💡 AI frameworks to guide our practice

Image created with Microsoft Designer

I was asked to speak at an event at Indiana Wesleyan University on Monday.

(Added bonus: My daughter, a high school junior, thinks she might like to attend there, so she and my wife came along and we turned it into a campus visit. Win-win!)

They’re wrapping up a really cool grant-funded program at IWU …

  • Faculty get to try new AI-powered apps and teaching practices over three months.

They also get to enroll in a training course or read a book to learn about AI.

In the end, they share what they’ve learned in a face-to-face “AI Showcase.”

Kudos to David Swisher and Rick Bartlett for dreaming it up and executing it.

The whole thing was supported by this very simple — yet very effective — three-part framework (below) that I think anyone could emulate.

I share this as a framework (maybe a conceptual model? I’m not sure exactly what to call these) in hopes that it — and the ones that follow — might be helpful to you.

If you’re seeking the words to use to help explain how AI should fit in our lives, you’re not alone — I am, too!

I think these simple frameworks / statements / models can clarify situations and empower our colleagues to take action (instead of fearing mistakes).

Try. Learn. Share.

TRY: This is a very hands-on step. It involves action. And, as I stated in my opening remarks for the showcase, it’s a step lots of educators don’t want to take (whether they’ll say it out loud or not). That’s because there’s uncertainty. And if they take steps to implement what they tried, there’s a chance for failure. In reality, though, ANYONE can try … and seemingly small steps can be the impetus for something big.

LEARN: This can be, admittedly, a more passive step. Read a book. Take a course. It feels safer because, in essence, we’re consuming instead of acting. But it also equips the educator in different ways. It provides information that can guide the trying … and can help them better understand the technology they’re using.

SHARE: We’ve had some consumption (learning). We’ve had some action (trying). Now it’s time to get others involved. In the showcase I visited, faculty did short demos — 10-15 minutes — to share what they had done and learned. It was fast-paced, and the short presentation was fairly low-stakes. But it positioned the presenter in a new light — not just a dabbler, but a conveyor of new ideas. A source of inspiration for others. Whenever this happens — even in very small ways — it can completely shift the mindset AND the culture of those involved.

The thing I love about all of this? It’s all predicated on messy first steps. There’s no expectation of perfection. We’re all just sharing what we did — not claiming we’ve solved a problem or created a brand-new innovation.

Try. Learn. Share. I really, really like it.

Enhance, don’t replace.

In the courageous spirit of those Indiana Wesleyan Wildcats I met this week, I’m going to “try learn share,” too.

I’ve been looking for a simple statement that can undergird much of my belief about the place that AI has in our lives, in our work, and in our learning.

I started with this (thanks to a conversation with Andrew Nikola and Dan Roberto from Wappingers Falls, N.Y.) …

Enhance, don’t replace.

I liked it as a good, simple starting point. We don’t want to fully replace our human thinking, our development of skills, our reasoning.

But if we can level up that thinking, it can help us be an improved version of ourselves. This idea of an enhanced version of ourselves isn’t anything new. We already do this by googling, by looking things up in books, by calling friends, by listening to music for inspiration. Now, we just have new ways to do it.

Enhance, don’t replace. It was good. But it still seemed incomplete.

Because, let’s be honest. Sometimes, we DO want to replace.

Mundane emails? Repetitive tasks? Something that is so out of sync with our “why” that it’s not a good use of our time?

Replace, replace, replace.

We just have to be intentional about it. And what we do is more nuanced, I believe, than this simple statement can capture.

So I started adding to it.

Replace, preserve, augment.

Replace the trivial: Our AI tools can be great assistants, taking on the tasks we don’t want to give our human attention. We only have so much of our humanity to go around. We can’t give it to everything all the time. If it’s trivial — and it must be done — AI can replace it.

Of course, there’s a very tricky part about all of this …

Judgment calls. What is “trivial”? And what if someone disagrees?

Example: We don’t see the assignments we give students as trivial. But they might. What if they apply “replace the trivial” to a task we’d classify as “preserve the sacred”?

Another example: Your boss doesn’t see a segment of the school improvement plan as trivial. But you might. What if you choose to “replace the trivial” by outsourcing the thinking on the improvement plan to AI?

See … these judgment calls aren’t just “lazy students trying to avoid work.” They’re human nature. And they’re all based on what we judge as worthy and unworthy.

Preserve the sacred: We can use AI assistants to speed up lots of tasks. To do work that we don’t want to do — or don’t have time to do. But what happens when it’s work that’s important? Work that impacts and improves the lives of those who matter to us? Even if AI can do it faster, sometimes we need to preserve those sacred things and do them ourselves. (Or sometimes, AI isn’t nearly as good as we are, and we just shouldn’t hand the work over to it.)

Example: Feedback on student writing. Lots of tools can create it for us. But is it any good?

Or, a more important question … what happens when we don’t read our students’ work and have no idea how capable they are — or how much they’ve learned?

In these cases, we need to “keep a human in the loop,” as they say in the computer science world. Our attention and human touch really matter in these cases.

Unfortunately, we can’t preserve EVERYTHING as sacred. (I mean, this has been the struggle of teachers for decades … maybe centuries.)

So that leads to the last one …

Augment the rest: I’m still trying to figure out if “the rest” is overkill or if it’s right. If it’s not trivial and it’s not sacred, maybe AI can help us augment our work.

Because it’s usually not an all-or-nothing scenario, is it? Rarely do we choose between “all AI” and “no AI.” There are all sorts of variations in between.

Example from earlier: The student who didn’t see the relevance of the assignment. Can that student use AI to get a start on it? Or to support the work along the way? Maybe. But it’s easy to cross the line from “this is helping me learn” to “I’m not learning and just getting this done faster.”

This is one of those foundational AI literacies that sounds like “workshop presenter speak” but is really important …

Is it helping me accomplish my goals? What do I want to get out of this?

Another example from earlier: The school improvement plan. Maybe we can “augment the rest” here, too. But we have to be careful: an AI assistant can give us ideas and a first draft, but it shouldn’t make all of the decisions for us.

When do we cross from one to the other? Sometimes, it’s hard to tell … kind of like when you’re walking in the mist and suddenly you realize you’re wet.

Keeping a firm grasp on what you want to get out of the task can help you see.

Replace the trivial. Preserve the sacred. Augment the rest.

I had a couple of other ones to share here, but …

… I’ll just stick them in my pocket for another newsletter. 😉😉

(Or if you’re dying to know, hit reply and ask and I’ll tell you!)

📖 AI guidance from a district in Oregon

The PDF guidance document includes:

  • responsible use of AI

  • navigating unauthorized use

  • differentiated instruction with AI

  • prompt engineering guidance

  • suggested AI tools

  • … and more

It also includes the popular stoplight graphic — and a decision tree for educators.

Thanks, Scarlett, for sharing this!

📚 New AI resources this week

1️⃣ On students sharing chat links with assignments (via Michelle Kassorla, Ph.D., on LinkedIn): I have such conflicted feelings about this post. Michelle shares how her students must use certain AI chatbots and provide a link to the transcript for the teacher. Transparency is good … but overreach is not. I like trying things to figure out what works, but I’m pretty skeptical about this one.

2️⃣ Empowering Education Leaders: A Toolkit for Safe, Ethical, and Equitable AI Integration (via U.S. Office of EdTech): This document, following up on a previous report of insights and recommendations, provides resources for safe, ethical, and equitable AI integration.

3️⃣ The Value of Doing Things (via Jason Gulya): Jason writes about how important it is that humans stay in the loop and avoid having AI do too much for them.

I hope you enjoy these resources — and I hope they support you in your work!

Please always feel free to share what’s working for you — or how we can improve this community.

Matt Miller
Host, AI for Admins
Educator, Author, Speaker, Podcaster
[email protected]