Four Priorities for Human-Centered AI in Schools

Two years after the release of ChatGPT, schools are considering how to adopt long-term, sustainable strategies in response to the rise of generative AI. During this webinar featuring Eric Hudson, we discussed how schools can build enduring approaches to AI adoption that are mission-aligned and human-centered, rather than reactive and proscriptive.

In our first AI for Education webinar of 2025, we explored four essential priorities:

  • Augmentation over Automation

  • AI Literacy over Policy

  • Design over Technology

  • Vision over Decisions

We also examined the research and explored use cases from innovative schools showing how a school's approach to AI can be an asset to its program, message, and community. For resources and links related to this webinar, visit aiforeducation.io/four-priorities-for-human-centered-ai-in-schools

AI Summary Notes:

This webinar focused on promoting human-centered AI adoption in educational settings. Eric Hudson outlined a framework of four priorities: Augmentation over Automation, AI Literacy over Policy, Design over Technology, and Vision over Decisions, emphasizing the need for AI to enhance human capabilities rather than replace them. Discussions highlighted the importance of developing durable skills like critical thinking and adaptability for a future driven by AI, as well as addressing challenges such as faculty skepticism in higher education. Strategies for educators included hands-on practice with AI tools and fostering peer-based professional development.

🎙️ Introduction and Overview (00:02 - 02:25)

  • Amanda, CEO of AI for Education, opens the meeting and introduces herself and the organization.

  • Expresses excitement for participation.

  • Introduces Eric Hudson as a key speaker on AI adoption in education.

💡 Eric Hudson's Presentation Overview (02:25 - 10:44)

  • Eric Hudson introduces himself and his work focus on learner-centered pedagogy and AI in education.

  • Presents a framework with four priorities: Augmentation over Automation, Literacy over Policy, Design over Technology, and Vision over Decisions.

  • Emphasizes human-centered AI adoption in schools.

🏫 Eric Hudson's Framework Details (10:44 - 21:15)

  • Augmentation over Automation: Advocates using AI as a tool to enhance human capabilities rather than replace them.

  • AI Literacy over Policy: Stresses the value of understanding AI and its ethical use over restrictive policies.

  • Design over Technology: Encourages a shift from AI-resistant assessments to AI-enabled assessments.

  • Vision over Decisions: Recommends focusing on long-term goals over short-term AI-related decisions.

🔄 Discussion on Human-Centered AI in Education (21:15 - 31:39)

  • Amanda and Mandy discuss integrating these principles with the community approach.

  • Encourage adoption of AI literacy frameworks to replace traditional policies.

  • Suggest utilizing AI to rethink learning assessments and school structures.

📘 Skills and Future Readiness (31:39 - 40:45)

  • Emphasize the importance of durable skills such as critical thinking and adaptability in an AI-driven future.

  • Discuss how AI literacy should be implemented differently for younger versus older students.

  • Express concern about the rapid pace of technological integration and potential impacts on education.

🌐 Challenges and Opportunities of AI Integration (40:45 - 50:04)

  • Address challenges in higher education regarding AI resistance and skepticism.

  • Stress the importance of engagement and literacy among faculty to overcome AI skepticism.

  • Discuss AI's potential to augment human skills rather than replace them.

🛠️ Strategies for Educators (50:04 - 59:26)

  • Provide hands-on practice and emphasize exploration and playfulness with AI tools.

  • Encourage peer-based professional development and communal learning to build capacity.

  • Discuss collaborative engagement and decision-making for AI adoption within educational contexts.

  • Eric Hudson

    Eric Hudson is a facilitator and strategic advisor who specializes in learner-centered assessment, human-centered leadership, and strategic program design. Eric has designed and facilitated professional learning and strategic retreats for hundreds of schools and learning organizations. He spent a decade at Global Online Academy (GOA), first as an instructional coach and ultimately as Chief Program Officer, working with schools around the world to rethink where, when, and how learning happens. Prior to GOA, he spent 12 years in the classroom, where he taught English, Spanish, and journalism to middle school, high school, and college students. Working with students is how he developed his passion for designing empowering learning experiences.

    Amanda Bickerstaff

    Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.

    Mandy DePriest

    Mandy is a Curriculum & Content Developer at AI for Education. She has over 15 years of experience in public education, having served as a classroom teacher, library media specialist, and instructional coach. She has also taught education technology courses in higher education settings as well as professional development workshops for teachers on the transformative power of technology. She is committed to ensuring that students are prepared for the dynamic demands of the future by leveraging the power of technology-driven instruction.

  • 00:01
    Amanda Bickerstaff
    Hello, everybody. Well, we have over 300 people here today, which is really great. I will say it's also frigid here in New York City. I do have a heated blanket on, which is why I think Zoom is so great, because you can't see that I have something in my lap. So we're excited to have you here. I'm Amanda. I'm the CEO and co-founder of AI for Education. Whether it's your first webinar or you've been to a lot of them, we're happy to have you here. I'm incredibly fortunate and excited to be able to have Eric Hudson, who's been really at the forefront of thinking about how we can have AI adoption be human focused and human centered. Mandy from our team is here as well, and if we want to go to the next slide, you're already doing it.


    00:40

    Amanda Bickerstaff
    So thank everyone for saying hello in the chat. We want you to get involved. Here's a way to do it. Since there's so many of us today, the chat is really for you all to communicate, to share best practices and resources. We'll also be dropping in resources all along the way, so please use the chat for that. But if you have a specific question for Eric, myself or Mandy, please put that in the Q and A because that's where we're going to be really looking for those kind of major questions.


    01:04

    Amanda Bickerstaff
    As always, I'll pepper in those questions as we go, and then also at the end if we have a little bit of time. We're not going to be prompting today, but if you do want to try out some stuff, or you really like something, please click on it and try it out, and then also share those resources that you really enjoy. We love our community of practice that we've built. You all are amazing. You're worldwide, you're thoughtful, and you have lots of great ideas and resources. So this is an opportunity to work together, and we appreciate having you here. And I'm going to call Eric on the stage now. So, Eric, I feel very lucky to have been doing this work for the last year and a half. And Mandy and our team were like, we have to do a webinar with Eric.


    01:45

    Amanda Bickerstaff
    And it was really interesting because I kind of didn't place the name at first, but Eric and I had already just spent a day together with the Middle States Association, who have a responsible AI accreditation that they're working towards. And so Eric and I and a group of people were able to come together and think about essentially what learning looks like with AI adoption. And I was so impressed by just how thoughtful he is and also just how our philosophies align. So, Eric, we're really excited to have you here today. We're going to hand it over to Eric. Just so you know how it's going to go: he's going to do about a 10 minute talk and then we're going to pull Mandy on stage and have a bit of a conversation.


    02:19

    Amanda Bickerstaff
    But Eric, I'd love for you to start with just saying hello and introducing yourself, and then we'll start your presentation.


    02:24

    Eric Hudson
    Hi, everybody. Thanks, Amanda. So happy to be here. Thanks for inviting me. Really glad to be part of a truly kind of worldwide conversation about AI in education. Everyone, my name is Eric Hudson. I'm an independent consultant. I work with schools and nonprofits on kind of learner centered pedagogy, leadership and strategic program design. And like Amanda and AI for Education, for the last couple of years, a lot of that work has been around artificial intelligence in education. And so what I'm going to work through today is just a framework that I've developed over the course of working with many schools on this topic. And I hope that it offers you some insight into ways to think about how to approach generative AI in kind of a long term, sustainable, human centered way. And so we'll go to the next slide.


    03:17

    Eric Hudson
    Mandy, thank you. So this all came from the idea that schools are human centered organizations, that they're made for and by people. And I don't believe that generative AI should change that. I think schools should remain very human centered organizations. And so when I work with schools, I try to get them to take that long view about how we need to approach generative AI in a way that keeps the humanity of our teachers and our students and our communities at the center. And so I articulated these four priorities to help schools kind of think about different pathways towards doing that. The four priorities are augmentation over automation, literacy over policy, design over technology, and vision over decisions.


    04:08

    Eric Hudson
    And I articulated these four priorities in this way because, in my experience, and I'd be super curious if the AI for Education team feels the same way, schools tend to be very focused on the bottom words. They talk a lot about what AI is going to automate. They're really worried about policy. They're super focused on AI as something that's a technological issue, and they feel kind of overwhelmed by the number of decisions that they feel they need to make. But I think there's a lot of research and a lot of useful ideas suggesting that a human centered approach actually prioritizes those top words. Of course the bottom words matter, but when it comes to big decisions about generative AI in our schools, we want to really focus on those top words.


    04:57

    Eric Hudson
    I'll walk you through what I mean by each priority really quickly and then we'll have a conversation about them. We can go to the next slide. Augmentation over Automation comes from an essay by Erik Brynjolfsson at Stanford University. It's called The Turing Trap. It actually came out before ChatGPT, so it's about three years old. And Brynjolfsson makes this argument that when we talk about artificial intelligence, we tend to be very focused on automation. That's that dark green circle on the left side of your screen. There's a set of tasks that humans can do, and there's a subset of tasks that AI is and will be able to fully automate. We can just delegate that stuff to AI. But Brynjolfsson says if we're really interested in a human centered future with AI, we have to focus on augmentation.


    05:50

    Eric Hudson
    That's a much broader, wider, more interesting realm where human beings use AI to do things that human beings weren't capable of before, that we exert our agency over AI, use our creativity, our empathy, our sense of what's right, and leverage AI as a tool to kind of extend or create new capabilities that weren't possible before. And, and I think we can talk about this afterwards, but there's kind of lots of ways to think about this in a school context. But the main way I think about it in a school context is what do our colleagues and our students need to know and be able to do in order to take an augmentation forward approach to generative AI rather than an automation forward approach.


    06:39

    Eric Hudson
    So that's priority one, and that kind of leads us into priority two on the next slide, which is AI literacy over AI policy. I know that AI for Education has their own framework. There's lots of really good ones out there. I like Digital Promise's framework. But basically the idea here is that generative AI is such a powerful and rapidly evolving technology that trying to articulate policy, which is really designed to control or constrain behavior, is probably not the right approach when you're thinking long term. Instead, a much more durable, transferable skill that we can give our colleagues and our students is literacy. And Digital Promise defines AI literacy in three ways. We have to understand generative AI, meaning understand both how it works and the ethical considerations when using it, as well as what its flaws are.


    07:34

    Eric Hudson
    We have to be able to evaluate AI output. We have to be able to evaluate AI tools for their efficacy and their usefulness. And we have to use generative AI tools. One of the best ways to build literacy is to use them to build our skills in working with these tools. And by combining these three elements, understand, use, evaluate, we're building sort of a long term, durable approach to being able to kind of adjust as this technology evolves. You know, AI is going to look really different a year from now, five years from now. What do we need? We need to be literate so that we can onboard those developments in a kind of thoughtful and ethical way.


    08:16

    Eric Hudson
    And I think schools articulating a policy is important, but how can we use our policy as a launching pad for really investing in the literacy of both adults and students in our community? Let's do the next one. The next one is design over technology. This is the idea that even though generative AI is obviously a technology, our response to generative AI is not a technology response, or it shouldn't be a technology response; it's a design response. And I really like the work of Maha Bali. I know that Amanda and team really like her as well. She's at the American University in Cairo, and she says that educators have four options in an AI age. You can make AI use impossible, and the only way to do that is to make your assessments kind of fully supervised.


    09:10

    Eric Hudson
    You can discourage AI use by redesigning assessments to forms that AI would not perform well on. You can allow AI use within boundaries, or you can allow indiscriminate use, but you have to choose. Generative AI is a disruptive technology and we do need to make a choice about how we want to approach it. And I hope you'll notice that options 1 and 2 are not about AI at all. They're about kind of shifting the environment or the mode or the pedagogy behind our assessments. And when I do trainings with teachers and we do design exercises around assessment, I tell them that the real work ahead of them is options 2 and 3.


    09:53

    Eric Hudson
    Either you're going to need to redesign your assessments so that they appeal to students or work with students in a way that discourages AI use, or you're going to have to learn how to integrate AI in a thoughtful, responsible way into assessments so that students learn how to use the technology productively and effectively. And it's this idea that 2 and 3 are really about design, and not necessarily about AI, that makes this priority really important to me. Okay, great. Let's do the last one. So the last priority is vision over decisions. You know, when I work with school leaders, a lot of them feel very overwhelmed by decisions they need to make. Which tool should we buy? What should I do about the student who cheated? Should we allow faculty or teachers to write college recommendations using generative AI?


    10:44

    Eric Hudson
    And it just feels like this kind of overflowing tidal wave of decisions. But if you make decisions without a vision, all a decision does is move you to the next decision. And so I really encourage school leaders especially to get clear about their vision. And I really like this kind of articulation of the difference between vision and mission. A vision is a future objective. What are we working towards? What kind of impact do we want our schools or our organizations to have on the world 5, 10, 50 years from now? What do we hope to accomplish someday? Then, how might generative AI support us in working towards that vision? What role can it play in supporting our vision, so that we're not making decisions about generative AI, we're making decisions about how generative AI affects our vision?


    11:42

    Eric Hudson
    Our mission as schools and organizations is to try to live out and work towards that vision every day. And so when we make those day to day decisions about generative AI that feel relentless and overwhelming, how are we making sure that those decisions are really aligned to both our mission and our vision, and that we're working towards those things all the time? And so that's a really quick overview of the different priorities. I'm looking forward to talking more about it. Amanda, Mandy, others, if you want me to go deep and give specifics, I'm happy to, but I'm looking forward to talking more.


    12:21

    Amanda Bickerstaff
    Yeah, no, this is great. Maybe let's go back to the very first slide. Mandy, let's go back there and use that as a kind of foundational place for us to go. And Mandy, come on screen too. First of all, everyone in the chat, we love it. Please continue to have the conversation. But one more, Mandy, let's go to the fourth. And so some things that were interesting in the chat, but also for us, is that we actually see augmentation not even as the end goal; empowerment is even a step further. Like what has just never been possible before, and what that could really mean. And so I think that there's even a bigger goal potentially as the technology develops.


    13:05

    Amanda Bickerstaff
    I think also, I mean, Mandy can talk about this, but we don't do policy writing. We do guidance writing, and in every set of guidance we write, AI literacy is a core component. So, Mandy, you want to talk a little bit about why we kind of avoid that idea of policy?


    13:20

    Mandy DePriest
    Yeah, we just kind of think that policy is a much more prescriptive and formal regulation around what is a really fluid and developing technology. And so it can be difficult to go back and revise formal policy that's gone through like board approval and things like that in response to advances in the field. And so we generally try to steer people towards the language of like, guidance or guidelines, because that can be more like, agile and responsive to developments as they come along.


    13:53

    Amanda Bickerstaff
    Yeah, go ahead.


    13:54

    Eric Hudson
    I totally agree. I mean, I think the challenge with policy is that, like you said, it's kind of very bureaucratic, but also it's about control. And I don't think that's kind of the right approach when we're talking about kind of working with people and using this technology.


    14:10

    Amanda Bickerstaff
    Oh, my gosh, yeah. And someone actually asked in the chat about student use. And I know that at this stage that's a big question, but the idea of banning or controlling these tools, or creating this kind of very punitive system, is going to be very difficult, because what we don't recognize is just how ubiquitous these tools are going to be. We talk a lot about how generative AI is more like the Internet or electricity than it is like an app or a device. It's going to be the underlying power behind things you don't even realize are generative AI. And so the idea that you would


    14:46

    Amanda Bickerstaff
    have a set of policies that is really about a locus of control is going to be very difficult to make actionable, but also shelf stable. And I think that's where the literacy component comes in. Because if you focus on literacy, those are mindsets, those are foundational skills. Those are trying things out and seeing what works or doesn't work, and actually having meaningful conversations with teachers and students about appropriate use. And I think that goes to the design piece with Maha. There was a conversation about the difference between AI resistant assessments and AI enabled assessments. But I think realistically, we have to redesign, like, school. I mean, I think that this is a big part of the human centered approach.


    15:30

    Amanda Bickerstaff
    I think we've always wanted school to be more human centered, and it's really hard to do. I don't know, I feel like Mandy and Eric probably agree with me. We would have loved it if school could have been more human centered and more student centered and built on relationships. But all the systems in place have made that pretty difficult to do. And I think that this is both a forcing mechanism and an opportunity to actually redesign the way we assess students.


    15:55

    Mandy DePriest
    Well, humans are scary and unreliable, right? So if we center them and we allow them kind of to operate free form in this kind of amorphous space, that feels like relinquishing control. And so if we have clear policies and we have, you know, clear hierarchies and bureaucracies and things regulating things, that can feel like a safer space. But I think it ends up being kind of self defeating because then, like I said, you're not agile enough to respond. You end up in like a surveillance relationship or where you're constantly trying to enforce the policy and monitor and things like that, instead of leaning into the positive innovations and transformations that are available with this technology.


    16:35

    Eric Hudson
    I mean, that's kind of the first principle of student centered education: you take an assets based approach, not a deficit based approach. Right. And I think that you have to do that with generative AI too, if you are interested in building literacy in your community. You cannot only live in the land of the don'ts. You have to live in the land of the do's also. And you have to model some of those affirmative actions with students. That's why I really hope that every teacher can think about one assessment they do over the course of a school year that they can make AI assisted or AI enabled, to use your term, Amanda.


    17:14

    Eric Hudson
    Because I think we do need to start thinking about how we're going to expose students to this technology in a productive, ethical way, even if a lot of our other assessments are designed to be more AI resistant. But either pathway you take, you're beginning from this place of: who are my students, what do they need, and how do I engage them really actively in the process so that it's either really hard or, more importantly, just not appealing to them to delegate this work to a chatbot?


    17:47

    Amanda Bickerstaff
    Absolutely.


    17:47

    Mandy DePriest
    And I'll build on what you said just then, Eric, about everyone choosing just one assignment that you can think of how to AI empower. Because there is space here, right? You don't have to go whole hog into everything's all AI all the time. You can still keep the assignments that you've always done. You don't have to totally throw out all of your practice, but you can just look for one or two small, controlled instances where you can introduce it in a careful and intentional way, and then maybe look at making some of the others less AI vulnerable, in a more traditional sense that doesn't incorporate the technology. There's agency here is what I'm trying to say. You have choice. Not everything has to be AI enhanced.


    18:25

    Mandy DePriest
    But just creating that space on an assignment by assignment basis I think is a really helpful mindset.


    18:31

    Eric Hudson
    Totally, totally.


    18:33

    Amanda Bickerstaff
    I'm going to give a caveat, though: when we say one assessment, we're talking about thinking about generative AI in spaces in which students are developmentally at the age to be able to use these tools, and also within the terms of service. So I just want to always make sure that we're not telling you that if you have a seven year old you're going to be giving them ChatGPT. But what we are saying is that especially if you're teaching or working with high school and or college age, community college, or vocational students, that's a space where I think the redesign becomes very important.


    19:07

    Amanda Bickerstaff
    Whereas the literacy work that you're doing and the modeling work is much more important in the kind of K to 6, K to 7 realm, where you can do modeling and you can have opportunities for students that need extra support to potentially use the tools with you in an agentic way. Mandy, so like we talk about the idea that if there's a really difficult thing, have a NotebookLM set up that you are guiding students through, or you're creating a podcast or something along those lines. So I do think that we always just need to think about safety here and also find that balance. So the last thing I'll say, let's come off share so we can see everybody.


    19:38

    Amanda Bickerstaff
    But the idea of vision I think is really important, because one of the things that we always do when we work with district partners or school partners is just be like, what are you already doing that generative AI can help you with? And then what are the things that you've always wanted to do but haven't had the resources, the time, the support? And I think that's where the idea of vision becomes very positive. Because if you think that generative AI is your end state, again, it's not. The Internet was never an end state. The Internet was an enabler and sometimes a blocker. It was empowering, and it also was something that was potentially harmful. That's what we want you to think about.


    20:18

    Amanda Bickerstaff
    So when Eric pointed out this idea of vision, we want that vision to not be, like, generative AI is going to save everything or whatever. It's, what are we already doing? Okay, we have a big focus on future ready students. Then how are we using generative AI in meaningful ways to get students more ready for the future that is fast coming? And I'm going to tell you right now, even just that slight reframing is going to make this so much easier for people to understand what you're trying to do, and not feel like the AI is just going to take over. But now, okay, where is that human centered approach? Where is that approach where the vision itself is the key and the AI is the enabler?


    20:57

    Eric Hudson
    Yeah, and you know, I think we keep coming back in all these comments to this idea of agency and reminding ourselves that we have agency over this technology. And that visioning process you described really tries to get people to remember, oh, we have our own goals, we have our own purposes as institutions. It's almost like Understanding by Design for an institution. You start with your goal, you start with a vision, and then you work backwards from there. And generative AI is just one of many different tools that you can leverage to help you work towards that vision.


    21:30

    Eric Hudson
    And the other thing I would add in here is that when I work with schools on visioning, when I work with teachers on purpose finding as it relates to technology, they are more motivated to learn the tool and experiment with it, because they start to see how, oh, this is something I can use for my own purposes. There's not one way to use this. The thing that is amazing about generative AI is it's accessible, it's powerful, and it's flexible. Right. And we don't need to be master coders, we don't even really need to be prompt engineers anymore, to use these tools well. It's more about coming in with clarity about purpose and clarity about what you need and clarity about the questions you want to ask. Then you can really find your own way through it.


    22:16

    Mandy DePriest
    Well, and I'll add to that, Eric, that I think it's really valuable for students to see different approaches and degrees of integration. It's not just, here's AI, you have to know it so you can have a job. It's, we can make choices about how we're going to use this. We might use it in one area, but not in another, and here's why. I think we owe it to them to model this and have these conversations, because this is what we have wrought and what we are sending them off into.


    22:43

    Amanda Bickerstaff
    Absolutely. I just want to agree, but I'm going to make one point. I think the prompt engineering still does matter, maybe not in the big P, big E prompt engineering sense, but more in the sense that, you know, there was just a paper that came out showing that if you use very basic prompting with even the best model, you're going to get very basic answers. It's only when you use better questioning techniques, better prompting techniques, that you actually can get the value from every model, but especially the ones that are the best in the market. So I do actually think that's where teaching kids and teachers, and you all here, comes in: this is a new language. This is a new computer science language.


    23:21

    Amanda Bickerstaff
    Even though it feels like it's just talking, it actually does make a huge difference to actually have this opportunity to learn how to reframe, learn how to ask good questions, know how to take feedback, know how to be patient. Because sometimes these tools can be a little bit frustrating. But I do think it gets really interesting when you think about that. That's where the AI literacy component can come in, which I think is really important.


    23:45

    Eric Hudson
    And I totally agree. And you know, I almost prefer the term question formation over prompt engineering, especially because I work with educators, and that's something that teachers do every day no matter what. And I think that's what makes a lot of teachers good prompters: they know how to form questions. They know how to seek information from their students or from a resource or whatever it is. And I think there's real overlap between the work teachers do every day and being good at using generative AI. Right. I think there's a real opportunity there.


    24:22

    Amanda Bickerstaff
    Yeah, we often are like, you know, use your superpowers. Your superpowers are patience, questioning, reframing, evaluation and feedback. And I always joke that you use AI in a way that's conversational. So I would never go up to Mandy and be like, here's a question, I don't like your answer, I'm going to leave. I'm not going to stop the conversation. I would definitely never do that with a colleague or a student. It's like, okay, I don't like it, let's reframe. And I think that, you know, it's very funny.


    24:54

    Amanda Bickerstaff
    I've had two debates about AI use in the last two weeks on actual national news in the UK and the US, and one of the things that I think is really interesting, because we're in a shift to durable skills at the moment, is that really good prompting or questioning is a durable skill. It is critical thinking and evaluation. And I promise you right now, if you teach an 11th grader how to prompt well, they're going to be like, why do I have to do it this way? Why do I have to keep reading? You mean I have to keep asking the same question a different way and evaluate?


    25:27

    Amanda Bickerstaff
    And if you can actually get them to where the value proposition is that the thing you come out with is going to be so much better and more meaningful and creative for you, you actually are building parallel skills that can be incredibly durable, even if generative AI wasn't what they were framed in. And I think that's really the opportunity here: how do we shift those mindsets away from this idea that, A, it's not for everyone or that it's going to take over, and toward the idea that the people that get the most out of AI are the people that are the most creative, the most experimental, the most patient. And I think that's where the real power can be.


    26:08

    Eric Hudson
    Totally agree, totally agree.


    26:10

    Amanda Bickerstaff
    So let's look. We're going to shift gears. All of us are big nerds, and so are you all. Okay, can I just say, our 500 people here are dropping some knowledge in the chat, and we will definitely get to the Q and A. But I just love how much everyone is sharing best practices. One of the things I think is really fascinating about this human centered moment is that when I taught, it was 21st century skills or the four Cs. Now it could be soft skills or durable skills; there's other names for it too. But all of those kinds of skills are things we've been talking about for at least, I would say, the last two decades.


    26:44

    Amanda Bickerstaff
    And I think we've been okay at doing that. I don't know, Mandy, you're probably the one that's been most recently in the classroom. I mean, we've been okay at it.


    26:53

    Mandy DePriest
    Mixed results, I would say. The standards movement is kind of in opposition to that a little bit. When you have these lists of skills, even when you try to incorporate durable skills within that, it can't help but pressure you to feel like you have to teach this list of things you have to know, which is no longer the way we engage with knowledge now that we have things like AI that can do a lot of that. Right. And so it's about finding ways to authentically teach durable skills along with your standards in a way that kind of brings those forward, which AI can be a wonderful tool for helping you do, including redesigning assignments and assessments.


    27:30

    Mandy DePriest
    We have our critical analysis activities for AI outputs that will help you boost your critical thinking skills while you're interacting with AI. It can help you redesign assessments. We have some prompts in our prompt library that will help you convert, you know, essays into discussions, or will help you put in your assessment and ask it to identify ways that you could make it more resistant to AI. So, you know, leverage that tool to augment your capabilities. And I think we'll get to a good place eventually.


    27:59

    Amanda Bickerstaff
    Absolutely. So I'm going to go to Eric. So when you think about the durable skills: the World Economic Forum's report released in January says we're going to have massive disruption. I always love this, though. It's like, by 2030, 170 million new jobs and 92 million lost jobs, and we don't know what those will be, but we're going to say it very loudly. I do think the scale is going to be about five years to actual disruption. But when you think about the opportunity of what this is going to mean, one of the things that the World Economic Forum has said is that the analytical and durable skills are going to be the most important.


    28:38

    Amanda Bickerstaff
    So when you think about that human centered approach, let's call it out, what are going to be the skills in the next five to 10 years that we're going to value more than anything else?


    28:46

    Eric Hudson
    Yeah, I mean, it's a great question. I'm in the middle of a series on my Substack about, you know, what are the durable, transferable skills related to AI. I do think first and foremost there is this decision making skill that really matters, because there are decisions to be made about when to use AI and when not to use AI, but also how to use AI. And that's related to literacy. Right? It's really about, do we know enough to be able to make decisions? My other thing is there's this wonderful book by Annie Murphy Paul called The Extended Mind. And she looks at a lot of the research about how we leverage our environment, our relationships and our own bodies to extend our cognitive capabilities.


    29:32

    Eric Hudson
    Sort of the classic example that we think of is like people who use their fingers to do math. Right. Or to count. That's an example of sort of leveraging an external tool to help our brain process. I think there's a real opportunity to think about how we help students understand how generative AI can extend their mind. This is that augmentation piece. What are ways that generative AI can help students see things in new ways, can help them concretize abstract ideas, can help them process large amounts of information?


    30:05

    Eric Hudson
    I mean, I think speaking five, ten years from now, I think about this idea that we're going to have wearables with AI inside of them, and this AI assistant is going to be able to take in our surroundings in real time in the same way that we are, and suddenly we're going to be able to engage with this other entity about those things. Well, that's kind of extending our minds in certain ways. Like how do we wrestle with that? What does it mean to work well with that phenomenon? And I think going back to this idea of really teaching students how to keep their own minds at the center and really be self aware and self regulated enough to be able to use the tool for certain things and then rely on themselves for other things.


    30:47

    Eric Hudson
    That's another one of the skills that I think is really going to matter.


    30:51

    Mandy DePriest
    Well, and also to build on that, what do we do with this extra capacity that we've freed up? You know, if we've extended our mind in this other way, are we then going to our phone and scrolling through TikTok, or how can we meaningfully engage, in ways that are uniquely human, with the knowledge and the environment around us?


    31:12

    Amanda Bickerstaff
    Yeah, and this is something I've been thinking a lot about, because we worked with a set of schools in Atlanta this last week on what assessment becomes. What we're finding is that the structures of school themselves, like what you talked about, Mandy, with the structure of standards, have really impacted our ability to teach durable skills in meaningful ways. And I do actually think it's really interesting, because our definition of AI ready would be an AI literacy plan for everyone within an organization, including parents, students, teachers and community, and then a set of responsible guidelines.


    31:54

    Amanda Bickerstaff
    And if you have those two things, you're doing well. But what happens is that when you do that, very quickly you start hitting those structures that don't allow for it. Because for our current assessment structures, our high stakes assessments, or even in college and graduate school the idea of a term paper or a research paper, there's now an easy button for that. There's an easy button for all these things, right? Was it Best Buy that had the easy button? Or Staples. Staples has the easy button. Well, now we have OpenAI's easy button. What's happening is that it's very clear that there's going to have to be not just an acclimatization, but actually a redefinition and a restructuring.


    32:33

    Amanda Bickerstaff
    One of the things that we found really powerful about the idea of the skills of the future is to just think about what we are going to be doing. I've been using this analogy of the 10,000 reps. You think of Malcolm Gladwell: to be an expert, you do something 10,000 times. And so what's going to be more important than ever is being able to do the first 500 reps, which are that foundational skill building. We have to set out and understand and be able to set something on a course. Right?


    33:01

    Amanda Bickerstaff
    And for us that means that if you don't know how to do it on your own, especially a foundational skill, then you should not be using AI as your only output, especially from K to 8. You just shouldn't. And if you want to build true expertise, you're going to have to do that on your own, regardless of your age. That's one. But then the middle 500 steps are going to be so important too, because your ability to redirect, to evaluate, to critically analyze and give feedback on something that you're creating, with AI or not, is going to be even more important than it is today. And then the last thing is those last 500 steps, which are about your ability to critically analyze whether the thing you've created is truly fit for purpose.


    33:44

    Amanda Bickerstaff
    Is this something that is meaningful, fit for purpose? And that is where I think school has to go: we're going to have to teach kids that even though an AI system can do something for you, your only ability to set it on the right path, to evaluate it, and to identify if it's worthy comes from having those skills yourselves. And I think that's going to be really important.


    34:04

    Eric Hudson
    I totally agree. And we have to be explicit with students about those things. And I think this is where a lot of the fundamental things that school teaches, critical thinking, literacy in domains like reading and writing, those skills still very much matter. We should still absolutely be teaching those things, most often, especially with younger students, without the influence of generative AI. But as students get older... I work with teachers of writing a lot. I was an English teacher for 12 years, and so schools often put me in a room with their teachers of writing. And I say, look, writing still matters very much. Human driven writing still very much matters. But there is a new category of writing emerging, let's call it hybrid writing, where people are using AI to write.


    34:54

    Eric Hudson
    That's the easy button you talked about, but it's also sort of below the easy button, like different and more complex, nuanced collaborations with AI. And I think we do have to introduce older students, high school and higher ed, to this idea that hybrid writing is something you are going to encounter in your life. At certain points you might have to decide whether human writing or hybrid writing is for you, and sometimes you might be asked, for different reasons, to hybrid write. And so what does it mean to collaborate effectively in that hybrid way, in the context of knowing what it means to write effectively? I think these are some of the new things that we have to start integrating into those long standing, durable skills that we teach in school.


    35:46

    Amanda Bickerstaff
    And Mandy, I mean, from your perspective, you were elementary all the way, right? And I know we have people from higher ed here, and I think it's going to be so interesting to see how higher education prepares students for a future that is changing so quickly; it's going to be a huge question. And I think doing AI literacy work as quickly as you can with your young people and also with your staff is going to be important. But when I think about human centered approaches in K5, K6, the amount of time where we have kids as captive audiences, where we can limit the amount of technology they have in meaningful ways, or open it up.


    36:30

    Amanda Bickerstaff
    I just can't get past how important I think that time is going to be.


    36:35

    Mandy DePriest
    Well, in elementary, kids are just starting to define their identities as learners and how they are going to interact with school and knowledge and these materials. And while we certainly wouldn't want to put a second grader on ChatGPT, like you were saying, we can use that age range to kind of start building this awareness and these attitudes and these mindsets that lay the foundation for more meaningful work later on. I think it's important that we distinguish between the way we as adults, who have already been through school, approach using AI versus kids, who have not yet developed that sensibility and understanding and even just the content abilities.


    37:15

    Mandy DePriest
    Like, if I can't write on my own, how am I going to learn to write hybrid with AI? Which, I love that notion of treating hybrid writing as its own genre that we maybe intentionally address. But we can start planting those seeds early so that kids know this is a thing, we know you've heard of it. We've had anecdotes from, what, third, fourth grade teachers, Amanda, talking about their students' awareness of AI. Didn't one of them call it Al? I think they were like, do you know?


    37:44

    Amanda Bickerstaff
    Oh yeah. So we do have a funny story where a second grade teacher walked up to me during a session in North Carolina and was like, I have to tell you a story. My second grader is like, you know, I have a new best friend on Snapchat. And she's like, why are you on Snapchat? You're 8. But then he was like, yeah, he's super funny, he tells me jokes, he gives me advice. And she's like, who is Al? What 8 year old has a best friend named Al? So he opens up Snapchat on his iPad, of course, and what he's got is My AI, the Snapchat AI that's been powered by ChatGPT for almost two years, and he was like, this is my best friend.


    38:20

    Amanda Bickerstaff
    And we were sitting there, and she used it as a teachable moment. But when we do our professional development sessions, we ask people, who has heard of Snapchat's AI as an adult? And people are like, no. And this has been the most popular AI tool for young people across the world for the last two years. And I think what we don't recognize sometimes is that we think we have more time than we do. And I don't want to discourage anybody, but going back to the idea of the Internet as more of a correlate here: if you are buying your kid or your graduate student a new iPhone, because they're


    38:59

    Amanda Bickerstaff
    graduate students, they probably need you to buy their iPhone, it's going to have Apple Intelligence in it. It's going to have a generative model, right? And I think that's where it gets really interesting. But I'm going to hold just one second. So everyone, we're coming up close to time. Put some questions into the Q and A, and know that the recording will be shared. We'll also make sure you get some of these resources, so don't worry about the chat. But if you have something specifically for us in the chat, put it in the Q and A. But I think, Eric, you had something you wanted to respond to.


    39:30

    Eric Hudson
    I was going to say, super quickly, on the idea of younger students: I really agree that we need to focus on core skills. And I think the conversation with them is less about how to use AI and more just that AI is present. I mean, generative AI models are showing up in toys and games, and it's in Roblox. And so I think that also is an age group where parent education feels super important, just making parents of these young students aware. An elementary school teacher said once when I was doing a training, oh, for this age group, we're talking about AI readiness as opposed to AI literacy. And I kind of liked that distinction: what do young students need to be ready for this world that they're going to enter?


    40:14

    Amanda Bickerstaff
    Yeah, no, absolutely. And first of all, I just want to say, we clearly care, right? Because you all have stayed for 40 minutes, so clearly there's a very large number of people who care. I think one of the biggest things that we're learning is, how are we actually making this stick? So before we go to the questions, I think we'll just go around. I'll let Mandy be the proxy for AI for Education, but Eric, what do you do that helps this stick? What are the kinds of things that you've seen be most powerful? And then Mandy can do the same, and then we'll go to Q and A.


    40:44

    Eric Hudson
    I mean, the most powerful stuff I do in my work with teachers is I try not to do one and done stuff. Right. I'll work with cohorts of educators over a period of time; we'll meet in person, then we'll do Zoom a few times. And they're working on projects related to their own workflow, right. And they're thinking about, how might AI help me solve this problem or help me do this new thing? How might I model something for my students? And they get support over a period of time to do that. But I think most importantly, the goal of that work with me is ultimately peer based PD, that they're going to be able to demonstrate to their colleagues what they did, how they did it, what they learned.


    41:28

    Eric Hudson
    I mean, I think this kind of communal professional learning is so critical to build adult capacity and adult literacy. We know all the research on PD, and I know you all know this: the most effective PD is job embedded, peer based and relevant. Right. And so I think if we can empower teachers and support teachers to ultimately teach each other, that's really the stuff that I've seen work the best.


    41:58

    Amanda Bickerstaff
    Absolutely. And then Mandy.


    42:00

    Mandy DePriest
    Well, I mean, I could say the easy thing, which is giving people hands on practice with a tool. A lot of times it's their first time using it and it's so fun to be there for this kind of awakening moment. But I will, since that's the easy thing, I will go one further and say, you know, we strive really hard to take this approach of like non prescriptive. We're not here to tell you what you have to do. We're here to show you what you can do and then empower you to apply your own professional judgment to that, to decide what that needs to look like best in your context.


    42:31

    Mandy DePriest
    Because I think that immediately breaks some barriers down, if people have already kind of written a script in their mind about how they're going to respond to and feel about AI use, to say, no, we're going to honor your opinion. We're just here to build your AI literacy, because it is empowering to you as educators. So that's it, I mean, unless, Amanda, there's anything else you would add for us.


    42:50

    Amanda Bickerstaff
    I think the things that we see that really work are that we don't expect anyone has AI literacy or a common understanding. So our flagship professional development, which we've done a gazillion times, we actually won't work with a partner unless some version of that is part of it. And I think that's been really powerful, and that's where Mandy's talking about those hands on opportunities and level setting. And then I think the second thing that we've seen is that our new train-the-trainer that we launched in January is about building internal capacity around gen AI literacy. And I'm pretty sure this is the first time it's ever been done. We have a course that's just about gen AI literacy training, which we have loved putting together.


    43:29

    Amanda Bickerstaff
    But what's been really interesting is it's been both individuals and groups of people from schools, districts or faculty working together. And the outcome is actually their own contextual PD, built on some of our resources and what they've learned. And you can just see the shift that's happening, where people feel more confident when they go back. And actually a lot of them we still work with, right, we still do PD with those organizations, but there's that internal capacity. Because the thing is, you might be in here because your faculty or your admin said you're now the AI expert, because you're comfortable with learning management systems, and now you have to figure out all the things about generative AI while also going through a learning management system shift.


    44:14

    Amanda Bickerstaff
    And so what we think the next stage is, is really how do you prepare and support the people that are in those change management roles, that are embedded, because like Eric and us, we come in but we are not able to sustain it. And I think that this has to be sustainable.


    44:30

    Eric Hudson
    Yeah, I couldn't agree more. I think the most important role of people like us, who go to schools and do direct work with teachers and leaders and parents and communities, is empowerment. We have to reframe and set a mindset that puts people in a position to really drive the learning forward. Because you're right, we can't be there all the time. As much as I might want to be in all the schools I work with all the time, the only way this scales out is if we all kind of dive in together.


    44:56

    Amanda Bickerstaff
    Yeah, absolutely. I just want to shout out some of our train-the-trainer peeps from January who are here. So Dale and Wendy and a couple others, we love that you keep hanging out with us. But I do want to say, if you have to head out, we'll do the Q and A now, but everything will be recorded. If you're going to hang out, we appreciate you. But also, I know it's the middle of the night for some of you, so please feel free to go. But what we're going to do is actually start to talk a little bit about the Q and A.


    45:26

    Amanda Bickerstaff
    And so one of the ones that I see really coming out: first of all, I will tell you right now that we will definitely do an AI and elementary education focused one. Flora, Macarena, and a couple other people have been asking, so look out for that. We'll definitely do that because we do know it's quite different. And actually the AI literacy guide we're working on will have a whole section on that. And then Mandy has on the back burner a set of new K5 lessons that are similar to our 7 through 12 lessons that Carmen shared in the chat. So know that's coming for you. So that's an easy one. Well, not easy for Mandy, but it.


    46:00

    Mandy DePriest
Definitely. I'm trying to make them too perfect, I have to admit.


    46:02

    Amanda Bickerstaff
Yeah, but you love it though. I mean, you're an elementary person at heart. I will say, everyone, we're kind of crazy people about content quality, but we're going to keep pushing. So that's that. But then there's a really interesting question about faculty and colleges. We've been doing more work with faculty in colleges, and it does look like there's more pushback from faculty in general than there is from K-12 teachers. K-12 teachers are kind of like, let's figure this out, even if I'm not going to go fast. In higher ed there's actually a lot more resistance and shutting it down. So, Eric, do you have any thoughts about how to shift that? I've got a couple.


    46:46

    Amanda Bickerstaff
But do you have anything that you'd...


    46:47

    Eric Hudson
...like to share on sort of combating resistance?


    46:50

    Amanda Bickerstaff
Yes, it's very knee-jerk. "From my cold, dead hands, we're not changing, kids are not using AI."


    46:57

    Eric Hudson
Yeah, I mean, it sounds like the way you do your flagship and the way I do my intro are similar. I do think demystifying the tool is so important: doing some level setting, getting people in a room, letting them think about their goals and try things with the tool, and trying to lower the temperature around the conversation. In my experience, walking into a room, there's a lot of emotion associated with this technology, and that's valid. But once you lower the stakes, let's be playful, let's try some things, let's think about our own needs rather than these big-picture things.


    47:35

    Eric Hudson
People tend to engage a little bit more, they tend to explore it a little bit more, because you're giving them permission to learn through doing as opposed to talking in the abstract. I find it really unhelpful to start an AI conversation with a debate. Do you know what I mean? It's: how do we explore first, and we can debate later, so that we're all operating off the same information.


    48:05

    Amanda Bickerstaff
Yeah. Mandy could talk about it, but the way we frame it is a constructive myths-and-facts session where we hit most of the concerns, and we don't let anyone out of the room without knowing about AI detection. Mandy, do you have anything like that specifically for faculty?


    48:21

    Mandy DePriest
    Well, I don't know if there are any Ted Lasso fans in the chat, but I love his saying about, don't get mad, get curious.


    48:30

    Eric Hudson
    Yeah.


    48:31

    Mandy DePriest
And I personally have a lot of compassion for higher ed. They've spent years developing their expertise. I think they're much more personally invested in their content area than a lot of K-12 teachers are, just because of that time they've sunk into it. And this definitely feels like the rug has been pulled out from under them and all of that work has been devalued. So I like to give people an opportunity to share a story about why they feel the way they do. And in the context of our myths and facts, we try to proactively head off the objections: we know what you're thinking about.


    49:03

    Mandy DePriest
Here's the answer to that. A lot of times we get some really rich discussions out of the myths and facts section, with people talking about their experience and things like that. So I think it's about letting people feel heard and giving them the option to walk away; we're just going to play with it today. Eric's Substack is wonderful, he had a post on playfulness that came out most recently, and I highly recommend it to everyone. We're just going to play today, it's totally low stakes, and then you can walk away and never think about it again if you really want to. But hopefully that will light a spark. The door just has to open a crack, you know, and that's a good place to start.


    49:39

    Amanda Bickerstaff
Yeah. And if you are struggling, like I saw Kelly doing the hard work as an academic librarian, and others in the Q&A, I think with higher education one of the things we've found really beneficial is establishing what the future is going to mean for your young people. And not the future in 20 years or after I retire, but literally now. There's a Cengage report from last year that says 62% of hiring managers are already looking for AI skills, 70% of them believe your institution is training these students in AI literacy skills, and 55% of the recent college graduates who were surveyed said they were actively discouraged from using AI in higher education.


    50:28

    Amanda Bickerstaff
So I think that really helps, because we make the joke that, Eric, you probably remember this, you started putting "I'm great at Microsoft Office" on your resume, or "I'm good at Gmail." We did not stop kids from learning how to use Excel. What we need to do here is learn how to use these tools as tools and not as replacements. And that inevitability and that importance come back to: if you're in education, what is the goal of what you're trying to do? If you are teaching young people, it should be for them to have happy, healthy, fulfilled lives with lifelong learning as an opportunity.


    51:07

    Amanda Bickerstaff
But then I also think it's about telling people that AI detectors are not going to work the same way plagiarism detectors work. The idea that we could just do what we did with Wikipedia is no longer possible, and explaining how the technology works establishes a base of: oh wait, I can't just say no, so I have to figure something out. And I will tell you right now, if it's done in the way Mandy and Eric just described, where it's safe, it's experimental, it is not judgy, then what happens is a natural shift into: oh wait, this could actually be helpful, or I could use this.


    51:42

    Amanda Bickerstaff
Or: I've always had this term paper that kids, or college freshmen, whatever it is, really struggle with, how am I going to do this differently? That's where we get so excited about the approach: if it's set up the right way, it becomes inevitable that we have to do something. But then what we give you is the on-ramp into what that something can become, without dictating what it looks like. And I think that is going to be so important. Because if you have a teacher or a faculty member who says, we're just going to use Turnitin or GPTZero and we're going to catch AI cheating, there's a paper that shows professors were significantly worse at identifying AI-written work than AI detectors.


    52:30

    Amanda Bickerstaff
And the AI detectors weren't very good either. And the combination of the two was even worse; they actually did worse together.


    52:36

    Eric Hudson
That is where I think it is beneficial, especially with faculty who've been around for a while, like tenured faculty, to think back on previous examples. Think back to the dawn of the internet, the smartphone, social media, even the calculator if you want to go back that far. All those things met resistance. All those things were deeply disruptive. And rather than debate whether or not they're good or bad, how did we respond to those disruptions? What did we do well? What did we not do well? And how can we apply those reflections to generative AI? I can guarantee you that our initial response to the internet wouldn't be the same knowing what we know now, that it could be a really powerful, democratizing space. And I think that framing of, let's remember.


    53:23

    Eric Hudson
This is not coming out of nowhere. We can learn from previous technological disruptions and think about how we want to improve our approach for this go-around.


    53:35

    Amanda Bickerstaff
Yeah, and Wendy said that too. We establish that pretty much every professor is generation AI: if you were born after 1955, AI has been something in the world for your whole life, and if you've used any device over the last 20 years, you've interacted with artificial intelligence. So I do think that's pretty interesting. Okay, we have time for, I think, two more questions. I know Mandy's looking for that Cengage report; I think it's called the Hireability report, Mandy?


    54:03

    Mandy DePriest
I think it was their signature Graduate Employability Report.


    54:06

    Amanda Bickerstaff
Yeah, that's the one. So let's see, what's another good question? Okay, let's talk about writing. There are a couple of questions about that "easy button" for writing. Fascinating, everybody: what AI is best at is two things that never get put in the same sentence when it comes to academia, software engineering and writing. They're the weirdest bedfellows, but that is what generative AI is best at. Around 40% of all generative AI use is for coding, and another 35% or so is for written communication, and then there's image and vision and other things.


    54:50

    Amanda Bickerstaff
So what do we have to do to think about both setting kids up to learn how to write and also assessing them, especially at the higher levels where that AI use is going to ramp up?


    55:02

    Eric Hudson
Yeah, when you look at all the research on what makes writing effective and why we teach writing in the first place, it all points to the importance of process over product. Right? The process we go through in coming up with an idea, composing the idea, curating evidence, organizing that evidence is far more important than the artifact we produce at the end. Something I'm seeing teachers of writing start to do is help students make their process visible, and therefore accessible, through something like a process portfolio, where students do the process components of writing maybe in an AI-free way: in class, or by hand, or in small groups. They compose that portfolio and then bring it with them into the summative assessment.


    55:55

    Eric Hudson
Because when we talk about our worries about the easy button, we have to remember that what is valuable about writing is not what students produce at the end, but the process they go through in getting there. If we can intervene in that process as educators, everything I've seen and heard from students is that when we invest in and assess them on the process, they are less inclined to hit that easy button at the end and just generate the essay, because they've put energy and time and thought into their work. And I think it's really important to name that for students: when they're invested in and motivated by the task they've been assigned, they don't want to delegate that work to a bot. They don't necessarily want to hit that easy button.


    56:47

    Eric Hudson
So if we can foreground the process, help students learn how to make the process visible, and learn how to assess the process in a student-centered way, I think that's the future of the teaching of writing right now.


    57:03

    Amanda Bickerstaff
Mandy, did you have anything additional?


    57:04

    Mandy DePriest
Well, I love Joseph Moxley's Writing with AI course; he has shared the syllabus online. You know it, Eric. I'll link it in the chat, but it's really forward-thinking and balanced. He spends a lot of time building context for students on why you might not want to write with AI, and then he moves into assignments where you're doing a little bit of both. He looks at plagiarism and things like that in a really integrated way. So I'll drop the link in for all of the writing folks.


    57:34

    Eric Hudson
I'll just do an extra plug for that, because the course is open source and the whole thing is available. He breaks the course down into roughly eight creative challenges for students, some of which involve AI and some of which very explicitly do not. And the core question is: why does human agency matter when it comes to writing in the age of AI? That's a debate teachers of writing are having right now, and I love the idea of engaging students in that debate.


    58:01

    Amanda Bickerstaff
    Oh my gosh, yes.


    58:02

    Eric Hudson
Having them engage with teachers and construct some ideas and guidelines based on that. Because it is very disempowering for students to walk into a writing classroom and be told what is good for them and what is not. So that writing course is really about: I want to hear from students, and I want you to be rigorous in investigating the implications of this technology. I think it's a great model.


    58:29

    Amanda Bickerstaff
Absolutely. And we love student voice, we love student agency. I just want to say thank you; we're coming up on time. First of all, I can't believe that half of the roughly 300 people here hung out with us the whole hour, we appreciate it. Eric, thank you so much. We love your work; Mandy is definitely a stan and I am as well, so please check out Eric's work. We'll make sure the way to get in touch with you and your LinkedIn are in the follow-up email we send out tomorrow. I want to say thank you to Mandy, who is always happy to be tapped to do this work. And I just appreciate everybody in the audience; we can't do this without you.


    59:06

    Amanda Bickerstaff
And I cannot tell you how amazing it is to have 500 people who genuinely want to help each other and to listen and engage. We feel very lucky that's the case, and we wish you the best of luck in what you're doing. We really appreciate you all, and wherever you are, I hope you go to bed, eat some lunch, or have a good morning. We appreciate you, and we'll see you next time.
