Building Student AI Literacy

As artificial intelligence reshapes the educational landscape, it's essential for educators to develop the skills needed to prepare students for an AI-driven future. In this webinar, "Building Student AI Literacy," we introduced the key strategies and tools necessary to empower teachers to cultivate essential AI literacy skills.

Key topics included:

  • AI Literacy Fundamentals: Understand the crucial role of AI literacy in preparing students for future careers, fostering critical thinking, and promoting responsible digital citizenship in an AI-saturated world.

  • Essential AI Skills for Students: Dive deep into core competencies including:

    •   Understanding AI’s capabilities and limitations

    •   Crafting effective AI prompts

    •   Identifying appropriate use cases for AI

    •   Developing data security and privacy awareness

    •   Exploring ethical considerations in AI use

    •   Critically evaluating AI outputs

  • Curriculum Integration Strategies: Learn innovative approaches to seamlessly incorporate AI literacy into existing curricula through hands-on activities and projects.

Don't miss this opportunity to stay ahead of the curve and prepare your students for the future of education and work, ensuring they can navigate, critically evaluate, and ethically engage with AI technologies in their academic, personal, and future professional lives.

AI Summary Notes:

📊 Current AI Usage in Education (08:22 - 19:35)

  • Approximately 50% of K-12 students have used generative AI tools

  • Higher education students use AI more frequently

  • Students use AI weekly for various tasks, including writing essays and studying

  • Over 50% of employers look for AI literacy skills in hiring

  • 55% of college students were discouraged from using generative AI

🚀 AI Impact on Future Jobs (19:35 - 29:59)

  • New AI-related jobs emerging: AI/ML specialists, data curators, prompt engineers

  • AI literacy is crucial for students' future success

  • AI literacy fosters critical thinking skills

  • Ethical use of AI is emphasized over just focusing on cheating concerns

🔍 Building AI Literacy (29:59 - 41:49)

  • Two approaches: modeling AI use and learning with AI

  • Example: 'Catch the bot' game for 5th graders to improve writing skills

  • Importance of understanding AI limitations and hallucinations

  • Age-appropriate usage of AI tools is crucial

  • Legal and developmental considerations for AI use by students under 13

🛠️ Strategies for AI Integration (41:49 - 50:30)

  • Modeling AI use is important, especially for younger students

  • Critical evaluation of AI outputs is essential

  • Comparing human-generated and AI-generated content

  • Importance of subject area expertise in effective AI use

🧭 Ethical AI Use and Future Implications (50:30 - 58:50)

  • Introduction of 'Should I Use AI?' guide

  • Steps for ethical AI use: ask permission, choose the right tool, track progress

  • Importance of transparency and output verification

  • AI literacy is imperative for preparing students for a transformed future

  • Amanda Bickerstaff

    Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.

    Corey Layne Crouch

    Corey is the Chief Program Officer and a former high school English teacher, school principal, and edtech executive. She has over 20 years of experience leading classrooms, schools, and district teams to transformative change focused on equity and access for all students. As a founding public charter school leader, she ensured that 100% of seniors were accepted to a four-year college. Her focus now lies in assessing the broader K-16 edtech ecosystem, uniting stakeholders at all levels to build a more equitable and abundant future for all. She holds an MBA from Rice University and a BA from Rowan University.

  • Amanda Bickerstaff
    Hi, everyone. We're excited to have you here today. We're going to give everyone just a moment to get in, as it takes a little bit of time. But we're really excited to talk to you today about student AI literacy. It's something that we care a lot about here at AI for Education. And I'm really excited to have Corey, our Chief Program Officer; this is our first webinar together, which is amazing. Really excited to have you all here. As you know, if you've joined us before, we love for you to get involved, share resources, and also drop into the chat, as we already started to do, where you're from and what you do. And we're actually going to launch a poll in just a moment about a couple of questions that we have for all of you.

    Amanda Bickerstaff
    And I think that we have just about everybody here that's ready to go. So we're going to launch a poll right now. We'd love to know more about you: what do you do? And then we have two questions to get us started: how many kids do you think are actually using generative AI already, and how do you think students are using AI? So if you don't mind taking a little bit of time to answer these questions, we'd really appreciate that as we get started. I'm Amanda, the CEO and co-founder of AI for Education. As a former teacher, I have always advocated for student voice, and that's something that has been part of our business the entire time.


    Amanda Bickerstaff
    And we're really going to be focusing today on building student AI literacy. So I'm really excited to have Corey here with me. If you didn't hear before, this is our first webinar together, so I'm really excited to have you here and would love for you to introduce yourself.
    Corey Layne Crouch
    Thanks, Amanda. I can't believe that this is our first webinar together. I'm excited to be here, too. Hi, everybody. I'm Corey, the Chief Program Officer here at AI for Education. Like Amanda said, I am a former high school English teacher; I actually got my degree in elementary education, became a high school English teacher, and then was a founding school leader for a 6-12 school. After my school leadership days, I spent time in edtech and in school design, supporting innovation and the integration of emerging technologies into school models for the purpose of effective teaching and learning. So I'm excited to talk about this topic together today.


    Amanda Bickerstaff
    Absolutely. Almost everyone has answered the poll, so we'll give it just another minute. The ultimate goal of today is to look at what's happening in our schools and systems already, how we can start to think about what AI literacy is and can be, and then some strategies for you to start building it within your practice. I'm going to stop the poll now; if you didn't have a chance to respond, that's okay. There we go, everyone's getting that last-minute answer in, which is great. So I'm going to end the poll and share the results with everybody. Let's see what we have.


    Amanda Bickerstaff
    About 40% of people here are in K-12, but we've got higher ed, and we also have some of our nonprofit and edtech friends, and I always love our others. We definitely have to figure out what the others mean and start adding that. Then we asked what percentage of students you think use AI, and it looks like the majority of you landed somewhere above 10%. It is interesting to see people who believe that more than three quarters of students are using these tools. And then, I love this: we asked a lot of questions, including the one that's really big in people's minds, which is the idea of cheating.


    Amanda Bickerstaff
    We've got studying, with homework actually being the number one use you all picked, then cheating on projects, but also creative, entertainment, social-emotional, and entrepreneurial uses. So it's really great to see what we're all thinking. Moving to the next slide, let's actually talk about what's happening. There's been more and more research on students' use of AI. This is from December 2023; newer research shows very similar numbers, but about half of K-12 students, and this is more focused on high school, have said that they have used generative AI tools. And this is mirrored in the UK, where 79% of 7- to 17-year-olds said they used some sort of generative AI last year.


    Amanda Bickerstaff
    So we are seeing that somewhere between half and three quarters of students are probably using generative AI. We know this number goes up in higher education; students there are using it more, but we're also going to look at the fact that they're using tools that go beyond just ChatGPT. And if we go to the second stat, we also know that they're using it really often. This is now pretty sticky with students. This is from RAND, which did a follow-up survey in February of this year, where students said they're using these tools at least weekly. And what you'll notice again is the gap between K-12 and higher ed: the undergrads are using it the most.


    Amanda Bickerstaff
    And so what we have here is absolutely an opportunity, because this is already here. It's happening. Our students are experiencing generative AI tools, often without any support or training. They're using these tools just as they come out of the package, or because a kid sent them a TikTok or a WhatsApp message about how to use a tool to do something. So they're using these tools pretty often and without any true support. If we go to the next slide, it's actually really interesting to see how they're using the tools. What you'll notice, again across K-12 and undergrad: help writing essays and other writing assignments, studying for tests and quizzes (you can see how much that goes up with higher ed students), completing other types of schoolwork, and deepening subject knowledge.


    Amanda Bickerstaff
    I think that's really interesting for undergrads, because there's so much deeper content that it makes sense as a partner for learning, and then there's creating presentations. We know that teachers also struggle with creating presentations, so it's interesting to see that students are using these tools for that too. So you can see they're using this across everything from what we usually think of, writing an essay, all the way through deepening their content knowledge and even presenting their work. If we go to the next slide: one of the reasons why we believe that AI literacy for students is so important is that we're already in that moment. I don't know if everyone remembers when we started putting Microsoft Office on our resumes.


    Amanda Bickerstaff
    I don't know if you remember, Corey, saying, I'm really good at Excel. I even remember when I put Google Suite with Gmail on mine; it was cool to have Gmail. That's how old I am. But we're already at that moment for generative AI. This is from a report that came out this year, which said that more than half of employers are already looking to hire people who have basic AI literacy skills. They're even more willing to interview someone with AI literacy skills on their resume than someone without them, which is a remarkable turnaround so quickly. We also see a very strong signal of recent graduates believing that AI literacy should have been part of their learning in college.


    Amanda Bickerstaff
    But what we see at the bottom, for our higher ed folks here, is that 55% of those students were discouraged from using generative AI. So there's a really significant disconnect between the ways we're supporting students to be prepared for a future that is coming fast, with these AI literacy skills, and the real tension that more and more hiring is going to be based on people being augmented by generative AI to do more for their employers. So we're going to switch to the next slide, and then I'm going to hand it over to Corey to talk about why, and what this could mean.


    Amanda Bickerstaff
    Though there are a lot of people talking about the impact of generative AI on potential jobs, we wanted to focus on the positive here. We do know that a significant number of jobs will be eliminated or changed over the next decade because of generative AI, and we're already seeing some major disruption. Freelance jobs are down pretty significantly across the board, and we're starting to see impact in areas like customer service today. But on the other side, there are new jobs becoming more and more common that are considered STEM but are not as deeply technical as such roles have been in the past. Something like AI and machine learning specialist is really technical.


    Amanda Bickerstaff
    But something like a data curator or trainer, where you're actually helping train models, is about how good you are at evaluating content, and how good you are at reading and understanding what people will respond to and what's correct. Prompt engineering is natural language. You've got AI ethics roles becoming more and more popular, and AI systems administrators, where you help run and support AI systems across your business. All of these things are becoming more possible. So there's both a risk and an opportunity here. But we really want to focus today on how we are going to set students up for the future by building that foundational AI literacy. So, Corey, over to you.


    Corey Layne Crouch
    Yes, absolutely. Thanks, Amanda. So the case that Amanda just shared about why AI literacy matters comes down to this: we deeply believe that students need this skill set in order to have access and be successful in the future. I know we use the language "future ready," and sometimes that can feel a little cliche, but it really is about preparing them not just for a successful career and the workforce, but for living in the future, modern world. AI literacy also helps foster those critical thinking skills as AI becomes more and more prevalent in media and the world around us. Now, we're going to talk more about what AI literacy means and how to build it, but we might jump straight to thinking, oh, my


    Corey Layne Crouch
    students need to be using AI tools. That's not what we're saying, and we'll talk more about that later. But we do need to be aware that part of this is knowing what AI can produce and how we need to be critical about what we're consuming, digitally and in products, but also in our community and society. And part of that is promoting responsible, ethical use grounded in values, to help our students navigate what that future world is going to look like. Oh, and before we talk about what AI literacy is, building on that: another common narrative we hear when we're working with teachers and leaders and educators like yourselves, and as a former English teacher I understand how this comes up, is that AI is just about cheating.


    Corey Layne Crouch
    That it's an academic integrity concern, and our conversations with students therefore have to be about why it's important not to cheat and not to replace your own thinking with AI-generated content. Now, we're not saying don't have that conversation, by any means. But what we are saying is that AI in schools, and building AI literacy, is about so much more than the academic integrity component. It's about positive use cases and opportunities to augment and prepare our students for the future, like we were just saying. But we also recognize that there are a lot of risks and limitations for our students that we need to balance as we're building this skill set for ourselves and then for our students as well. And Amanda, definitely jump in; I know you love the icebergs.


    Amanda Bickerstaff
    I do. I mean, if you know me, which some of you may, I do love a metaphor. We built this slide and representation after thinking about media literacy: how does AI literacy support and/or extend media literacy? And one of the reasons we wanted to bring this up, and you may have seen that a student's family in Massachusetts is suing the school district because their student was failed for using AI on an assessment, is that it's so easy for cheating to take up all the air in the room.


    Amanda Bickerstaff
    And we do know it's a risk, whether that's kids being falsely accused of cheating by AI detectors that are not reliable, or students failing, or things along those lines. And we do believe that is really important. But the opportunity to teach media literacy and digital literacy at the same time you talk about AI literacy means that if we take a step back and look at the larger view, the things we've been trying to teach students forever are being critical consumers of technology and critical users who don't trust things blindly, who know when and when not to use different tools. All of that has been important for as long as we've had technology. So this is an opportunity to really dig into this.


    Amanda Bickerstaff
    And one of the things we've been saying a lot is that digital native does not mean digitally literate. Kids fall for scams more than their elderly counterparts; that's just what happens. AI native does not mean AI literate. These have to be intentional, deliberate actions that we take to build those literacies. And as Kate said in the chat, yes, this actually increases the importance of having these digital literacies. What we hope happens with something like this is that it starts to signal, wait a second, there are so many different ways in which we can start navigating this, whether it's students knowing when and how to use it or knowing when it can cause harm. And I think that's really what we want to do.


    Amanda Bickerstaff
    With this kind of approach today, we want to help you broaden your mindset so you can have these deeper conversations that are more and more necessary.


    Corey Layne Crouch
    Yeah, it's something that I think about, and I'll have to think about who brought this up first, but it's this idea that native doesn't mean literate. Young people today are born and grow up riding in cars. Cars are just around; they're not something invented in their lifetime. But just because they know about cars and have been in cars, we would never hand a teenager the keys and say, go for it, you've been around cars your entire life. No, we teach them the skills to be responsible and safe. We model that for them, and then we trust them to be responsible and safe based on what they've learned.


    Amanda Bickerstaff
    Absolutely. I'm sure my parents wish I was better at driving when I started. I'm just going to call that out. I was a terrible driver until I was, like, 18. So I'm sure my parents are like, maybe it would have been better if she had known more about cars before we put her in one.


    Corey Layne Crouch
    We all made some mistakes. My dad had to pay for some, you know, fender bender repairs in my high school years. So we make mistakes. But the metaphor still applies: we make mistakes, but we have initial guardrails and an understanding of what safe looks like, so that we're not going into it blindly.


    Amanda Bickerstaff
    Absolutely.


    Corey Layne Crouch
    So what is AI literacy? Part of what we want to do today is get more concrete and specific about what this means and what it looks like. We have this definition from our friends at Digital Promise, who have been doing a lot of really helpful work putting out what AI literacy means and a framework around it as well. The way that we think about it is really the knowledge, skills, dispositions, and understanding that help students use AI systems and tools, but also live within a world, both a digital world and the actual world, where they can evaluate and understand what those systems are producing.


    Amanda Bickerstaff
    Absolutely. And maybe just go back, because we also have our own definition, too; I'm actually going to ask Mandy to drop it in. I think we take a slightly different view than Digital Promise, in the sense that we focus a lot on dispositions and mindsets. For us, it's really about what it means to know how to use these tools, because these are going to be things that change the way we interact with technology at a fundamental level. This is why, when we work with schools and districts, we talk about starting with guidance over policy, because guidance is about mindsets and policy is about compliance. We believe very strongly in this. So if you look at the next slide.


    Amanda Bickerstaff
    There we go. This is our definition, which consists of those knowledge, skills, and dispositions, and we talk about three things: safely, ethically, and effectively, which we'll get to in a moment. But there are two steps to this. So, Corey, do you want to talk about step one?


    Corey Layne Crouch
    Absolutely. And this is something that I'm sure some of you have heard or seen us do. It's about building the foundational knowledge of what the technology is and what it isn't. That means defining what artificial intelligence is, the broader umbrella, and what generative artificial intelligence is and why it's different. Part of why it is different from other AI technologies is the role of the training data and how training data drives the technology's tendencies toward bias, hallucinations, knowledge cutoffs, et cetera. Even if that understanding is high-level and basic, we still think it's important to understand what the technology is and how it's developed, so that we can better understand its capabilities and limitations when we're engaging with it. That's the foundation.


    Corey Layne Crouch
    And then we build upon that foundation to develop what we like to call the ability to SEE AI, in a way that allows you to use it safely, effectively, and ethically. Amanda, do you want to share a little bit more about our SEE framework?


    Amanda Bickerstaff
    Absolutely. And I think Robert just said it: knowledge and competency are important building blocks, but then it has to be applied. So, Robert, you're absolutely on the same page with us. The SEE framework is something that we think about a lot, because what we have found is that literacy frameworks tend to focus on one or two of these, but not all three. And we actually put safely first, because this is really important. As Corey said about the importance of training data: AI systems' fuel is your data. The axiom is, if it's free, you're the product.


    Amanda Bickerstaff
    So when you're using NotebookLM to create a podcast, or you're using ChatGPT and giving it all sorts of data about your students or yourself, you're actually feeding that system, in terms of the ways it's going to be trained and potentially the ways it's going to train other systems. So the idea of data privacy and security being so important right now is really unprecedented. I was at a training recently where a counselor was so excited about the thought of creating really high-quality reference letters that she uploaded an entire student's resume, with all of their information: their name, their location, everything about that student. I went over, and she was like, this was great. And I asked, did you use the student's real information? And she said yes.


    Amanda Bickerstaff
    And it wasn't intentional; that's the thing. She wasn't trying to expose the student's information. But the opportunity to do this work more productively and faster, and potentially in a way that was more supportive of that specific student, was so compelling that she gave up a lot of that student's data privacy, and, in fact, could have been out of compliance with state and federal rules around student privacy. So we have to think about that safety, the same way that our students are potentially giving up a lot of their information. If you've used NotebookLM (we can drop in the video of what we did), you can create a podcast about things like your life. And one of the Google examples says, okay, put in your journals.


    Amanda Bickerstaff
    Well, if you put in your journals, you're giving up so much personally identifiable information, connected to you, into a system that is going to use it to train. So that's something to consider around the idea of safety. You also need to identify which tools you're using. There are so many fake ChatGPTs out there. In fact, if you want to try this out right now, go to the app store on your Android or iPhone and just type "ChatGPT," and look at how many fake ChatGPTs there are.


    Amanda Bickerstaff
    Some of them ask you to pay, but some of them are phishing, meaning they're potentially taking your information. So even knowing what is real and what to use is important. And the last thing is how we start thinking about safety, especially for students, around the relationships they're creating with chatbots. The third most popular chatbot after ChatGPT and Gemini is actually not Claude; it's Character AI, which offers 18 million avatars you can talk to. And if you're working with students of high school age or in college, look up AI girlfriends and AI boyfriends.


    Amanda Bickerstaff
    And so there's the idea of being able to draw that line between safely having relationships with humans, and knowing that on the other side, this is a technology that's been designed to be persuasive. It's designed to tell you what you want to hear, and it's not going to be thinking; it doesn't have those reasoning skills. So using these tools safely, even down to that level, is going to be really important.


    Corey Layne Crouch
    And that piece, if I can add, sits on top of the conversation that's coming more to light about the impact of products and tools and smartphones, et cetera, on the mental health and isolation of young people. Part of this is building student understanding that we want them to be critical consumers of the technologies, yes, in their educational and professional careers, but also just for themselves as whole people, for their wellness, really thinking about what serves them as humans and as human connection, in school and outside of school and in their social relationships as well. So this is about the classroom, yes, but, going back to the iceberg, it's much broader than that.


    Amanda Bickerstaff
    Absolutely. And I think it's also just a cool space to get into with students: how they're using it and how they're thinking about it. Because you can learn so much about why students are going to bots for AI girlfriends and boyfriends. We know that there is a social loneliness epidemic happening among our young people, especially after COVID. So there are really strong reasons, and there's some research showing that balanced use can be supportive, but it should be intentional, with discussion around it. The second thing is effectively. And we see this all the time: the biggest thing in our trainings is not just giving people the information, but actually having them first learn about AI and then learn with AI.


    Amanda Bickerstaff
    But even more than that, learning with AI in a meaningful, effective way will change...


    Corey Layne Crouch
    It is radical.


    Amanda Bickerstaff
    ...what you will see in terms of people's really deep knowledge. Students hearing about generative AI is very different from having it modeled, or, if it's age appropriate, actually using the tools. When you are typing into ChatGPT, you are coding with natural language. I've been joking lately that we're going to give everybody in our trainings a junior software engineer badge, because the new language of coding is English, and other natural languages. So understanding that the way we ask questions, give feedback, and structure our conversations with a chatbot has an enormous impact on the value and reliability of the output, and seeing that in action, is really important. You also always need to evaluate the AI outputs.


    Amanda Bickerstaff
    In fact, we were in a district just this week, and there was a student advisory panel where the kids brought up multiple times that they knew the AI was and would be wrong, but they didn't know how to evaluate it because they didn't have enough expertise. So they're recognizing that there will be outputs that are incorrect or hallucinated, but they needed the tools to understand how to avoid that or how to double-check their work. The third piece is knowing when and when not to use it. Sometimes it's better to start somewhere yourself and then pull in the tool, or to come in fully prepared before you start using these chatbots. And it's different for each person. I know that Corey and I use generative AI differently.


    Amanda Bickerstaff
    I know that Mandy and our team use it differently than we both do. It really is about our specific needs; this idea of IEPs for everyone, matching what the individual needs with these tools, is going to be more and more important. And the last thing is finding where the most worth is, especially at this early stage. There's this idea of the jagged frontier, or the discoverability of these tools. The technology is new, it is weird, and it doesn't always work as expected. You actually have to use the tools to figure out what they're good at and what they aren't, so that you know where to go back and start using them. And then maybe in six months, you try again that thing you always wanted to do.


    Amanda Bickerstaff
    And probably the technology will have caught up to your needs by then. So, Corey, do you want to talk a little bit about effectively?


    Corey Layne Crouch
    Yes, absolutely. You're making me think of one of the trainings we've done in the past couple of months, when I was working with a group of district leaders. What the communications department was trying to do with their prompting was essentially rewrite a marketing strategy for a specific campaign. But it was a test case, and they had already come up with most of the campaign. So they said to me, well, this is just taking more time, because we already did it, and I'm having to learn how to prompt the AI to get what I want. Why would I do that if I've already done all of this thinking? And I said, I have the same question for you.


    Corey Layne Crouch
    So that's an example of thinking about effective use: knowing when to use the tools. Part of identifying the best use cases is thinking about your expertise and what you've already done. What I did with them was say, okay, you've already done all of this work; what are the things you want to improve upon or expand upon, or where you feel you could really benefit from a different perspective or point of view? So I was giving them other ways to think about building on their expertise and the work they had already done. That is part of effective use: knowing when to use it and knowing what you need as an individual. Should we go on to ethically?


    Amanda Bickerstaff
    Absolutely. And so ethics, I think, is where we all start.


    Corey Layne Crouch
    Right.


    Amanda Bickerstaff
    Which is funny, because we end with it even though we believe it's everything; it could actually be part of all of this. But to talk about it specifically: first of all, there's an awareness that this goes way beyond just academic integrity. We absolutely need to teach students how to use this in an academically honest and ethical way; that is undeniable. But they also need to understand what they're using it for and how, and what the impact could be on others. And so the first thing we talk about all the time in ethical use is just being transparent. Ask permission, share what you've done, even share the prompts.


    Amanda Bickerstaff
    Get to a place where students, especially at the appropriate ages, feel confident asking you if they can use AI without feeling like you're going to be mad at them or think they're cheating or lazy. That doesn't mean we say yes; it just means the conversation is open, so students aren't hiding their use or avoiding it completely, especially as they think about moving into college and career. The next piece is the idea of a do-no-harm mindset. And I want to bring up deepfakes here, because a deepfake, where we use technology to take a human and put them in another situation, have them say something they didn't say, et cetera, is something that's been around since before this recent wave of technology.


    Amanda Bickerstaff
    But it's becoming more and more sophisticated, to the point where deepfakes are almost indistinguishable from the real person. Some of them are pretty funny; the Pope wearing the puffy coat last year was kind of funny. But students might think it's funny to create a fun image of their friend or their teacher wearing a t-shirt, and that seems okay. Without permission from that person, though, that picture of the t-shirt on the teacher could be shared with a parent who thinks, wait a second, that's not very professional to be wearing at school. And all of a sudden, something that seemed fun and even affectionate can become damaging.


    Amanda Bickerstaff
    That's on the lower end of what deepfakes can do. On the other end, we have seen so many instances of young girls and young boys having their images turned into deepfake nudes, and this is being done to underage kids. We have the example in Westfield, New Jersey; there was an example in Spain; there was an example in South Korea. More recently, we were working with a set of schools in Latin America, and one head of school's daughter was part of a group chat that went on for two months, everybody, two full months, where deepfake nudes and deepfakes of young girls were being shared, and no one felt like they could talk about it. Finally, when the head of school, the father, saw it, they stopped it and had conversations.


    Amanda Bickerstaff
    But what was crazy about it is that his daughter was one of the kids actually impacted. When he asked her, why didn't you say anything? she said, it's not a big deal; we didn't want to make a big deal of it. And yet this was something pretty damaging that went on for months without anyone doing anything about it. So that do-no-harm piece is going to be really important. And the last piece is a critical evaluation of outputs: checking that they are not biased and that they don't replicate misinformation or disinformation. That's going to be very important as we go forward.


    Corey Layne Crouch
    So.


    Amanda Bickerstaff
    Yeah, go ahead.


    Corey Layne Crouch
    I also want to name, Ralph, what you just surfaced in the chat as well: part of the ethical conversation, and part of understanding what generative AI is, is understanding the environmental impacts of using it, what that may mean for how we use it, what we expect of the organizations and companies building the technology, and how we make sure that we're getting the benefits of the technology and growing as a society while still doing so in a sustainable way, similar to a lot of other advancements that we're living with.


    Amanda Bickerstaff
    Mandy, can you drop in the climate guide? We actually have a climate guide for classroom discussion on the environmental impact, so we can share that. So this is our SEE framework, and we will be sharing more about it with you all. We're actually building a set of resources around this; we're just getting started, so give us your feedback and look out for more. But now we're going to move on: we know that the conversation tends to center on cheating, so how do we go beyond that, Corey?


    Corey Layne Crouch
    Oh, yes, absolutely. You'll have to humor my optimistic view of the world and optimism for the future, but when I think about setting students up with foundational AI literacy skills, the ethical piece gives me a lot more hope, and honestly calms some fears I might have about what the future could look like. Because when we take the conversation beyond cheating and really talk about ethical use and safe use and effective use, what we're doing is setting our future leaders up with the skill set and the mindsets to make sure we see the positive benefits of the technology in the future, rather than just the risks or the fears about what might happen.


    Corey Layne Crouch
    But how do we get beyond cheating? Why does it make sense to get beyond this academic integrity conversation? Well, especially if you're a current or former classroom teacher, maybe you're grappling with this right now. Making the conversation about AI only about when it was okay to use versus not okay, whether you used it inappropriately, et cetera, does two things. One, it puts a strain on the relationship between teachers and students. And it also automatically makes the use of AI, as a high school student described it to me, "forbidden fruit." Literally, that was the language an 11th grader used when she was talking to me about it last year.


    Corey Layne Crouch
    When we and our students are thinking about it like that, there isn't open conversation, and there isn't clarity for students about how they could be using it to support their own learning and deepen their subject matter knowledge. It's exhausting for teachers to feel like they constantly have to wonder whether students are submitting AI-generated work, and it makes the whole thing negative; it doesn't open up the possibility of really exploring and learning together in classrooms. So that's why we encourage, as part of building AI literacy, getting beyond a conversation that is just about cheating.


    Amanda Bickerstaff
    Yeah. And can I just say, on this idea of reframing: so many kids we've talked to have said, the first time I used AI, I actually used it unethically without realizing it. I didn't know what the capabilities were. I went in and said, okay, help me do this, and then it did it for me. And afterward I thought, oh, wait a second, that's actually not okay; I shouldn't have had it do that for me. They didn't intentionally try to offload all the cognitive work.


    Amanda Bickerstaff
    But the technology is so capable, we have the graphing calculator for the humanities now, that magic button, and it is so different and so far beyond in terms of writing capability, that the first misuse can genuinely be unintentional, especially as these tools become significantly more embedded. Like, "but you told me I could use Grammarly. I have permission to use Grammarly, and now GrammarlyGO is generative. How is that bad?" That's the idea here. That kind of transparency and reframing of the conversation is going to be so important, because we can't expect students to know where the line is unless we actually talk about that line and provide time to understand both sides of it.


    Corey Layne Crouch
    And I also think about the students, and I would have been one of them, especially in high school: we have some students who are using it and maybe accidentally using it inappropriately without knowing. And then, if the conversation is just that AI is used for cheating, you also have students who won't touch it at all, because they have a lot of fear about getting in trouble, or they're more of the Type A achiever students who want to make sure they're not doing anything to jeopardize their academic career. And so it's actually a disservice for those students, too, because, per how we kicked off this webinar, we know that students need AI literacy and the skill set to use the technology effectively, safely, and ethically.


    Corey Layne Crouch
    So if there are some students who say, look, I just don't want to get in trouble, so I'm not touching it for my entire high school career, that's a disservice to them as well. Okay, shall we talk a little bit more about some ways to integrate this?


    Amanda Bickerstaff
    Yeah, let's do it. Now we're getting into the application side; you know we love to do tactical, practical application stuff. And here's our cool teacher talking about AI, definitely AI-generated. One of the things we really want to address is that when people hear us talking about AI literacy, they picture ten-year-olds on laptops typing away into ChatGPT. That is not what we are suggesting, everybody. The same way we're asking students to learn to be balanced in their own AI approaches, we're suggesting that for you as well. These tools are very nascent. They are early, they are not reliable, and they can be harmful to young people.


    Amanda Bickerstaff
    What we want to think about, though, is that there are two options here for how we learn about AI and then learn with AI. One option, with our younger students and even at the beginning of older students' journeys, is modeling. Modeling is so important. We actually do a think-aloud with adults, which is so funny. I'm up there saying, here, superintendents, let's do a think-aloud, and they're like, she's crazy. But then they watch what I do, and it starts to build that practical knowledge of what's possible, because what we're doing is expanding their sense of what's possible with the technology. So modeling is a great way to start showing best practices, and even to start identifying when and when not to trust the tools.


    Amanda Bickerstaff
    I'll give you two examples: one from me, and I've asked Corey to give another. Mine is about a fifth-grade teacher in Kentucky; I was on a webinar with him. He was the teacher of the year, and he was having a lot of trouble with his fifth graders. They had a sentence of the week, they were not very engaged, and they kept turning in these not-very-good sentences. If you're a fifth-grade teacher, I think you can feel the pain from here. And so he said, okay, I'm going to really start thinking about how to build AI literacy and start adopting these tools.


    Amanda Bickerstaff
    So what he did is create a "catch the bot" game. Every week he would go to ChatGPT with the same sentence frame and say, write a sentence. Then the students had to submit their sentences and try to find the bot, and if they could find the bot, they lost, because it was too easy. The bot never spelled things wrong, never made grammatical mistakes, and always met the needs of the prompt. So over six weeks, the students started writing perfect sentences because they didn't want to get caught. He modeled how he did it, showed it, and talked about how it can be done.


    Amanda Bickerstaff
    He used it as a lever to start building that literacy skill, showing how the tool can be used and what it does and doesn't do. And the kids were able to start doing that. What's great is that it was just a sentence, pretty typical fifth-grade sentences, so it was a really cool little way to model. And then, Corey, I'd love for you to talk about how we always ask our trainees to try out "What are you, ChatGPT?" and how we've started adding a second prompt. Do you want to talk a little bit about that?


    Corey Layne Crouch
    Oh, yes, sure. So in our trainings, we start with "What are you, ChatGPT?" And then I always like to ask, "What is your knowledge cutoff date?" Part of understanding the limitations is understanding that there are knowledge cutoffs for the data the models are trained on. I encourage you all to try it out yourselves, because I guarantee, even if there are only five people in the room, ChatGPT is going to give different answers. So it highlights for the audience very quickly, one, that ChatGPT does not give consistent answers, and two, that it doesn't even know a very well-known fact about itself and isn't fact-checking, so it hallucinates. Depending on the size of the group, some people will get the correct answer, October 2023, but others will get April 2023.


    Corey Layne Crouch
    Some get September 2021, et cetera. And so that is a quick way we model hallucinations and inconsistent responses, even on a very well-known fact, for the adults we're working with.


    Amanda Bickerstaff
    Absolutely.


    Corey Layne Crouch
    Yeah.


    Amanda Bickerstaff
    It's one of those pieces where we really press everyone not to stop at the first prompt. These exercises establish the need to really critically evaluate outputs. But before we go on: Susan has written in the chat about moving from deficit mindsets around academic integrity to more asset-focused ones, and we totally agree. This is why we love the modeling aspect: we can model how we use it as adults, what we don't use it for, and the mistakes we've made, because that also helps students understand, okay, that didn't work, but this is the way it did work.


    Amanda Bickerstaff
    So, like, I was going on a trip and I used AI to help me plan it. There's this amazing opportunity to start thinking, oh, wait a second, I could plan the best birthday party for my parents, or I could connect with my grandmother, who speaks a different language than I do. These are the places we can go with this that become more than just academics; they become part of our lives and part of our ability to communicate, collaborate, and build, which I think is going to be an awesome opportunity going forward.


    Corey Layne Crouch
    Well, and we like to highlight, Amanda, you do a great job of reminding me that people need to know this, age-appropriate usage for these tools. While we have some of the popular foundation models here, this also applies, we believe, to the application layers, as well as being aware of what we need to model and how to set students up for success in using them. There is a difference between what is legally allowed and what we believe is age- and developmentally appropriate. And so we recommend modeling rather than jumping directly to student use.


    Corey Layne Crouch
    One, because it builds AI literacy before students are independently using the tools, but two, because these tools are actually not yet appropriate for those under 13 to use, and even for ages 13 to 17, use should be with parental consent. So this is important to know. Sometimes we hear about elementary schools, where students are under the age of 13, with students using ChatGPT solo, or diving into some of the other tools solo. And we think it is very important to know that it's not appropriate for that age level to be using these tools by themselves. That's why the modeling is so important.


    Amanda Bickerstaff
    Absolutely. And I just want to put a point on what Corey means by the application layer: applications like MagicSchool or Diffit or SchoolAI are essentially built on top of the models that run ChatGPT and Claude. Also, we don't have it up here, but there's Llama, which is Meta AI, and then there's also Google Gemini.


    Corey Layne Crouch
    I was going to say, yeah, I realized we were missing Gemini, but yes, also Gemini.


    Amanda Bickerstaff
    Which at this stage is still 18 and over. But I think this is where it gets really interesting, because people generally don't realize that the models underneath the tools you might be using with students have their own age restrictions. So terms of service are really important. It's something we want to establish, because when we start sharing, say, Chicago's guidance that has this chart in it, this is a thing people push back on. I guess it just feels like ChatGPT is so ubiquitous, everyone's heard of it, that it must be available to everyone. But the systems themselves indicate that they should not be used by those under 13, and even for the teenage years, it should be with parental permission. So we're going to roll on.


    Amanda Bickerstaff
    We have just a little bit of time left.


    Corey Layne Crouch
    Oh, gosh. Yeah, we're running low. One thing I'll say, too; I know this comes up in conversations with our team, and sometimes when we're talking to educators and leaders, there's a little bit of a "well, we know they're using it at home," or "we know they're using it already." Sure, there's only so much locus of control, and maybe some of your elementary or under-13 students are using tools at home. That doesn't mean we just throw in the towel and let them have at it. We have control over what's happening in our classrooms and in our school buildings.


    Corey Layne Crouch
    And that, in my perspective, is even more of a reason why building this foundation of safe, ethical, and effective use is important: it gives students more of a tool set to make decisions about whether they should be engaging at home, and if they are choosing to, or their parents are letting them, how to do it in a way that is safe.


    Amanda Bickerstaff
    Absolutely.


    Corey Layne Crouch
    But what about the 13 plus? Go ahead, Amanda.


    Amanda Bickerstaff
    So, yeah. And I will say that these critical evaluation pieces can actually go up or down in age. You can have students do the analysis after the prompting has already been done, so they're not actively prompting, but they can still look at the evaluative pieces. We're going to drop a critical evaluation activity into the chat, and we also have a prompt we can drop in. Here are some ways you can have students start to evaluate AI outputs: comparing a human-generated output and an AI-generated output, or comparing multiple AI-generated outputs, and looking for, hey, what makes this uniquely human, if anything? What are the common themes or applications? Does the quality of the prompt matter?


    Amanda Bickerstaff
    Can we actually change the voice into ours? Is that possible? Can we identify the areas where misinformation or hallucinations are likely to happen? Can we double-check the work and know what we should and shouldn't be asking? The idea is to start looking into these tools and evaluating them. Corey did such a good job earlier of putting a pin in that human-centered component: AI can help you get a pretty average draft of something, but it's sometimes harder to make that draft your own, in your own voice, than it is to just write it yourself, especially if you're not very good at prompting yet.


    Amanda Bickerstaff
    Corey was an English teacher; I was a science teacher. But I remember that in our 9th grade in New York City, the first thing kids did in English was memoirs, and it was so hard to teach kids about voice and have them crystallize what made them truly themselves. Giving them the most generic output from an AI and asking them to figure out how to make it their own not only establishes that, hey, if you're just going to submit something that's AI-generated, you're going to get caught, it's going to be easy to see, but also: how do you make this original? How do you evaluate it? How do you think critically about what your voice is and how to make it your own?


    Amanda Bickerstaff
    Because more and more, our world is going to shift from a place in which we as humans generate everything written to one where we evaluate, edit, analyze, make things our own, brainstorm, and start things off. That is going to be a really unique place, and we're going to dig a lot more into it. Think about editing: becoming an editor of ChatGPT output will be a job. So it's something that I think is really an awesome opportunity. And you can use this with younger and older students; you can have them do the prompting and then the evaluation. But you can also do it in your staff rooms.


    Amanda Bickerstaff
    If you're a nonprofit professional or an edtech professional, this is a wonderful way to start building AI literacy across your entire organization.


    Corey Layne Crouch
    And what it continuously reminds us is that you have to have a certain level of expertise. You have to have that subject area expertise, or at least a sense of what output you're looking for and what good looks like, as part of effectively using the tools. And again, I don't want to undermine any of the fears or concerns we might have about integrating AI into education, but one of them is, oh, it's going to undermine student mastery of the content at hand. Well, as we all use it more, we continuously see that you actually have to have very strong subject area mastery in order to use AI effectively.


    Corey Layne Crouch
    And we like critical analyses because students practice that subject area mastery on the topic at hand, and it also continuously stamps the ideas around AI literacy: that outputs can be biased, that you have to look for misinformation, and that you have to edit the outputs in order to use them effectively.


    Amanda Bickerstaff
    Absolutely. I know we had a question in the audience from Shannon: we just dropped our curriculum in the chat. It covers grades 7-9 and 10-12, and we are building out elementary versions; that's on Mandy's list. The last thing we're going to wrap up with is one of the first things we ever wrote. This is actually the first version of our "Should I Use AI?" guide; I think it's over a year old. We wrote it because we were seeing the question of how we get teachers to wrap their heads around having students use these tools effectively while also teaching them what responsible use means.


    Amanda Bickerstaff
    We just talked about critical analysis, but this is how we can think about teaching students early and often what ethical use could look like, and also modeling it. I know it's vintage; along with our rubric prompt and the ten prompts we started with, this is probably one of the earliest things we ever did. Our intro hasn't changed very much either. But we've seen this be so effective. First of all, we love a decision tree as educators; let's be honest, we love something that takes people through mindsets and steps.


    Amanda Bickerstaff
    The idea is that students can talk about brainstorming, studying, explaining things, or even refining their content. And then, if it's a yes, these are the steps to take: ask permission, choose the right tool, track your progress, be transparent, and think about verifying your outputs. If you're doing research, hallucinations are still rampant, even in tools like Perplexity, generative search tools connected to the Internet. They still make mistakes; they'll still make up a statistic or a date. So are you using the right tools, and are you also using secondary sources, which are such a huge part of media literacy? And the final piece is that if you're just having it do your work, that's not what we want.


    Amanda Bickerstaff
    That will never be acceptable, and so we have to think about how to reframe it. Did you need ChatGPT's help today, or for an assignment? If so, let's talk about what it was. I want you to share what you did and cite it, and let's look at whether it actually helped you. If not, let's talk about how we could do better next time. That way we're reinforcing those AI literacy opportunities. We're going to wrap up because we're coming up on time, so let's go to the last slide. Oops, not that one. Yeah, this one. The reason we spend so much time thinking and talking about student AI literacy, and AI literacy in general, is that these changes will impact young people the most. It's just how it is.


    Amanda Bickerstaff
    We are all part of the AI generation; all of our lives will be impacted by this transformational technology. But if you are a second grader, a five-year-old, a two-year-old, you're going into a world that in ten years will be radically transformed. So building AI literacy into the ways we support our students is not just necessary but imperative. It's something we have to do consistently and intentionally. And this is why things like National AI Literacy Day, which will be on March 28th and which we are supporting again this year, as well as some work we're going to be doing with more of a global approach, are going to be really important pieces.


    Amanda Bickerstaff
    But we hope today has been meaningful and supportive. We've got more resources we'll be able to share, and as always, you can hang out with us on LinkedIn, on our website, and at our webinars. We just really appreciate you all for being here, and we hope this has spurred you on, because you're going to be supporting these students. We hope this has been helpful, and we hope you all have a great morning, evening, or middle of the night, wherever you are. We really appreciate your time today.

    Corey Layne Crouch
    Thank you all for being with us.

    Amanda Bickerstaff
    Thanks, everybody.

Want to partner with AI for Education at your school or district? LEARN HOW