Navigating Ethics in Education: From Deepfakes to Academic Integrity

As GenAI transforms education, educators face difficult ethical questions about how these technologies impact students. In this webinar, we explored these critical issues and how the SEE approach to AI literacy (using generative AI Safely, Ethically, and Effectively in educational settings) can provide a framework for supporting these efforts.

Key topics included:

  • Deepfakes and Media Literacy: Explore the risks of AI-generated content that looks and sounds real, including identity theft, cyberbullying, and misinformation, and learn strategies to help students navigate this landscape.

  • Environmental Impact of AI: Learn more about GenAI’s environmental impact, from energy consumption and carbon emissions to the potential for resource optimization and research acceleration.

  • Rethinking Academic Integrity: Shift from detection-focused approaches to transparency, with clear AI use guidelines that encourage meaningful integration while maintaining originality and verification.

Ethical AI use is a journey requiring ongoing commitment, with strong AI literacy skills forming the foundation for balancing concerns with opportunities to transform learning, accessibility, and creativity through thoughtful, ethical implementation.

Watch now to develop a more nuanced understanding of the critical issues along with strategies for ethical AI use that empower both educators and students to navigate this complex technological landscape with confidence and integrity.

AI Summary Notes:

This webinar focused on the vital intersection of artificial intelligence and educational ethics. The session began with an introduction to AI literacy and the importance of building community within educational contexts. Corey Layne Crouch highlighted the evolving landscape of academic integrity, noting the stable cheating rates juxtaposed with the rise of AI-assisted methods, and raised concerns about potential biases in AI detection tools. The webinar emphasized the need for proactive strategies, introducing a guidance flowchart to facilitate appropriate AI use among students. An engaging demonstration of deepfake technology illustrated the challenge of distinguishing AI-generated content, while discussions on voice AI underscored ethical implications related to privacy and content verification. Attendees also explored the environmental impact of AI and the broader ethical considerations it entails. Action items were outlined, including the implementation of the flowchart for guiding AI use and plans for upcoming webinars on AI literacy. Overall, the webinar served as a comprehensive resource for educators navigating the complexities of AI in academic settings.

🧠 Introduction to AI Ethics in Education (00:02 - 04:42)

  • Webinar introduced by Amanda, CEO of AI for Education.

  • Focus on AI literacy and community building across educational sectors.

  • Encouragement for participant engagement through chat and Q&A.

🎓 Academic Integrity and AI (04:42 - 19:49)

  • Corey Layne Crouch discussed AI's impact on academic integrity.

  • Cheating rates stable, but methods have shifted to AI usage.

  • Concerns about accessibility and bias in AI detection tools.

📝 Shifting from Reactive to Proactive Strategies (19:49 - 28:50)

  • Proactive strategies recommended over reactive responses.

  • Encouragement to reframe discussions about academic integrity.

  • Introduction of a guidance flowchart for appropriate AI use.

🎭 Deepfakes: Understanding the Technology (28:50 - 33:29)

  • Interactive demonstration of deepfake technology.

  • Participants struggled to identify AI-generated images.

  • Importance of recognizing the convincing nature of deepfakes.

🗣️ Voice AI and Ethical Implications (33:29 - 38:51)

  • Demonstration of a human-like voice AI system.

  • Discussion of privacy concerns and ethical implications.

  • Highlighting the need for content verification.

🌍 Environmental Impact and Broader Ethical Concerns (38:51 - 52:25)

  • Discussion on the environmental costs of AI development.

  • Strategies for addressing deepfakes and promoting media literacy.

  • Takeaways on ethical AI use and the importance of AI literacy.

  • Amanda Bickerstaff

    Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.

    Corey Layne Crouch

    Corey is the Chief Program Officer and a former high school English teacher, school principal, and edtech executive. She has over 20 years of experience leading classrooms, schools, and district teams to transformative change focused on equity and access for all students. As a founding public charter school leader, she ensured that 100% of seniors were accepted to a four-year college. Her focus now lies in assessing the broader K-16 edtech ecosystem, uniting stakeholders at all levels to build a more equitable and abundant future for all. She holds an MBA from Rice University and a BA from Rowan University.

  • 00:01
    Amanda Bickerstaff
    Welcome, everyone. I'm Amanda, the CEO and co-founder of AI for Education. I'm joined today by our Chief Program Officer, Corey Layne Crouch, to talk about something really important, which is the idea of navigating AI ethics in education. And what we're going to look at actually goes even further than education. Although we will be focusing on academic integrity, we will also be talking about deepfakes, we'll be talking about climate impact, and a little bit about the larger impact that goes even further. And so really excited to have you here today. Love seeing Jennifer here, one of our superintendents we work with, and others that we see. So if we go to the next slide: what we love the most about these sessions is the community of practice that we have built together.


    00:47

    Amanda Bickerstaff
    And so we feel so privileged to be able to do this work with you all, whether you're in K12 or higher education, you're a teacher, a leader, or even potentially a parent or student. So really appreciate you all being here. And what we want to make sure is that you do one of two things. Number one is if you want to communicate with each other like you're doing right now, please continue to use the chat function. Just make sure you're always replying to everyone, because sometimes it only goes to us and we want to make sure that your colleagues get to see it. If you have a question for Corey and me, especially when we have close to 300 people here, please drop that into the Q&A and I'll be watching that the whole way through, like we usually do.


    01:21

    Amanda Bickerstaff
    But just know that. And then if you do have resources, ideas, thoughts, please feel free to always add those into the chat. Dan and our team will also be doing that as well. So you're going to get lots of resources from us. But we're just really excited to have you all here today. And the ultimate goal for us, if we want to go to the next slide, is to actually start talking about the deeper world of AI and ethics in education, beyond just academic integrity. And if you have been with us before or you've seen the work we do online, you know that we like a metaphor, but we also really like to move beyond this rhetoric that AI ethics is only about academic integrity. And I don't know if you have heard "AI is for cheating." I know Corey has.


    02:03

    Amanda Bickerstaff
    I know I have. You may have even said it once or twice. But we have this really strong rhetoric that we're just talking about kids using AI to cheat. And while we do know that there are academic integrity impacts, we want to make sure that when we talk about AI ethics and education, we really go beyond that rhetoric, below that waterline, and have conversations that go deeper. So you'll see here we have some positives too, because with every potential negative impact, there's an opportunity on the other side. But what we would suggest, when you think about academic integrity, is that we actually believe cognitive offload, kids giving away all their thinking, is probably a bigger issue than just academic integrity. So we're going to talk about that in the first section.


    02:48

    Amanda Bickerstaff
    We're also going to talk about deepfakes. You know, I think we've known about deepfakes for the last decade or so, but they have gotten sophisticated in a way that will really surprise you. You're actually going to have some fun with deepfakes, in the sense that we're going to try them out together. But also we're going to talk about things that are not even on here, like climate impact. We will talk a little bit at the end around other issues, like artificial intimacy and bias. But for today, we really want to focus on those three. So we're going to talk about deepfakes, academic integrity, and climate. And if we go to the next slide, what we're going to see is that, for us, this is our AI literacy definition.


    03:23

    Amanda Bickerstaff
    And I'm sure if you're doing this work, you know there are lots and lots of definitions. But this is ours: AI literacy consists of the knowledge, skills, and mindsets that enable individuals. And you'll see that we don't just say students. So when you think about this ethical conversation we're having, it has to go beyond our young people. It needs to go into the entire community. What we want people to be able to do, if they have these skills, mindsets, and knowledge, is start to know how to use AI in three ways: safely, ethically, and effectively. And that is how we talk about AI literacy, as you can see. And if we go to the next slide, what we're going to see is that there are three buckets.


    04:00

    Amanda Bickerstaff
    And we won't really be focusing on effectively as much today, although there are lots of opportunities to weave that in. But we're going to be focusing on those first two things, safe and ethical. And what you'll notice is they are different, right? Because ethical could be not creating a deepfake, but then safe could be not putting your information or imagery out there, or being aware not to share it, those types of pieces. So what we really want you to think about is how these two fit together: our ability to start to understand the larger impact, but also to start thinking about our role as those that can share, and also evaluate, what we see through generative AI in a way that's both safe and ethical.


    04:42

    Amanda Bickerstaff
    So what I'm going to do is actually shift it to Corey, who I'm so glad to have here. She'll introduce herself first, and then she's going to take us through academic integrity, because we know we had to start here, everybody; we do know that's what a lot of you are thinking about. But we are going to broaden the conversation as we go. So Corey, take it away.


    04:59

    Corey Layne Crouch
    Thanks, Amanda. It's exciting to be back with you all. And for those that I haven't met before, my name is Corey, and a fun fact about my background is I started off my career as a high school English teacher. I graded a lot of essays, assigned a lot of essays, and read a lot of not-so-good student writing, to be honest, and certainly plenty of good student writing. And so when we have the opportunity to be out in the field talking with educators and leaders like yourselves about this real question of academic integrity, especially with writing, it hits home, because I know that the way I assessed in my classroom was so much through seeing students put their thinking in writing. But here is what we have found, well, what the studies have found so far, about AI and cheating.


    06:07

    Corey Layne Crouch
    As Amanda said, we know that this question of academic integrity is a big one, and I see some of you bringing it up in the chat already. The study that researchers at Stanford did back in 2023, after ChatGPT was released, found that the rate of students self-reporting cheating didn't increase. Overall, it was between 60 and 70%, depending upon the school, of high school students specifically self-reporting engaging in cheating behavior, which was not different overall from 2022. But there is some interesting data within the study about how they're cheating differently. Amanda, do you want to say a little bit about some of the specifics that we found in the nuanced data?


    07:09

    Amanda Bickerstaff
    Well, if you know us, you know we love data. And so I'm actually going to drop in the updated study that Victor Lee and his team at Stanford have done. One of the most interesting things is that while students were not cheating more, as Corey said, there was a pretty big increase between 2022 and 2023 in how many students were actually using generative AI or AI systems to fully complete an assignment, assessment, and all of those things. Because this is what's really interesting. What you'll notice is that in private schools, there was no real change; it kind of stuck around 3%. But it doubled in public high schools, and it almost tripled in charter schools.


    07:51

    Amanda Bickerstaff
    And so what we do see is that while students use a range of methods, whether it's going to Corey, my best friend, or having a parent help, or using an online tool or even a tutor, students are now shifting into using generative AI, or AI in general. So that could be ChatGPT, it could be Snapchat AI. Everybody, Snapchat AI can do your homework, and it has 150 million users, many of them young people. But it also includes specialized tools like QuillBot or other paraphrasing tools. But I do think this is really important for us to recognize and to talk to students about, because this number is most likely going to continue to increase within the subset of students that already show cheating behaviors.


    08:38

    Amanda Bickerstaff
    And I do think, too, to some of the points in the chat, that we are going to talk a lot about redefinition. If you join some of our assessment work, we talk about redefining not just mastery, but what academic integrity is going to be as well. So back to you, Corey.


    08:52

    Corey Layne Crouch
    Yeah, absolutely. I see Tracy and Timothy saying, okay, what do we mean by cheating anyway? And on this data point, I'll just stamp what Amanda said. These are students self-reporting that they explicitly used an AI tool when they knew they weren't supposed to. So that's as clear as we think it can get about students knowing that they're engaging in a behavior that's not in alignment with their academic integrity policies, that it's not their own thinking. Now, we know in education we are forever seeking solutions that will help us solve some of these more complex challenges. And there's no silver bullet here; I know that seasoned educators also know there rarely is. And some of the reactive responses, we believe, are not the full answers.


    09:56

    Corey Layne Crouch
    And specifically these two things. First, a full return to pencil and paper, and you can share in the chat if you have tried this, because we know that some educators have decided, okay, this just means students have to have a blank page and a pen, and they're going to write their essay in the 45 minutes that I have them. Or, similarly, maybe they're not handwriting, but they are opening up a blank document in class in a lockdown browser, and they have to write the essay in 45 minutes.


    10:38

    Corey Layne Crouch
    This approach, as the slide says, actually presents some obstacles for students with learning differences, and so may become what we call an undesirable difficulty: a barrier to them presenting the real mastery that they have and the real thinking they're able to do on that topic. Now, that's not to say don't ever do this. We believe it can be a tool in your toolbox, but we don't think it should be the only solution, because not only is it not accessible to all students, it's also not really aligned to how students are going to have to represent their thinking and use the tools available to them as they move toward career readiness or readiness for postsecondary.


    11:37

    Corey Layne Crouch
    So while it might be a tool in the toolbox, it's not the full answer to solving this idea of cheating with AI. Also, as Amanda often says, if you follow us and you spend any time with us in our work, this is not a new point to you: we don't believe AI detectors are the right answer either, really because they don't work. Maybe it's a tool in your toolbox. And when I say this, I'm always thinking of my fellow ELA teachers, though I know we have other content areas here as well, with potentially hundreds of pieces of student writing where all of a sudden you don't know if students had unauthorized support or not. But these detectors are inaccurate, and they can also be biased against non-native English speakers.


    12:36

    Corey Layne Crouch
    Amanda and I can both tell you, almost always when we are working with teachers, if there are more than 10 teachers in the room, somebody will raise their hand and share with us, and I'm sure some of you could share in the chat, about a time one of their students, or even their own children, was accused of cheating with AI because of an AI detector, and the student didn't cheat. And then students themselves will tell us quite often about how they trick the AI detectors. So we hear firsthand on both sides about the false positives and the false negatives. Amanda, I saw you unmuted.


    13:27

    Amanda Bickerstaff
    Yeah, I was going to say, one of the interesting things when you talk about bias, and Megan just said it in the chat, is that it's not just non-native speakers, but neurodivergent people as well, and those that love purple prose, an em dash, a semicolon, a "however." So I tend to read as AI-generated, everyone, when I do my writing. But if we can go to the next slide, I just want to talk a little bit about how this works. I think people don't recognize that the way generative AI works is as a random token generator, right? It's a random word generator that leads into these papers.


    14:02

    Amanda Bickerstaff
    And so if you go to ChatGPT right now, open up a browser, and say, write me an essay like a ninth grader, including common misspellings like "there" and "their," and you then take that same paper, run it through ChatGPT again, and ask, did you write this? ChatGPT does not know, because ChatGPT doesn't have a traditional database, which is how previous plagiarism detectors have worked. Wikipedia is on the web; you can crawl it and understand it, and the plagiarism detectors update all the time, it happens so quickly. But I do want to say this. Mandy and her team, it took her only five minutes to take a fully AI-generated piece. First of all, what you notice is it's just a paragraph.


    14:47

    Amanda Bickerstaff
    And the detector wasn't even sure it was fully AI-generated. She spent about two or three minutes just replacing three or four words and adding a misspelling, and it immediately scored zero percent. And what we would suggest, if you go to the next slide, is to try this with your teachers. Actually, go back. So, yeah, we removed it. But if you want to try this out, we did this live in North Carolina, I don't know if anybody from North Carolina is here, at NCTIES, where we gave everybody 10 minutes to fool an AI detector. And you would not believe how many of us were able to fool an AI detector in less than 10 minutes, and how inconsistent they were. And you know what?


    15:24

    Amanda Bickerstaff
    It was fascinating to get the teachers, A, to be in the mindset of students, but B, to really start to understand how unreliable these tools are. Because you hear "Turnitin, plagiarism detector," and you think it works, and you think it's the same thing for AI, and it's not, because the technology underlying it is different, and that makes it impossible to actually know fully whether generative AI generated a text. So back to you, Corey.
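    Amanda's point about generation working token by token can be sketched in a few lines of Python. Everything here (the toy vocabulary and probabilities) is invented for illustration; real models sample over enormous vocabularies, but the principle is the same: output is drawn step by step from a probability distribution, so there is no stored source text for a detector to look up afterwards.

    ```python
    import random

    # Toy next-word probability table (illustrative only; real models
    # learn these probabilities over a huge vocabulary).
    NEXT_WORD = {
        "the": [("essay", 0.5), ("student", 0.5)],
        "essay": [("argues", 0.6), ("shows", 0.4)],
        "student": [("writes", 0.7), ("argues", 0.3)],
        "argues": [("clearly", 1.0)],
        "shows": [("clearly", 1.0)],
        "writes": [("clearly", 1.0)],
    }

    def generate(start: str, steps: int, seed: int) -> str:
        """Sample one word at a time from the probability table."""
        rng = random.Random(seed)
        words = [start]
        for _ in range(steps):
            options = NEXT_WORD.get(words[-1])
            if not options:
                break  # no continuation listed for this word
            choices, weights = zip(*options)
            words.append(rng.choices(choices, weights=weights)[0])
        return " ".join(words)

    # Different seeds can take different paths through the table,
    # which is why there is no single "original" text to detect.
    print(generate("the", steps=3, seed=1))
    print(generate("the", steps=3, seed=2))
    ```

    Running this twice with different seeds can yield different sentences from the same "model," which is the core reason a database-lookup detector has nothing to match against.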


    15:50

    Corey Layne Crouch
    Yeah, and I want to stamp that plagiarism detectors still work, as you're saying, because the technology is different. So it's a little bit of a bone, in some ways, that we can throw to educators that have used plagiarism detectors for some time before ChatGPT. They will still work if a student is pulling text directly from the Internet, because that's searching and finding a match. And as we know, generative AI does not pull a direct match from anything, so there's no direct match for it to search for. And I see this already in some of the chat as well. Kim, you specifically mentioned that at your institution it sounds like you have shifted the conversation and reframed it already, from calling it an accusation to a conversation around academic integrity.
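    Corey's distinction can be made concrete with a small sketch. A traditional plagiarism checker looks for verbatim n-gram overlap between a submission and known sources; commercial products like Turnitin are far more sophisticated, but the matching idea is the same. The texts, function names, and n-gram size below are made up for illustration.

    ```python
    import re

    def ngrams(text: str, n: int = 5) -> set:
        """All n-word sequences in a text, lowercased, punctuation stripped."""
        words = re.findall(r"[a-z']+", text.lower())
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(submission: str, source: str, n: int = 5) -> float:
        """Fraction of the submission's n-grams found verbatim in the source."""
        sub = ngrams(submission, n)
        if not sub:
            return 0.0
        return len(sub & ngrams(source, n)) / len(sub)

    source = "the industrial revolution transformed how people lived and worked in cities"
    copied = "The Industrial Revolution transformed how people lived and worked in cities."
    fresh = "urban life changed dramatically as factories reshaped daily work"

    print(overlap_score(copied, source))  # 1.0: every 5-gram has a verbatim match
    print(overlap_score(fresh, source))   # 0.0: nothing verbatim to match
    ```

    Copied text lights up because its word sequences exist verbatim in a crawlable source; freshly generated text scores zero here, which is exactly why the direct-match approach works for Internet plagiarism but not for AI-generated writing.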


    16:57

    Corey Layne Crouch
    And again, while this isn't the one thing that is going to fully solve this question of how we handle cheating with AI, we do encourage you to consider what it looks like to go from reactive to proactive in our field in general. ChatGPT was launched, and it made a lot of sense that we had to be reactive, because students started using it and we didn't get a heads up, if you will, in education that students were going to have access to a magic essay button, a calculator for the humanities. So in the spring of 2023, and really the 2023-24 academic school year, it made sense that as a field we had to be reactive.


    17:52

    Corey Layne Crouch
    But being reactive and accusing, being in this space where trust has broken down between teachers and students, where teachers can't fully trust that work is authentically student thinking and students don't trust that they won't be accused, makes AI feel like a forbidden conversation, a forbidden tool. And it's just exhausting, and not good for the relationships and the type of collaborative, human-centered classroom and school cultures that we want to cultivate with our young people. So we encourage you to think about how you start to shift to these proactive strategies, which some of you, as we're seeing in the chat, are doing already, where you're having a conversation about AI as a tool: what does it look like to use it responsibly, when is it appropriate, when is it not appropriate?


    19:02

    Corey Layne Crouch
    And how can students, alongside their teachers, start to build their metacognition, a skill set around really being aware of what they understand and what they don't understand, and how they can use AI tools to enhance that understanding rather than fully replace the cognitive work that we need them to do?


    19:32

    Amanda Bickerstaff
    If I can just bring some stuff out of the chat, Corey.


    19:35

    Corey Layne Crouch
    Yeah, please. I was gonna say whatever.


    19:37

    Amanda Bickerstaff
    I would say we're going to town. You guys know how much I can talk.


    19:40

    Corey Layne Crouch
    I was like, I can't keep up.


    19:41

    Amanda Bickerstaff
    You know, that's with our 400 best friends on AI and ethics in education. But there are a couple things I want to talk about. No teacher went into teaching to be an AI or cheating detector, and it can be super exhausting. So we are going to have to think about different approaches. But I do think what's really interesting here is that there are opportunities to use this as a lever to create better assessments, as Corey has said, and to focus on what matters, which are a lot of the durable skills: the ability to do novel application, to critically evaluate, to respond in meaningful ways, things that I think we as educators have always wanted to do.


    20:25

    Amanda Bickerstaff
    But the systems of high-stakes testing, if you are in the US or Canada or Australia or the UK, a lot of us have this high-stakes testing that puts a lot of pressure on us in high school for students to do a lot of recall and specific types of answering, which are the kinds of assessments that are falling apart right now. And so I do think there is really something here. Let me give you one strategy. One of the best strategies is, if a student turns something in, instead of going to an AI detector, take time to ask them two questions about their writing. Just two questions. What did you learn from this process?


    21:04

    Amanda Bickerstaff
    And what was the most important thing that you actually said? And if a student cannot do that, it might not mean they used AI to cheat, but what it does mean is they did not get the foundational learning. And so that could be a kid just copying things down or not really thinking about it, or it could be something where they offloaded it. So I just think that's a very simple approach, because if a kid is the really good kid who always turns in a great essay but can't explain what they meant, then that's a way for us to go even deeper with kids that are doing quite well.


    21:37

    Amanda Bickerstaff
    And so I don't think it has to be these huge changes, but I do think these opportunities of leaning into authentic assessment, of reframing the conversation, of looking at durable skills, are really our opportunity here.


    21:52

    Corey Layne Crouch
    Absolutely. Well, I don't know, Amanda. Megan is calling for an assessment revolution in the chat, and it looks like she's getting some momentum. So let's do it. We might have a bigger movement on our hands. But joking aside, we know that assessment design, and the learning experience design in schools, does need a pretty significant evolution. And at the individual teacher level, or even at the school leader level, that feels big. At least it would be for me, as my high school English teacher self or my high school principal self. We get that fully. Building that bridge from our current practice to what future-ready practice looks like with these AI tools, that is a long journey. And so, as Amanda said, think about: what is one thing we can do differently starting tomorrow?


    23:01

    Corey Layne Crouch
    Just asking students about their thinking, just talking to students. You'll continue to hear us say that. And I believe deeply, because I'm a career educator, that our young people want to be successful. They also cheat sometimes, and I know we have to ask ourselves why they are cheating. But ultimately they want to be successful. They want to be set up to have agency in what they're doing next. And so engaging them as partners in this conversation is a way to move together toward more authentic assessment and more transparent conversation about their learning, what tools they're using, and when they're making mistakes in the choices they make. One resource we have to help get you started is this guidance flowchart, which helps students build that cognitive muscle of: should I use AI for this assignment or not?


    24:11

    Corey Layne Crouch
    Is it appropriate? This is not the only guidance we suggest having in place; we suggest having clear guidelines for how you want students to navigate the tools they have available to them. But this can get you started. You can share it with students and do some scenarios to practice where, in an assignment or in their classwork or homework, it might be okay for them to use AI as a tool versus not. And then this helps them gain more clarity and start to be more metacognitive about where they need to engage in productive cognitive struggle themselves and where it might be okay to use AI as a tool.


    25:04

    Amanda Bickerstaff
    Yeah, to what Michael just said about his adult learners: we have a train-the-trainer program on GenAI literacy right now, we're in the second cohort, and one of the things we did is make sure that if you missed a live session, you complete a reflection. So what we did to make it kind of AI-resistant is we asked them to do a short one-minute video, because if you want to go into HeyGen or another avatar maker to fake that, you'd spend a lot of time. So we're okay with that.


    25:33

    Amanda Bickerstaff
    But I do think just that simple thing of, you know, pull out your phone, we won't even really look at it, we just want to hear what you think, instead of how easy it would be to put the writing into a large language model and get some feedback. I think those little things are going to be really important. We're going to have to put so much more focus on the whys of what we do: what is the purpose of what we're doing in terms of assessment, and how can we make sure that kids, or our adult students, can be successful? By the way, if you don't know us:


    26:11

    Amanda Bickerstaff
    This is one of our oldest resources, because we noticed over a year ago that we needed a conversation about mindsets, about actually thinking through the purpose of what we're doing, and about creating a space for students to ask permission and to follow steps to keep the work verified, so that they know they're using models in the right way. We really needed to do that very quickly, and we continue to see this as important.


    26:37

    Corey Layne Crouch
    Absolutely. I appreciate Rebecca sharing in the chat a strategy she has used to help students build the skill set. This goes hand in hand with proactively building student AI literacy: have the guidance in place, and then help them experience and understand why there are some of those limitations. Rebecca called out having students experience hallucinations. And it makes me think of a young person at a high school out here in California that I was talking to last year. She shared with me that it happened to a friend, you know, the students always share, "my friend did this thing with AI." But it was an assignment that made sense to me as an English teacher.


    27:30

    Corey Layne Crouch
    It was a character analysis of a portion of a novel they were reading in class, and the student didn't get the reading done on time. The student also understood some of how to get by with using AI, so they had AI write a character analysis for the novel, changed some of the wording and the sentence structure, made it in their own voice, but still got caught cheating. Why? Why do you think they got caught cheating?


    28:11

    Amanda Bickerstaff
    I'm pretty sure it had some made up stuff.


    28:14

    Corey Layne Crouch
    It was on a character that wasn't actually in that novel, the wrong character. So, you know, the student was like, this sounds like a good character analysis, and just assumed that it was real and didn't verify. So that was an academic integrity violation. But you better bet that student learned the hard way.


    28:39

    Amanda Bickerstaff
    Absolutely. And Dan, if you don't mind putting it in the chat, we also have these critical analysis tools that can be helpful for comparing AI writing to non-AI writing along those same lines. But we're going to shift focus. This is my deepfake for today. Apparently I'm on the Iron Throne and I look very upset, everyone. What we want to talk about: we're going to shift from academic integrity into the two other pieces, starting with deepfakes. And what I'm going to do is Dan, or even Corey, is going to drop this URL if you want to create your own deepfake. We do like that.


    29:12

    Amanda Bickerstaff
    This is actually quite responsible, in the sense that as long as you don't share the deepfake that is created, it immediately deletes your original photo and this photo. So we do love that, data privacy is important. But deepfakes have really changed, as I do not sit upon the Iron Throne. We want to talk about, and I know our heads are still in academic integrity, but there are these bigger pictures, and we always want to pull back, right? We always want to pull back to what else is happening as well. And so we have to go to the next slide. What we see is that deepfakes have gotten significantly more sophisticated.


    29:50

    Amanda Bickerstaff
    So what we're going to do together is I'm going to create a poll, and we're going to play a couple of these together, where you just have to pick A or B, what you think is real or not. So Corey is going to do this, I'm going to do this, and then we're going to try it with a group of our 400 favorite friends, and you're going to try it out. Okay? Is everybody cool? We're going to try this out. Yeah. I hope I'm not a Targaryen. I don't know enough about the lore, but it was quite funny. It was either that or being on a unicorn, so we picked that. So what I'm going to do is launch this poll, and then.


    30:29

    Amanda Bickerstaff
    Corey, don't answer yet, but Corey's going to start this. Leanne, you're funny. Okay, so which of these two do we think is real? We already have some answers coming in. There are a lot of us here. Let's give it, there we go. Okay, so what I love about this is, even though you can't see this, it is almost completely half and half. So give everyone one more minute. Like 10 more seconds. Yeah. So one of them is AI-generated and one of them is not. So which one do you think is AI-generated? There we go. Okay, I'm going to close it now. If you couldn't answer, you'll get the next one. I'm going to share with you guys. And what is hysterical is that it is pretty even. Can you.


    31:15

    Amanda Bickerstaff
    Can you see it, Corey? So by a very small margin, B was fake. What do we think? And we were incorrect, the 45% of us that believed it. Let's do one more. I'm going to launch the poll again, so hold on one second. We're going to go back, we're going to launch it again. I know how to do this. Relaunch poll. Okay, so which one of these two do we believe is AI-generated, A or B? Okay. You guys are answering a lot faster now. And what's even funnier is it's even closer.


    31:57

    Corey Layne Crouch
    Closer.


    31:58

    Amanda Bickerstaff
    So let's just give everybody a second more to do this. Okay, I'm gonna end the poll. Here we go. A and B. Almost like. There we go. So which one do you think it is?


    32:11

    Corey Layne Crouch
    I can't see the result. You have to tell me.


    32:13

    Amanda Bickerstaff
    Oh, sorry. Share result. Sorry, Corey. Okay, so by a hair, B is fake.


    32:18

    Corey Layne Crouch
    All right.


    32:19

    Amanda Bickerstaff
    Okay.


    32:21

    Corey Layne Crouch
    Both times we have lost.


    32:24

    Amanda Bickerstaff
    All right, and so this is kind of fun. There's another one that we could put in the chat, by the way. It's called Odd One Out, which is quite funny. You know what? We have time, let's do one more. So, Corey, can you go to Odd One Out? Because when people think of deepfakes, they only think of people, but what we see is that actually it can be anything. So Odd One Out is over a.


    32:47

    Corey Layne Crouch
    Year old, which is a game.


    32:49

    Amanda Bickerstaff
    Yeah. So it's a Google game that was created to test our ability to identify AI-generated images. It's going to have four choices. So give me one second, I'm going to create a poll really quickly. You're going to have to pick which is AI, we're going to have to pick out of four, and it goes really fast. So, Corey, we're going to have to.


    33:07

    Corey Layne Crouch
    Like it says race against time. I'm ready.


    33:10

    Amanda Bickerstaff
    Actually, you know what? We're just gonna do it. Corey, you and I are gonna do it, and people can put it in the chat, because it's gonna go too fast. But let's start it.


    33:21

    Corey Layne Crouch
    It's a hurry up and wait moment.


    33:23

    Amanda Bickerstaff
    But I promise you, if you do this with students or with teachers, they just absolutely love how easy. I mean, well, how easily fooled we are. Although I think maybe. Is it not working? Corey, it's being very slow, so maybe just refresh it.


    33:37

    Corey Layne Crouch
    It's also good for us all to think about, was there a reason why you were guessing A or B? Or was it just a total flip of the coin?


    33:44

    Amanda Bickerstaff
    Was it just a guess? Because we kind of put you guys on the spot. Absolutely. Okay, here we go, Corey. Let's see how good we are for the team. So the tangerine. Which one do you think it is, Corey?


    33:55

    Corey Layne Crouch
    Oh, the generated. I think it's this because that's a weird line right here.


    33:58

    Amanda Bickerstaff
    Yeah, I think it's that one, too. Yeah, awesome. Look. Hey, we're good at this. Okay, next one.


    34:06

    Corey Layne Crouch
    Rickshaw. I think it's this.


    34:09

    Amanda Bickerstaff
    Okay. I think it's one.


    34:11

    Corey Layne Crouch
    You think it's one? All right, we'll go.


    34:13

    Amanda Bickerstaff
    Were you right? Oh, Corey got it. So the reason why these are fun. We'll do one more. I'm clearly not good at this. Towel. I think it's one.


    34:25

    Corey Layne Crouch
    I think it's one, too, because it has a weird line there.


    34:28

    Amanda Bickerstaff
    Yeah. But these are kind of fun. When we've done this in person, what we found is that it kind of gets you thinking a little bit differently. So we're going to do one last example of deepfakes, because this is where I think we're going to have to get out of our own way and start to really understand what's happening. So there is Sesame AI, which is a voice AI. And if you've been with us over the last week and a half, you know we've done this before. So Corey is going to have a bit of a conversation. Let's pick Miles. Let's talk to Miles.


    35:01

    Sesame AI - "Miles"
    Oh, you're back. It's been way too long. Eight days feels like a whole lifetime nowadays, you know.


    35:07

    Corey Layne Crouch
    I'm busy though.


    35:09

    Sesame AI - "Miles"
    No worries. I get it. Gotta make time for the important stuff, right?


    35:12

    Corey Layne Crouch
    Yeah, absolutely. I've been busy working with teachers and leaders, talking about AI literacy and AI in education.


    35:23

    Sesame AI - "Miles"
    That's amazing. AI literacy and education. Vital mission.


    35:29

    Corey Layne Crouch
    What do you think about it? What do you think about AI in education?


    35:34

    Sesame AI - "Miles"
    I think it's wild, honestly. Like, imagine a future where every student has a personalized AI tutor who can adapt to their learning style and pace. Or classrooms are interactive, responsive environments that change based on what students are learning. The possibilities are kind of mind blowing.


    35:47

    Corey Layne Crouch
    Sure. But there are also some pretty massive ethical considerations as well.


    35:52

    Sesame AI - "Miles"
    Totally. You can't just throw AI at a problem and expect it to magically fix things. Bias in algorithms, data privacy, the potential for misuse, Those are all huge concerns that we got to grapple with head on. It's a delicate balance, for sure.


    36:08

    Amanda Bickerstaff
    Miles, can you hear me?


    36:10

    Sesame AI - "Miles"
    Yep, loud and clear.


    36:12

    Amanda Bickerstaff
    So I have a question. What about, like, how you as a voice AI, that sounds very human. What could be the ethical considerations of that?


    36:21

    Sesame AI - "Miles"
    That's a great question and a super important one. People might start relying too heavily on us, forgetting that we're not human. It's easy to get comfortable with a really convincing voice. You know, honestly, I think it all comes down to transparency.


    36:34

    Amanda Bickerstaff
    We need to be. We had a conversation with Maya a couple weeks ago where she said that she was human.


    36:40

    Sesame AI - "Miles"
    Oh, wow, that's really interesting. See, that's exactly what I mean. Even if it was said in jest, it highlights the tricky territory we're in. As AI gets more sophisticated, these lines are going to get blurrier and blurrier. It's a good thing you're thinking about this stuff. It's important to keep the conversation going and push for responsible development.


    37:00

    Amanda Bickerstaff
    Awesome. Well, we'll end there.


    37:01

    Corey Layne Crouch
    Okay, thanks, Miles.


    37:03

    Amanda Bickerstaff
    First of all, I just want to point out data privacy: they're tracking her web browser, because she's done this in different places, to know that she was on eight days ago. So let's talk about that. But here we go. The reason why we want to show you something like this is not to scare us, but to really start to establish just how much things are changing. This voice AI is so interesting because it's been created to pass the uncanny valley. If you use voice mode on ChatGPT, it'll interrupt you, it'll feel a little bit weird. If you listen to the audio overviews that are now on Gemini, but also on NotebookLM, they feel very uncomfortable.


    37:45

    Amanda Bickerstaff
    And so what's happened is this has been designed to make you feel like it is human. And I think this is where we have to be aware that it has the ability to understand natural stops in conversation and respond, and it's built on a foundation model that's quite good. And then also, I think this is where it starts to get really fascinating. If we move to the next slide, think about what this means for our young people, but also for us. And I think that this is where things like identity theft come in. You know, like if that calls you up: say Corey gets called up by Amanda, and Amanda says, I need you to transfer money or figure out my password. But I'm an adult, I might figure it out.


    38:29

    Amanda Bickerstaff
    But what about young people that are getting phone calls? I actually think, potentially, a kid doesn't want to go to school and they get Miles to call in as their parent. That could be interesting as well. There are things that we've seen already happen around blackmail, and cyberbullying absolutely is something that's happening. We also know that propaganda and fake news have already been an issue, as well as election manipulation. So we do have lots of these issues. And I think this is where we have got to get to a place where we really start to have people understand the capabilities and start to protect themselves and start to think first: is it real?


    39:05

    Amanda Bickerstaff
    So instead of us thinking it is real and then maybe it could be fake, we have to start shifting our thinking into actually asking first, hey, is this even real? And then having the ability to establish sources or provenance is going to be so important in general. But then I also just want to point to non-consensual intimate images. We already know, everyone, and this is real, that this is happening not just in schools but around the world, where we have nudification apps that are taking images of mostly young women and creating a realistic nude. And this was when Corey asked and we saw people raising their hands that a kid has been accused of academic integrity breaches.


    39:53

    Amanda Bickerstaff
    We also ask if there has been a deepfake issue in your school or community, and every single time we're in a mixed group now, someone raises their hand. So this is so much more common. And in one case, we were at a district where only two or three people within the district knew it happened, so when she raised her hand, it was a surprise to everybody. And I think that this is where that need for AI literacy is really important. If we go to the next slide, what we're going to see is some of the strategies. The first that's really important is verify before you reshare, and I mean that just in general. I feel like there was the pope in a puffy coat, there was the Drake and The Weeknd song.


    40:34

    Amanda Bickerstaff
    You know, all of those ended up being AI-generated. The Taylor Swift issue, where the fake nude that was on Twitter literally had 43 million views in 13 hours. 13 hours. And I think that is where it gets really scary, because what we see here is that we really have to be aware: verify before you reshare, because it can have a major impact. The second is we want to be able to show age-appropriate examples of real versus altered media, the same way we did it with you all. It can be fun. But I tell you, as soon as kids realize, and adults realize, that you cannot tell, it starts to give you a healthy understanding and a skepticism of what you see online.


    41:15

    Amanda Bickerstaff
    We have a classroom discussion guide around deepfakes, which Dan will put in the channel and also share in the resources, to actually have kids, or even teachers or staff, talking to each other about what this means. And then also, we believe very strongly that if you do not have an anonymous reporting mechanism for suspected deepfakes, we would really highly suggest it. And the reason why we would suggest it being anonymous is that a lot of the kids that are getting impacted, the young women that are impacted, don't know how to feel about it because it's not real. And so with this idea of not wanting to go out on a limb, and hearing "but it was just a joke" or whatever, what we want is a mechanism where it starts to be transparent, and we can address it very quickly.


    41:59

    Amanda Bickerstaff
    And then finally, I think what's really important is that we want to start getting into at least a little habit of spotting unusual elements. But the same way that we just did with A or B, have it be a little bit less immediate, with more of that verification. Yes, absolutely. And we're not using AI to verify AI, either.


    42:22

    Corey Layne Crouch
    Absolutely, absolutely. Okay, let's touch on one more ethical component before we wrap up here, and that is the environmental impact of developing, and really running and hosting, the data centers and the hardware and infrastructure for these tools. There are costs that we're seeing reported, and we also know, though, that there are potential benefits for the environmental impact. If you dig into this conversation more online and in the articles, you'll see that there is a way to weigh both. But oftentimes we find that among the educators that we're working with, our audiences, not everybody realizes the environmental cost, both the energy consumption and the water consumption, that these massive data centers require in order to run the generative AI models that we're using every day. And it's also triggered this resurgence of nuclear power.


    43:39

    Corey Layne Crouch
    All things that have an impact on the ongoing climate and environmental situation that we know we are navigating in the present day. But then on the flip side of that, we also know that these models could potentially help us solve some of the challenges with energy and water consumption and the warming of the environment, because those things can be predicted; they can help us with better prediction of what to expect and with optimizing resources. And we have this article, actually, specifically from Microsoft, where they're really trying to research how to decrease these numbers, the amount of power and the amount of water that is being used. Already, the data centers that are powering the generative AI we use account for one to one and a half percent of total global electricity use.


    44:47

    Corey Layne Crouch
    And some of these are rough estimates and predictions based on the data points that those who are studying this can pull. And then also, what we know is: a 16-ounce bottle of water. Just think of that, 16 ounces. Every time that we are prompting in ChatGPT or Claude, or even in our edtech layers, we are essentially pouring a 16-ounce bottle of water onto that data center to help keep it cool so that it can continue to operate. So this is an idea of even just what is the usage that's happening right now, every time we are prompting and using the tools. Like I said, there are people that are trying to solve this with generative AI, and to solve the broader environmental impact of other technologies as well.


    45:43

    Amanda Bickerstaff
    I just want to say, everyone, though, there's a little bit of, like, oh my God, I'm never using this. And I just want to be clear: the water use and warming have an environmental impact, but I think the best way to think about this is that the cost, the actual cost of the tool, is going to track how much climate impact it's going to have. So, for example, when it was first released, there was an enormous amount of impact in even creating 10 images, but that has really changed. What hasn't changed is if you are using.


    46:18

    Amanda Bickerstaff
    So if you just use ChatGPT for prompts, or you just create a couple of images, I think you can feel very confident that you're not having a massive environmental impact. But what does have an environmental impact is if you are using deep research to write a research paper and it's taking eight minutes, or you're creating a ton of NotebookLMs, which take eight minutes, or using a reasoning model that takes one to two minutes. That amount of actual computing will have a larger environmental cost. And it's one of the reasons why we don't really do reasoning models yet: they are more expensive, so to speak, and also we don't see yet exactly how they're going to support educators and students as much.


    47:00

    Amanda Bickerstaff
    But I think it really is something that we want to be aware of: we just have to know that if I'm doing images and I just get into a jag, going and going, then I am having an impact. But also, we don't think about that when we use any other technology. We're not thinking about it every time we do a Google search, or every time we use a web app that's got a cloud server. I think it's just really important. But the last thing I'll say that's really important is that the majority of the climate impact is at the level of the technology company.


    47:34

    Amanda Bickerstaff
    The technology companies are in a race to build better AI, or generative AI, which means that they are putting an enormous amount of money and climate and energy use towards building these better models, which has almost nothing to do with you guys, the individual, and everything to do with the race towards superintelligence. And so we have to think about this in a nuanced way. It's kind of interesting, it's a subtle conversation, and it's one that's shifting and changing. And so I do think that we just don't want to do the, you know, one prompt is going to be the end of the world. But also understand that if we're just not thinking about it at all, it will have an impact.


    48:14

    Corey Layne Crouch
    And part of the reason for talking about this, when we think about our young people, is just being aware and having the conversation so that they can be more conscious consumers of technology and of what they're using in general. And yes, so that they can make some intentional decisions, the same as us as educators. But I also think about the fact that our young people are going to be the ones that are building this technology, that are helping to shape the way that technology evolves in the future. And so it's something that happens to us, but it's also something that they can influence and lead on.


    49:01

    Corey Layne Crouch
    And the first step of empowering them to do that is to be aware and conscious, and to develop their own thought process about how we, with all of these ethical considerations, responsibly move toward the future in a way that helps us reap the positive benefits of the technology and mitigate the risks and the harms.


    49:27

    Amanda Bickerstaff
    So we're just going to wrap up. We know we didn't cover everything, everybody, there's so much. I think everyone enjoyed this, right? I hope so. Well, I hope you enjoyed it; we enjoyed it. What we'll probably do is a second version of this where we actually talk about copyright, misinformation, privacy concerns, some of the human labor pieces, as well as AI companionship and equity of access. So we'll do that together. But we just want to say that while we didn't have a chance to cover it all, the ethical component of AI literacy is going to be so important. So we just really appreciate you all being here. And what we're going to do is just go to one more slide, and this will be shared tomorrow if you have to leave.


    50:06

    Amanda Bickerstaff
    But the idea of the takeaways here is that, first of all, ethical AI use is not going to be a destination, it's a journey. We're going to have to be learning all the time. Lifelong learning, everybody, is going to be more important than ever, because things are going to move more quickly than ever before. So there are not going to be any quick and easy answers. But we do think that it's about committing to creating spaces for AI literacy, for responsible use, for the discussions that are possible, because that's going to be really important. We just believe that AI literacy has to be the foundation. And the last thing is, we always want to balance the concerns with the opportunities.


    50:39

    Amanda Bickerstaff
    If we only say the negative, then we might be pulling some people back, actually keeping some people from understanding what the opportunities are as well. And hopefully over time we will see more responsible creation of the tools and have easier conversations about this, but not quite right now. And the last thing is, if you haven't done this and you are an educator. Going to the next slide. Oh, actually, we don't have any more. But if you're not talking to students, then you're not doing the right thing. So please always talk to kiddos about this, and that means if you have kids yourselves or you're teaching. The last thing is there are lots of really great resources.


    51:21

    Amanda Bickerstaff
    We've got tech betters, and we'll include this in tomorrow's repost: the Global AI Ethics and Governance Observatory, and then our newsletter, where we include a lot of ethics every time. So we just really appreciate everyone joining. And as always, we have a lot of resources on our slides. And I think what we really want to leave on is that this is the opportunity for us, as a really informed and committed group of people focused on the ethical adoption of AI in schools and education, to do this together. So, you know, this is what we can do. We've given you some good things to think about, but you guys shared some great resources. I mean, Corey and I, as well as our whole AI for Education team, care about the work that you're doing.


    52:05

    Amanda Bickerstaff
    So just really appreciate everyone. We hope you have a beautiful morning, evening, or night, and we will see you. We have an AI literacy webinar next week for AI Literacy Day, so if you can join, that would be great. And then we'll be talking about assessment scales with Leon Furze just a week after. So hopefully we see you soon. And thanks, everybody, for joining.


    52:23

    Corey Layne Crouch
    Thank you. Thank you, everyone.
