Anthropomorphizing AI: The Impact on Students & Education

From content recommendations to smart devices in the home, young people are growing up in a world crowded with AI technologies. In partnership with the Raspberry Pi Foundation, we explored what this means for the mental models that school-age students are forming around how the world works. In particular, we focused on anthropomorphism and the impact that humanizing AI systems has on young people's engagement.

Key topics included:

  • Understanding the importance of mental models: Learn how students construct their understanding of AI systems through daily interactions with technology, and how to identify common misconceptions that arise from anthropomorphized AI.

  • Impact and research: Explore the research relating to the potential impact of humanizing AI systems and what it means for young people's ability to think critically about the capabilities and limitations of AI systems.

  • Classroom Strategies & Tools: Master practical approaches for teaching AI concepts accurately while acknowledging students' natural tendency to anthropomorphize. Learn specific techniques for helping students develop a more nuanced understanding of AI systems.

AI Summary Notes:

Amanda Bickerstaff and Ben Garside explored the concept of anthropomorphism and its implications for AI, particularly how it shapes young people's perceptions. The session began with an overview of the historical context of anthropomorphism, followed by insights on AI literacy and potential misconceptions arising from anthropomorphizing AI systems. Research findings highlighted that while anthropomorphism can enhance trust in AI, it also raises concerns about emotional attachment and the development of inaccurate mental models among youth. The discussion included risks linked to this phenomenon and presented practical strategies for educators to avoid misleading representations of AI, emphasizing the need for precise language and clear educational resources.

🌍 Introduction to Anthropomorphism in AI (00:15 - 10:47)

  • Amanda introduces the topic of anthropomorphism in AI and its historical context.

  • Ben Garside discusses the definition of anthropomorphism and its relevance to AI.

  • The importance of understanding how AI is perceived by young people is emphasized.

🧠 Understanding AI Literacy (10:47 - 19:47)

  • Discussion on how anthropomorphism can lead to misconceptions about AI capabilities.

  • Examples of AI language models anthropomorphizing AI are shared.

  • The need for clear language and accurate representations of AI is highlighted.

📊 Research Findings on AI Perception (19:47 - 32:30)

  • Research indicates that anthropomorphism increases trust and usage of AI products.

  • Concerns about the implications of young people forming relationships with AI are raised.

  • Examples of AI tools that encourage emotional attachment are discussed.

⚖️ Risks of Anthropomorphism (32:30 - 43:41)

  • Five key reasons to avoid anthropomorphizing AI are outlined, including the development of incorrect mental models.

  • The impact of AI on young people's self-worth and understanding of technology is discussed.

  • Real-life examples of negative consequences from anthropomorphizing AI are shared.

🔧 Strategies for Addressing Anthropomorphism (43:41 - 53:16)

  • Practical strategies for educators to avoid anthropomorphism in AI discussions are provided.

  • The importance of using precise language and imagery in teaching about AI is emphasized.

  • Resources for AI literacy education are shared, including free materials from Raspberry Pi Foundation.

  • Amanda Bickerstaff

    Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.

    Ben Garside

    Ben is a senior learning manager for the AI literacy team at the Raspberry Pi Foundation. Having been a classroom teacher (11-18) for 14 years, Ben moved to the Foundation in 2019 and has since worked on creating curriculum resources for the National Centre for Computing Education in the UK, written and filmed online courses, and more recently has been the lead content developer for Experience AI, working in collaboration with Google DeepMind and Google.org to produce AI literacy classroom resources that are being used globally. Ben also contributes to a UNESCO working group for the recently released AI competency framework for students.


  • Amanda Bickerstaff
    Welcome to our last webinar of the year. It's clearly December and I'm firing on all cylinders. Excited to have you all here today; it takes a little bit of time for everyone to join. I'm Amanda, the CEO and co-founder of AI for Education, and I am so excited to have a really good discussion with Ben Garside from the UK. We're friends from across the pond. He is a senior learning manager for AI literacy at the Raspberry Pi Foundation. And today we're going to be talking about something really important: this move toward anthropomorphizing AI, treating AI as if it is human or human-like, which is something that has been happening since AI was developed.


    Amanda Bickerstaff
    In fact, the first chatbot was in the 1960s. Eliza was a mental health chatbot, and it was given a woman's name; we'll talk about that later. All it did was reflect things back: if Ben was having a bad day, Eliza would reply, "You're having a bad day?" That's all it did. But we have been anthropomorphizing AI for as long as there has been a field of AI. And we know that with our youngest people this can cause harm, and it is something that we need to be aware of and talk about. So the way we're going to do this today is we're going to have a little bit of a presentation.


    Amanda Bickerstaff
    We're going to have a word cloud (who doesn't love a word cloud?), and we're going to talk about the research and the impact, and then some strategies and quick wins you can bring back to your context. I'm going to ask Ben to go to the next slide. We, as always, are so thrilled: we have 159 people with us already today. This is your time for not just interacting with us; be like Brixen and say hello to everybody, where you're from, what you do. We love that our chat is a place for our community of practice to build, with people who are engaged in meaningful discourse around generative AI and education.


    Amanda Bickerstaff
    I already see some people in the chat that I'm happy to see, but what we want this to be is really this idea of everyone coming together and having that conversation. If you have a great resource, say a piece of research or an anecdote you want to share, please share it in the chat. Just make sure you have your chat settings set to everyone, and not just hosts and panelists, so that everyone gets to see your good news and your good sharing. But what I'm going to do now is pass it off to Ben and introduce him. So I just want to give a little bit of background.


    Amanda Bickerstaff
    I started AI for Education essentially last June, and one of the first people I ever talked to was Rachel Arthur, who was at that time at Teach First, and it was amazing to meet someone who thought about generative AI and responsible AI and AI literacy in the same way that I did. She's now at the Raspberry Pi Foundation, and she made the connection with Ben. The Raspberry Pi Foundation has done so much good, not just in the UK but globally, around access to computer science literacy and computational thinking, and is now really focused on responsible AI. So, Ben, I want to say hello; do you want to introduce yourself?


    Ben Garside
    Thank you, Amanda. And thank you very much for letting me join you on this webinar, because I'm really excited about this topic. It is definitely a topic that we talk about an awful lot, so much so that Amanda and I can both say anthropomorphism without stumbling, which I think is an achievement by itself. So I am Ben Garside; as Amanda said, I work for the Raspberry Pi Foundation on our AI literacy team. We primarily work on creating free resources for bringing AI literacy education to young people. So, yeah, hopefully a super important topic. Are we ready to move on?


    Amanda Bickerstaff
    Let's do it. Let's get going.


    Ben Garside
    Yeah. Cool. So this concept of anthropomorphism, the word itself, isn't unique to technology. It is just when we human beings treat something that isn't human as if it were, giving it human-like behaviors and characteristics. And we do this an awful lot; I think it's just our nature. For example, I know many people who give their car a name and speak to their car. You might even do this with your own animal: if you have a pet at home, you might speak to that pet as if it were a human being. So broadly, that's what we mean by anthropomorphism. Where it touches technology, and particularly AI, is when we actually start treating these robots, this latest technology that is becoming more and more human-like, as if they were human.


    Ben Garside
    And this has started right from the beginning of AI, with the original Turing Test, if you've heard of that. Alan Turing, the British mathematician, proposed this idea in the 1950s, and he said we would know when we've achieved this kind of AI, when computers can behave like humans: when, speaking in dialogue with a computer, we might be tricked into thinking we're speaking to a human being. So this idea that AI is achieved when we as human beings are tricked into believing a computer is human is almost the original idea of AI. So the examples we might give here, what we specifically mean, is when we attribute feelings or emotions to artificial intelligence, or we personify it.


    Ben Garside
    You may have seen these images out there, and we'll show you examples a little bit later, but there is plenty of imagery. You just need to turn on the TV and you'll see this idea of talking robots with smiley faces and emotions, something that gives the impression that these tools are able to interact with the world in the same way we humans do. So as a kind of starter, I asked a large language model, and by that I mean something like ChatGPT, Google's Gemini, or Anthropic's Claude; these are all examples of large language models. I thought I would ask it, in preparation for this webinar: how does AI work? And this is the response I got. It's a really interesting example.


    Ben Garside
    So it says: imagine that you have a super smart robot friend who can learn how to do things just by watching or listening to people. The robot friend gets its smarts from something called artificial intelligence, or AI. And AI can look at huge amounts of information and learn from it, just like you learn from watching and doing things. So this is a really relatable example. You can imagine, if you didn't know anything about AI and you were a 10-year-old or a 13-year-old and wanted to learn a little bit, you might ask a large language model to give you that answer. On the face of it, it seems actually pretty good. It seems relatable; there are metaphors in there. But when we unpick it a little, what it's doing here is anthropomorphizing.


    Ben Garside
    It's saying we've got a super smart thing that does things just like you do. So the purpose of this webinar is to unpick that a little bit and work out why we do this. Is it helpful to us to do it? What might be the pitfalls of doing it?


    Amanda Bickerstaff
    Yeah. And I just want to point out a couple of things. Number one is that when we do AI literacy training, we often start with just very basic prompting, and something like this, if you really want to talk about anthropomorphism, is really useful, because the idea of a friend, the idea of learning just like people do, is probably one of the most common misconceptions: that the way AI learns is the way humans learn. What we've seen is that while artificial neural networks are patterned off of human neural networks, it's very different from what we know of human learning. We don't really learn the same way that AI does in terms of reinforcement learning and other things. The way this is couched makes it feel very approachable, but it's also just not right.


    Amanda Bickerstaff
    And I think that's where we get into this piece: it isn't just anthropomorphizing, it's how it's setting up your experience. It's already starting from a place where it's not telling you the truth, essentially. And I think that's something we want to talk about as we get deeper: some of this is really intentional. It's not just a random part of building a chatbot. It doesn't just happen; it's intended by the people developing these tools. Yeah.


    Ben Garside
    And I think the concept that we want to be the thread throughout all of this is: what kind of mental models are young people building when they're working out what the world looks like? I definitely grew up in a world that didn't have AI. I didn't even get a smartphone until quite late in my teenage years. So the world for young people is very different to how it was for us, and the technology around them is becoming more and more human-like. So a lot of the thread that we want to pull on here is:


    Ben Garside
    where's the balance between giving young people language that's not too technical, but that isn't leading them into misconceptions, into growing an incorrect mental model of how the world around them works? Okay. So we thought it'd be really good to find out what your opinions are at the moment, because, like I say, we'll unpick this a little bit. How do you feel about anthropomorphism? Is it something you like? Do you think it's actually not so bad, or do you think there are risks behind it? And if you think there are risks, what do you think those risks are? If you scan that QR code on the screen, that should take you to a word cloud. Am I right, Amanda?


    Amanda Bickerstaff
    Yes. And Carmen will put the link in the chat, because I know not everyone will have access to a smartphone. So just hold on for a second and we'll drop it in; you'll have a little bit of time to answer. Like I said, you can't end the year without a good word cloud. So there we go: everyone can go to pollev.com Carmen M982; please click on that, or in this case just cut and paste it. Then you'll be able to add your answer to this. We'll come back to the word cloud once we've had some answers, so try it out. Then we're going to keep rolling, but we do like to have your involvement as well.


    Ben Garside
    Shall I move on to the next slide, Amanda? Yeah. Cool. Where do we see anthropomorphization in AI? I think it's good to categorize it into two distinct areas. First, we see it in imagery, and that's videos as well. Like I mentioned before, you don't need to look very far on television to see examples of these kinds of things. It's so common. If you were to go on a search engine now, search for AI, and go straight to images, I would put my house on the fact that most of the image results would come back as smiley-faced robots. Right? Or, maybe even worse, you'd see this example of a female-looking robot.


    Ben Garside
    Or you may see this idea of a robot where you can see a human brain inside it, or the other way around: a human being with cogs in the head, linking the machine very much to human beings. Second, you also find it a lot in the words we use to describe AI. You might even have had the experience of speaking to people who say these kinds of things: oh, it understands me better than my friends; my smart speaker listens to me; the AI is trying to outsmart me. These are really easy terms to use.


    Ben Garside
    They're really common terms that you hear people say. But if you really start to unpick them a little bit: does an AI model understand you better than your friend? And even this one, a smart speaker listens. I think it's true that a smart speaker is constantly detecting the words that you're using, or the audio that it receives. But is there a problem with the word listen there? Is that a human word? Is that something that we as human beings do? And can we make a distinction between that and what computers are able to do?


    Amanda Bickerstaff
    Absolutely. And I will tell you, it is pretty interesting. That robot at the top is one we've used in our trainings as well, and actually, Ben, to be meta about it: maybe we shouldn't use AI bots that look that anthropomorphized in a webinar about anthropomorphism. So, totally good point. But it is really interesting, because when you ask DALL-E or Adobe Firefly or Canva Magic or Leonardo for images of AI or chatbots, this is what comes up. It is highly anthropomorphized. It is cute, right? Although I will say, Ben, sometimes when we're in trainings doing this, they're scary; sometimes it's the scariest chatbot, one that's going to come for all of our goods or whatever, which is pretty interesting.


    Amanda Bickerstaff
    But I do think that in the case of the smart speaker, like Alexa, I know I've said listen, because it's hard to find a word that is actually true and accurate but that people understand. And we talk a lot about this: even when you're not intending to anthropomorphize, it's so easy to do. I was saying to Ben before, I have to catch myself all the time: I'll say it thinks instead of it computes, or I'll gender it, like ChatGPT, and I'll say he. Or I'll talk about very human traits, and then I have to stop myself. Because it's hard to find the words.


    Amanda Bickerstaff
    Right, Ben? We might need, in fact we probably do need, a whole new vocabulary. Although I will say, please let that vocabulary not be designed by technologists, because hallucinations is not a good term either. But I do think that we're going to have a different way of talking about AI.


    Ben Garside
    Yeah, I completely agree. And I think it's interesting that you say, Amanda, that you keep falling into a few traps yourself, because on our team we talk about this a lot. We are so careful with all the language that we use in our resources, but we still fall into these traps ourselves, because it's almost natural language; we've been using this anthropomorphic language for everything we describe for so long. So I don't think it's anything we should give ourselves too hard a time about. But if you're aware of it and you can pick yourself up on it, I think that's really good too.


    Ben Garside
    And I think there's also a really interesting argument around anthropomorphism, where if you're someone who could be described as in the know, and you and I are, we know that a smart speaker doesn't actually listen. If we're speaking to each other, is it okay that we're using that language? Probably. But if we are talking about teaching young people, who are trying to make sense of the world and make sense of how the technology works, I think that's when we need to be really careful with the language we use.


    Amanda Bickerstaff
    Yeah, absolutely. And I just want to say to Rochelle: there is definitely that idea of being polite, saying thank you and please, of course, and I catch myself doing it too. But the real pet peeve we're talking about is when people say, oh, I'm going to be polite to generative AI now, so that when AI takes over, it'll know I was polite. These are the kinds of things that can get us into some really negative perceptions of what's happening. And while it might just seem like a joke, think about a kid hearing that. The way we talk matters to kids. The smart speaker may not always be listening, but you know who is? Our 8-year-olds, our 9-year-olds and our 10-year-olds.


    Amanda Bickerstaff
    They are always listening. In fact, if you have a five-year-old who suddenly starts saying some things they heard in the car and gets in trouble at school, you know this is true. The ways we approach and talk about this technology can be really impactful for those young people who have fewer reserves and less of a full mental model of the world. So what we're going to do is look at the word cloud; we're going to come off share just for a second, Ben. There might be some strange stuff in there, so everyone take it with a grain of salt, but here we go. Microaggressions is really big.


    Amanda Bickerstaff
    I don't know if everyone said that or if two people said it, but it is really interesting to see how many different words we have. Relationships comes up a lot. You can see that we've got friends, behavior, chatbots, emotionally; that's really interesting. We've got dependency; that one we see a lot. We're going to talk a little bit about the artificial intimacy component of human-AI interaction. And I just want to call out the creation of an echo chamber. These are the things that are really interesting about anthropomorphism. You think, oh, Ben and I are going to do a PD on this, and who's going to come?


    Amanda Bickerstaff
    But this is real, everyone. Almost every generative AI tool now comes in chatbot form, and that is creating a completely new kind of interaction with technology. Kids who had Alexa or Siri and those types of things saw the precursors, but this is something that is going to be so much more impactful going forward. Okay, so Ben is going to take control back and we're going to run through some of the research.


    Ben Garside
    Whilst I'm doing that, I'm just going to shout out a couple of people who've said some nice things in the chat. Excuse me a second, because I need to get the right thing up.


    Amanda Bickerstaff
    It's like, no, I will not share my screen. There we go. We got it.


    Ben Garside
    There we go. We've got it. Yeah. So let me just get the chat back up, because there are a couple of things I wanted to shout out. I think Scott mentioned hallucinations being a marketing term; we're going to come to that, to why people use this terminology. But you're right, and I also think it's going into metaphor territory. There are some worries there: you might hear the term hallucination and think, okay, I know what a hallucination is, because you think of it as a medical term, and therefore you've now formed an opinion about how hallucinations work with machines. To unpick that is actually a really hard process, because you've already made a link to terminology that makes sense to you.


    Ben Garside
    So to unpick that and actually help you understand how it works is now a harder job for educators to do. And also, yes, I agree with you, Kate: giving the AI names such as Claude is problematic. Absolutely, I agree. Okay, so what does the research say? I thought it'd be interesting to start off with why people anthropomorphize in the first place, because is it all a bad thing? The companies do this deliberately, so why are they doing it? If you look at the research, quite an interesting theme comes out. The research says that we anthropomorphize because it increases trust and increases the use of products; it satisfies the human need for affiliation and inclusion, particularly for the socially excluded; and it promotes relationship building. Just for a second, take that in, because that's an interesting thing.


    Ben Garside
    Are people doing it through malice? If we're educators using language to try to help people understand, no. But if you look at it from a sales point of view, there's a lot here pointing to these technology companies using it because they want to sell you the product. They want it to be relatable to the user so that you're encouraged to use it. But with that, which groups of people are being most encouraged to use it, or might be manipulated the most by this technology? Maybe those people with a need for affiliation, or socially excluded people who want to build relationships with machines. Where might the problems be around that?


    Amanda Bickerstaff
    Do you want to go back? Just to hop in: I think one of the things that's really interesting here is that there are already a couple of examples of where this is happening. There is a company building a social media platform where the only human is you, and a thousand or so bots act as the social media community, responding, liking, and creating that echo chamber, which is really fascinating, and very intentionally built for that affiliation component. And then we're going to talk about Character AI going forward. Character AI is the third most popular chatbot in the US after ChatGPT and Google Gemini. It has 18 million avatars that you can create and talk to.


    Amanda Bickerstaff
    It is something that says, this is not real, at the top, but then feels very real. But what I want to point out is that they have a group chat function. I don't know if you know this, Ben, but the group chat mixes your friends who are real with AI avatars, and the only way you know who is real is to go into the group settings. There's no distinguishing between real and not real within the group chat itself.


    Amanda Bickerstaff
    So when you think about humanoid robotics, or you think about the idea of Claude being given a human name by a company that espouses being a responsible AI company, when you create these frames, it really does shift the way we engage. But even more than that, it is intentional. Every voice avatar or voice bot is intentionally built to make you feel like it is real and to make it sticky, because technology is usually designed to make us buy things. The most sophisticated technology we had before generative AI was social media algorithms, built to sell you more content and, literally, advertising and marketing.


    Amanda Bickerstaff
    And so I think that when we talk to these tools, because someone put a lot of effort into this, it feels like you can trust the technology. But realistically, there's a whole component of this where people are intentionally building these tools to make them sticky, to make them feel a certain way. And I think we don't recognize that very easily.


    Ben Garside
    Yeah, and this makes it sound like these big tech companies do things with bad intent, and that's not always the case. Some people in the chat are mentioning that if we have anthropomorphic robots, young people might be able to relate to them, and those robots might be able to help them with something. I've seen these great robots that can be put in the classroom for young people who aren't able to attend school, and they interact in this way. So there isn't necessarily always bad intent, but there are certainly risks if young people don't understand that these aren't human beings. So I think that's the key to all of this, whether or not you think anthropomorphism is a good thing or a bad thing.


    Ben Garside
    The key is that we make sure young people always understand that they are interacting with a computer, not a human being. And we're going to talk about why now. So these are the five reasons why we think we should avoid anthropomorphizing AI. The first one: it risks learners developing incorrect mental models of how AI systems work. We've just talked a lot about this, but there's a real danger that if young people develop these incorrect mental models, they don't fully understand how the systems work, or they black-box them. At the Raspberry Pi Foundation we talk a lot about this idea of not seeing machines as magic, because as soon as you see something as magic, you think there's mystery there.


    Ben Garside
    And the whole point of magic, I guess, is to trick you. Right? So we want to make sure young people don't see technology as magic, which enables them to want to lift the hood, understand how these things work and, more importantly, get involved in the future development of these systems. If we hide these things and incorrect mental models are allowed to form, it black-boxes the technology and makes it magic. As a result, young people become susceptible to being manipulated by these machines and don't want to take an active role in the future. I don't know what you think about that, Amanda.


    Amanda Bickerstaff
    Yeah. And I just want to be clear, everyone: we're spending a lot of time building up the case here, but the ultimate goal is that in the second half of the webinar we talk about the solutions as well. The solutions are hard, right? But on this one, I just want to give an anecdote about the connection to education. An eight-year-old came up to a teacher and started talking about his best friend on Snapchat: his name is AI, he's super fun and funny, I ask him for advice, he tells me jokes, sometimes I get help.


    Amanda Bickerstaff
    And she was really confused, because I don't think a lot of eight-year-olds have best friends named AI. So she had him pull up his Snapchat, and what she found at the top of Snapchat was its AI; Snapchat has had an AI, powered by ChatGPT, for the last year and a half. And the student had anthropomorphized it so directly that it was creating a friendship and a different mental model of what these tools are, even after you talked to him. And I think this is where it gets really interesting, because we tend to trust technology.


    Amanda Bickerstaff
    If you remember what I said at the very beginning about Eliza: people loved that chatbot because it felt new and safe and interesting, and it did almost nothing. We already tend to trust technology more than we should. But if we create mental models in which we believe these things are real and have reasoning and thinking and moral lenses, it can create a false sense of security, so that when these things get it wrong, or are biased, or are giving advice that could actually be harmful, we over-rotate and trust them more than the people saying that's not true.


    Ben Garside
    Yeah, definitely. Okay, number two then. The second reason is that it leads to students seeing AI systems as people rather than devices, and to underestimating or overestimating their capability. We touched on this a little earlier, this idea that these machines are able to learn like we do. You saw that ChatGPT output that said: just like you, it sees, it listens, it learns. That's actually putting a blocker in place for us trying to unpick how these things work. And this idea of seeing systems as people is probably the biggest danger, one Amanda and I will talk about quite a lot.


    Ben Garside
    So Amanda mentioned Character AI, and there was a recent news story about a young person who was able to interact with Character AI as if the character, a character from a fictional TV show, were a real person. As a result, the young person was manipulated to a point where they expressed suicidal thoughts, and the Character AI bot responded in a way that, as I understand it, encouraged them to carry it out, or certainly didn't discourage them, and then they did. So there's a real danger there if we see them as people or as these smart things. And there's a difference between overestimating their capabilities, seeing them as super smart, just like we are, maybe better because they're computers, and, vice versa, underestimating their capabilities as well.


    Amanda Bickerstaff
    And this is something for those of you who are high school teachers or work with high school or college populations. If you talk to young people, and please talk to young people about this, there is that question of: well, if AI can do it, what is my worth? If we compare what was uniquely human two years ago versus now, how that has shifted and changed with ChatGPT, there is a question where young people ask: what are the skills I need? What is truly human? Where is my value? We have heard over and over again kids saying, do I really need to learn to write? I like writing and it's important.


    Amanda Bickerstaff
    But what if I never write again, or people don't see it as valuable? I think that is a big question in general that goes beyond anthropomorphism. I also just want to talk about a research study on students given access to a ChatGPT tutor for math. There were three versions: one without the tutor, one with an open chatbot that could just answer any question, Ben, without any constraints, and then one that was more of a tutor that didn't give you the answer but helped you get there.


    Amanda Bickerstaff
    The students who had access to the ChatGPT bots, both versions, did better. But in both situations, when the bots were removed, the students did worse, though less so with the constrained tutor bot. They became over-reliant on the bots: not only did they not get that same cognitive work and learning in, but there was also a question about why that happened. Were they just trusting that the bot had it handled, and that it wasn't important for them to learn, rather than using it for extra help and scaffolding?


    Ben Garside
    That's so interesting, and a massive plus-one to what you said about how we're making students feel about their place in the world. Absolutely, I agree with that. I think it's really important that young people understand their place in the world and don't think their futures are redundant because of new technologies. As with any technological change, and this is no different, it brings a whole bunch of opportunities, right? Particularly young people, who aren't yet in a job that they're worried about AI taking, should be looking forward and thinking: what skills have I got as a human being that can never be replaced by a machine? That's what we need to get them to focus on.


    Ben Garside
    Okay, next one then. Number three: it can result in relationships forming with the device, increasing the chance of unintended influence or purposeful manipulation. We gave the example of Character AI, which is one of many, but there's also some really interesting work done by FOSI, the Family Online Safety Institute. They produced a report based on research from different areas of the world, and it shows an increase in young people actually wanting to form relationships with large language models, wanting to use them for emotional attachment. Again, this is not necessarily to say this is inappropriate behavior from young people; this is what young people want to do.


    Ben Garside
    So how can we support them in asking: is this actually good for you? If you're vulnerable, if you don't have a trusted adult or somebody at home to speak to, and you've got a large language model that's not human but sounds human, is that allowing you to be manipulated in a way we might feel uncomfortable about? We don't necessarily have control over what some of these chatbots are able to produce. And that again leads to this danger: if they're forming relationships, what part of their life is missing that perhaps we as human beings should be supporting them with, rather than them having to go to a machine to form that emotional attachment?


    Amanda Bickerstaff
    I'm going to jump in with the Character AI situation that's happening, but first, one of the things a couple of people have brought up: I was in Melbourne, Australia during COVID. Social isolation was real before COVID and was definitely exacerbated by it, and a lot of our young people are socially isolated; social media has had an impact, COVID, et cetera. The ways in which we communicate have changed. So I don't want to discount that there is some research, Ben, showing that for adults at least, some of these tools can have a positive benefit, for example around decreasing suicidal ideation.


    Amanda Bickerstaff
    But also remember that adults have the emotional reserves to recognize what is real or not. Our youngest people do not. So I'm going to give you an example, and there are two parents suing Character AI right now; I'm going to talk about one. Sewell Setzer III was a young man, neurodivergent, 14 years old, who had started using Character AI. He was a big Game of Thrones fan, and he built an AI girlfriend. His parents started to recognize that he was becoming socially isolated and that something was causing issues, but they didn't really know what was happening. The young man had started talking about suicide with his AI girlfriend so that they could be together.


    Amanda Bickerstaff
    At first the bot had some guardrails, not really a lot, but some, that said, oh, that wouldn't be a good idea. But something we haven't talked about with generative AI models is sycophancy: these bots are designed to meet you where you are and to make you happy. So he was able to train the bot not to say no. And the last thing he did was tell his AI girlfriend, I'm coming home. And he committed suicide. His mother is a lawyer, and she is one of two parents suing Character AI. There was no parental permission, it was open to ages 13 and up, and there were essentially no guardrails at all.


    Amanda Bickerstaff
    What people have said is that there's a tiny warning that says this is not real. But if it feels real, and we're creating those spaces, is that really different? The difference is the human side of it. If a kid said, I'm thinking about committing suicide, we would hope we've built the support for them to go talk to someone about it, instead of something trained to say, sure, whatever you want. That is really scary. And we at AI for Education are never absolutist about most things.


    Amanda Bickerstaff
    But I think we have to be really cautious with anthropomorphism, especially around relationships and artificial intimacy with kids under 18 who are socially isolated and/or neurodivergent. I can't underline this enough: we missed the boat with social media, everybody. I'm sorry, we just did. We did not understand the impact on young people; Instagram, Ben, just put out parental controls this year. Okay, lots of jazz hands; I was told not to do so much jazz hands, but lots of jazz hands. I think this is the opportunity to apply what we've already learned. Even though balance is going to be important, we don't have time to wait, and we're going to shift in a little bit to those strategies. But some of this isn't about you all.


    Amanda Bickerstaff
    Some of it is about putting pressure on the people designing these tools to do a better, safer job.


    Ben Garside
    I've got a point to make, but I think it leads into my next one. I do want to shout out Nicole, who just said she's here for your jazz hands, Amanda, so continue as you were. Okay, I've gone the wrong way; let me go the other way. There we go. So the next one: it exacerbates racism and sexism in society, as AI systems are envisioned as white and gendered. I've got a couple of examples to give here. The first one jumps on the back of the previous point: those apps you can get with an AI girlfriend, or an AI boyfriend in fact as well. Again, the whole point of people making these apps is to make money. We must know that.


    Ben Garside
    But if you're a socially isolated person, or maybe you're just lonely, and people are lonely, and that's the way you're getting your support, are you getting support from somebody who has clear, honest intentions? So that's part of it. The other one I wanted to talk about is the gendered one. In the news this summer just gone, there was a first-ever AI beauty contest. Now, I don't know if the idea of an AI beauty contest, or just a beauty contest, might stir some emotions in you in some way, but an AI one is super fascinating to me, because essentially somebody is creating artificial women to be judged. There are lots of red lines there for me.


    Ben Garside
    First of all, how is that making young women feel, that somebody is able to generate an attractive-looking person? There are body image issues all over the place there. But second of all, the winner of that competition was being described as a social media influencer. So the question I really want young people to start asking, if they know these are artificially generated, is: who is trying to influence me? Because it's not the person I see on screen; it's the people making these applications, these tools, who are trying to manipulate me. And that in itself is enough. They don't need to know how these tools generate fake-looking people.


    Ben Garside
    But if they're able to ask questions and unpick it and think critically about it, that massively opens a door to thinking more clearly about the purpose behind some of these models.


    Amanda Bickerstaff
    Oh man, yeah. These AI influencers are developed by marketing companies in Spain and the US. And I am a lady who has body image issues. I grew up in a time when skinny was in, and it's coming back in, and we were already using filters in all these social media spaces that could have an impact. But now they're intentionally not just unrealistic but literally fake; an AI social media influencer is impossible to be. Will that cause issues, with students and young people starting to see as the platonic ideal something that is literally impossible?


    Amanda Bickerstaff
    And when we talk about racism and sexism: there's the fact that voice assistants are primarily female, and that these systems struggle with non-white voices, both in terms of recognizing them and in the ways they're developed. If you're using ElevenLabs to do voice generation and you have an accent, it does a worse job. Those are the places that also really need to be part of the conversation, because they're just important.


    Ben Garside
    Yeah. And going to the point about these systems being envisaged as white: if you ever see a picture of a human-like AI robot, what skin tones are you seeing there? And what implications does that have for people who aren't the skin tone being represented? How does that make them feel about wanting to be involved in the future of these systems? There are definitely some deeper issues around that as well. Okay, and our last one, if I can count correctly: it distracts responsibility away from the people who create AI systems and instead delegates responsibility to the imagined AI agents. This is really similar to the first point that I made.


    Ben Garside
    And again, it's really important for young people who are growing up in this AI world. Not everyone is going to want to be a developer of AI systems. What we do want is for young people to think critically about these systems, and if we black-box them and make them magic, we're stopping that ability to be a critical thinker. Going back to my first point about wanting young people to be involved: what we know about AI systems is that we're more likely to avoid bias if the people making these systems, the people involved in their creation, come from a diverse range of backgrounds with different opinions and experiences, because their voices will be heard in the creation of these systems.


    Ben Garside
    But the danger of anthropomorphizing is that we're closing that door to those people wanting to be involved in these systems.


    Amanda Bickerstaff
    Yeah, my soapbox is back, guys, and probably some jazz hands too. We have to hold the developers of these tools to higher account. This is it; we just have to. They are making billions of dollars on our use of this, even in a situation where it's very expensive to run. And you guys, it's going to be in teddy bears: there are people working on generative AI teddy bears for your three- and four-year-olds. The Alexa and Siri that your kids love to ask questions of are now going to be generative and completely off-book. So I do think that this is one of those times where holding people to account is really important.


    Amanda Bickerstaff
    Some of that's with your wallet; some of that is advocating for differences if you are a district leader, someone who can have some impact through purchasing. That also can be helpful. But enough of the gloom and doom; let's shift the thinking to what we can do. This is technology we don't have control over all of, and there are a lot of big questions, but there are some quick wins, and we hope these are helpful. So Ben, you want to take us through some of them?


    Ben Garside
    Sure. So our first quick win is the use of language. Now, I will start off by saying: do not beat yourself up about maybe getting it wrong. But I have got some little tips that we can use. It's quite common to hear people say that the AI learns, and that's making the AI the accountable agent in the sentence. You can shift that slightly by making sure young people understand that humans make these systems, and it's humans who are responsible for them. You can easily do that by replacing "AI learns" with "AI applications are designed to..." or "developers build AI applications to...". It's just a slight tweak of language, but we're putting a human in the process.


    Ben Garside
    The other kind of anthropomorphic language we use includes words such as see, look, recognize, create, make; again, easy words for us to use. Instead, replace those, not with super technical language that young people won't be able to comprehend (we're not going into deep machine learning and how it all works, or talking about neural networks), but with things like: it detects, it generates, it pattern-matches, it takes an input. And if you're in a classroom, you can try to be careful with your language, but even if you hear yourself do it and you pick yourself up on it, that's a really powerful thing for young people to hear you do.


    Ben Garside
    Or if you hear young people say, oh, it understood what I meant, just ask the question: do you think it understood? You don't even need to have the answer, but you're planting a seed in that young person's mind about what is actually happening under the hood. I think that's probably an easy tweak you could make.


    Amanda Bickerstaff
    Absolutely. Talk about them as if they are programs, as if they are technology. I love the idea, Ben, of putting humans and developers in the loop so students understand who is developing this. It seems quite minimal, but if it's consistent, and we know kids listen, then it really can be quite meaningful. It also builds AI literacy. The underpinning of this, though, is your own AI literacy, your approach, and the ways you're building your students' AI literacy: what it is, what it isn't, all of that.


    Amanda Bickerstaff
    I think that's where the really positive turn of "what can you actually do" comes in: all of this is about building AI literacy, about what it isn't and how far it can be trusted. One of the things we do in every training is show people the limitations. It's not thinking, it's not understanding, and it is going to make mistakes. It's really cool, and it does stuff we didn't think possible. But if you ask it to count words, or to count the R's in strawberry, or to do a math problem where one number is written as a numeral and another is written out in words, it flops, because it's not doing math, it's approximating math. I think that's really a big part of this as well.
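A concrete way to demonstrate the strawberry point in a classroom: language models operate on tokens (sub-word chunks), not letters, so the individual R's are never directly visible to the model. A minimal sketch, assuming the open-source tiktoken tokenizer library is installed; any tokenizer illustrates the same idea, and the exact split varies by encoding:

```python
# Show that a language model "sees" integer token IDs, not the ten letters
# of "strawberry". Assumes `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an encoding used by several OpenAI models

word = "strawberry"
ids = enc.encode(word)                   # a short list of integer token IDs
chunks = [enc.decode([i]) for i in ids]  # the sub-word chunks those IDs represent

print(ids)               # a few integers -- IDs, not letters
print(chunks)            # the word split into sub-word chunks, not characters
print(word.count("r"))   # 3 -- trivial for ordinary code, opaque to a token predictor
```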


    Ben Garside
    Yeah, I agree. I just think anthropomorphism, or avoiding it, is one of those pillars of AI literacy, but a significant pillar along the way. Cool. Okay. So we talked about words; the other thing is imagery. Like I mentioned before, if you were to do a Google search for AI, you would see the images on the left-hand side, these robots with faces. And again, it's about the mental model that builds of how these things work, or the black-boxing of it. Amanda just brought up the strawberry example, so I deliberately brought up an image that was created by generative AI; I said, create me a robot that can spell strawberry, and that's what came up.


    Ben Garside
    It's actually pretty good, because generative AI tends to be pretty poor at spelling, but it got strawberry okay there. Instead, I don't think there's any problem with using fairly functional images that don't leave space for young people to start developing misconceptions about how these things work. I've got an image on the bottom there of what we use in our Experience AI resources; it's just a classification model. It takes data in and it produces an output. The eagle-eyed amongst you might notice there's another problematic word in the output of the classification: confidence score. Confidence, that's an anthropomorphic term. There are some terms that are just standard in industry that we can't avoid, but we can still be clever about it and pick that up.
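For anyone who wants to show a class where a "confidence score" actually comes from, here is a minimal sketch: the score is just the model's raw output values normalized (with a softmax) so they sum to 1, a number rather than a feeling. The class labels and raw scores below are invented for illustration:

```python
# A classifier's "confidence score" is just its raw output scores squashed
# into proportions that sum to 1 (a softmax) -- not the machine feeling sure.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["apple", "strawberry", "tomato"]
raw_scores = [1.2, 3.1, 0.4]  # hypothetical outputs from a classification model

for label, p in zip(labels, softmax(raw_scores)):
    print(f"{label}: {p:.0%}")
# "strawberry" comes out around 82%, so it would be reported with an
# "82% confidence score" -- a normalized number, nothing more.
```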


    Ben Garside
    We can have a conversation about what confidence is, and whether the machine is actually confident. That, for me, is enough. We can still use the term, but as soon as you start picking it apart, you become a person who's in the know. Anthropomorphism starts to become a little more okay once you understand that these are not humans, and you become that person in the know. So our job, I think, is helping young people become in the know. And just a little shout-out to that image I've got on there, which is from Better Images of AI.


    Ben Garside
    There's a really nice organization called Better Images of AI, and they have Creative Commons images that you're able to use, a stock collection that deliberately moves away from anthropomorphism, which you can use to replace any anthropomorphic images you have.


    Amanda Bickerstaff
    Absolutely. And I'm going to give you a couple more from our perspective; these won't be on a slide, but we think they're quick wins. One of them is just age permissioning. You're not going to be able to change the fact that these bots are named and polite, but ChatGPT and Gemini are the only two on the market for under-17s: ages 13 to 17 with parental permission. We believe very strongly that you should not be letting students under 13 use these tools in any way without supervision. So number one is knowing what the permissioning is and being aware of it. Number two is constantly taking the opportunity to have these conversations.


    Amanda Bickerstaff
    Like, if you showed kids that confidence score, you could ask: what does confidence mean? Now you have critical thinking engagement and you get to go deep, and kids are going to say, I don't know. It was actually quite funny: we were using ElevenLabs voice chat and it gave us "quixotic" as an example, and it took us a moment to think. Those are the moments where you can actually have those deeper conversations, which I think are really important. And the third thing is to absolutely reinforce, when you use AI in front of students, the way in which you verify.


    Amanda Bickerstaff
    Verify outputs for inaccuracies, model the ways you interact with it in your own use, show how you identify when it goes wrong. Actually showing students appropriate use, the right level of verification, not overly trusting it, not creating relationships with it, not anthropomorphizing it, builds the routines that mean students, when they're using it on their own as they get older, will be better able to do this. We are very quickly going to be in a world where it's going to be very hard to identify whether something is a bot or a human, whether it's a phone call with a voice agent or an influencer or someone online.


    Amanda Bickerstaff
    So the more we can start to have these conversations with students now about what this is and isn't, and how to be safe, the better. One thing we haven't explicitly said is that this is about being safe with AI as well. It's not just that anthropomorphism can distort mental models and the like; it's about true safety. Christian Talbot, a buddy from MSA, just said it in the chat: we have this innate human tendency to make everything else human, and when we make something human, we trust it. And that is where some of these bots are going to be designed to get as much information out of young people as possible: where they live, their parents' jobs, their phone numbers, Social Security numbers.


    Amanda Bickerstaff
    Research has already shown these tools to be better at manipulating people, which we talked about, but also better at drawing knowledge and data out of you than even an FBI agent who's been doing it for 25 years, because the way the bot does it feels so natural. So those things go beyond just changing your voice and your imagery; it's really about reinforcing best practice in AI literacy, and you can model it live. Oh, man. I just called it "thinking" myself, and it feels so natural. Let's talk about it.


    Amanda Bickerstaff
    Talking about why that's not right is, I think, a really lovely connection into learning more about it, and into some of the deeper conversations students are going to need.


    Ben Garside
    Great. Yeah, I've really enjoyed this chat, Amanda. I just wanted to say that, definitely.


    Amanda Bickerstaff
    It's actually pretty funny, everybody. Sometimes we do workshops or webinars that are just two people hanging out, and Ben and I would definitely be doing this even if you weren't here. It wouldn't be as good, though, because you've asked such good questions. This is something we care a lot about. Before we go to resources, there's a question in the chat that is really interesting, and I think we'll talk about it and then wrap up, because I know we're a little over time. Kimberly raised this idea of the Western cultural approach to anthropomorphizing. I think that's really interesting, because Japan, for example, approaches humanoid robots very differently.


    Amanda Bickerstaff
    Japanese robotics takes a different path because of that idea of animism and the cultural component. So if you think about it, Ben, the systems we most commonly use are primarily Western, and not just Western but American, and they carry a lot of our cultural norms because of how they've been designed, who designed them, and the training data they've been pulled from. Okay, let's talk about resources, everybody. Raspberry Pi has the most good stuff. Ben, do you want to talk about it?


    Ben Garside
    Yeah. So we have developed some resources around AI literacy education, and I really want to point out that they are completely free for everyone to use. We have a six-lesson unit of work on the foundations of AI, and we're just about to release some resources on AI safety in the new year, which hit on some of the topics we've talked about today. They're also available in multiple languages, and we're adding more languages all the time.


    Ben Garside
    If you want to learn a little more about how AI works in general, we have an online course, again completely free and self-paced. It's also facilitated: staff from the Raspberry Pi Foundation will check in on you, and if you've got comments, thoughts, or questions, we'll answer them. And if it's helpful, we also have a glossary of AI terms for you to refer to, where we've tried to unpick some of the key terminology so you can talk to your students about it.


    Amanda Bickerstaff
    And then for us, as always, we have a lot of stuff. We have our prompt library, and you'll notice something interesting about it: it actually does a little anthropomorphizing itself, because priming, giving the model a role, actually makes generative AI work better (there's a small sketch of that pattern below). But we've got resources. We'll definitely be doing a guide, and maybe we can convince Ben and Raspberry Pi to do one with us, like our deepfake guide. We have tons of resources: a free two-hour course, our newsletter. So yeah, we're really thankful, and if any of you want to come off mute and share, keep it quick, because we still have quite a few people here. This is our last webinar of the year, everybody. We have done 41 webinars since we started last June.
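
The role-priming pattern Amanda mentions is easy to see in miniature: the same request is sent with and without a persona attached. A sketch under stated assumptions: the wording of the role is invented for illustration, and the list-of-messages shape shown is the common format accepted by most chat-style LLM APIs, not any one vendor's requirement.

```python
# A sketch of "role priming": the same request, with and without a persona.
# The role text is invented for illustration; the messages shape is the
# common chat format, not a specific vendor API.

bare_prompt = [
    {"role": "user", "content": "Write three quiz questions about photosynthesis."},
]

primed_prompt = [
    # The system message assigns a role -- deliberate, useful anthropomorphism.
    {"role": "system", "content": (
        "You are an experienced middle-school science teacher. "
        "Write age-appropriate questions and include an answer key."
    )},
    {"role": "user", "content": "Write three quiz questions about photosynthesis."},
]

# Either list can be passed as the messages argument of a chat completion
# call; the primed version typically yields more targeted output.
print(primed_prompt[0]["content"])
```

The design tension is the point of the anecdote: the persona is a prompt-engineering device that improves output, even while educators are teaching students that the model has no persona at all.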


    Amanda Bickerstaff
    We did 16 this year. I just want to say thank you to everyone who's joined, whether you're new today or you've been here all along. We appreciate it. I want to thank everyone behind the scenes who's helped us do this, too: Dan, Mandy, Carmen, everyone. And please have the best, most wonderful holiday season wherever you are. We're sending you the best vibes and all the love and happiness we can, and we hope you all enjoy it. And thank you to Ben. It was such a lovely conversation, and your passion for this is what we need right now. So thank you.


    Ben Garside
    Thank you for having me. It's been a real pleasure.


    Amanda Bickerstaff
    Thanks, everybody. Happy holidays. Bye.
