BBC News with AI for Education CEO Amanda Bickerstaff
As representatives from over 80 countries and major tech firms convened at the Global AI Summit in Paris, BBC News invited AI for Education CEO Amanda Bickerstaff to discuss the critical need for AI literacy and responsible integration in education vs. the simple but potentially myopic outright ban favored by others.
-
00:00
BBC
Let's take you to the situation in Paris. As we were mentioning a few moments ago, there's this global AI summit underway. 80 countries are meeting, and lots of heads of big tech companies as well, to discuss the future of artificial intelligence, the good stuff, but also maybe some of the issues as well. They're going to discuss the progress of generative AI, which is of course rapidly evolving and becoming increasingly disruptive. In fact, the French president himself posted a video on his social media feeds a little bit earlier on today showing his face digitally stitched onto dancers, people with long hair doing hair tutorials, and the rest. There's also disruption in the world of education with AI. The use of generative AI, things like ChatGPT, to write essays and assessments has provided some opportunities, but also problems in schools and colleges and universities.
00:54
BBC
Some of them think this amounts to plagiarism. Well, let's discuss with two people who have had to think about this issue, AI in education. Amanda Bickerstaff is the co-founder and CEO of AI for Education. Amanda is in New York in the US and teaches schools and teachers how to use AI safely. Amanda, welcome to you.
01:12
Amanda Bickerstaff
Hi, I'm glad to be here.
01:14
BBC
And also with us, James Taylor, professor of Philosophy, Religion and Classical Studies at the College of New Jersey, who has completely banned his students from using AI. Is that right, James?
01:25
James Taylor
That's absolutely right, yes. And I'm delighted to be here too.
01:28
BBC
Explain first of all then why you came to that decision.
01:33
James Taylor
I came to the decision for two reasons, really. The first was rather unfortunate, in that a sizeable number of students were using AI to cheat on writing their papers. So they would just enter the prompt into ChatGPT or Claude and then hand in something which wasn't their work, which obviously was a problem. So I wanted to avoid that sort of easy cheating. And I also realized that some of the students were quickly writing their papers, and they had a lot of distractions outside class. They weren't really paying attention to their work. So I saw AI cheating as an opportunity to allow students to really connect with academic work. So I banned AI. We now do the writing in class, and students read academic articles in class, identify the arguments in them, work out their own views and their own positions.
02:25
James Taylor
And I think it's a much more effective way of teaching and getting students to engage with work in a distraction free environment.
02:33
BBC
Amanda, what do you make of that?
02:34
Amanda Bickerstaff
Well, I think that what's really interesting about that is that, if you think about the world of K-12, that's called a flipped classroom approach, where you do some of the learning at home and you come and do the assessment live. And it's actually been shown to be a pretty successful approach to learning. I think that what we see, though, and what I'd say to James, is that I'd love to be able to see not just outright banning, but saying, okay, in the classroom we're going to really do that deep thinking, that critical work, those Socratic dialogues. But then at home, when you have generative AI, you can actually use it in ways to prepare more, to not just cheat or cognitively offload, but actually help yourself do better in class.
03:15
Amanda Bickerstaff
Because I think that's really an opportunity. And so I think that an outright ban could be more harmful than helpful. I do think that we're going to have to do so much work in classrooms around assessment, because it is very easy to go home and enter a prompt and get an essay. But at the same time, not every student has access to high-quality tutoring, or even has that time, and if they learn how to use generative AI responsibly, that on-demand feedback and support is something that AI is great at.
03:46
BBC
I love the phrase cognitive offload.
03:49
James Taylor
I actually disagree that AI is great at this. So I've had students, I didn't initially ban AI, rather I did what Amanda suggested: I had students put in their answers to prompts and questions and have the AI critique them. And it turns out AI is really, really bad at this. So it might be good at generating overviews of particular academic discussions, but we already have really good encyclopedia articles and things which are written by philosophy experts. So I don't see AI performing a particularly new or useful role, especially since it just doesn't understand the material. So students can produce an essay and ask AI to critique it, and AI is really bad at doing this.
04:35
Amanda Bickerstaff
I see. I think that's where there's a need to follow the up-to-date resources. And so reasoning models like o3-mini or DeepSeek, which became very popular, are getting better at some of that work, James. So I would say that some of this is moving very quickly, and there's a lot of research and articles about how some of this work is now considered to be novel. And so I think that this is why AI literacy is so important: to be able to recognize what it's good at and what it's not, but also to have the time and capacity to try out the new stuff. There's such a speed to which this is developing, which is why, you know, you talked about the deepfakes, it's moving very quickly.
05:19
Amanda Bickerstaff
And so preparing students to be able to start to try out, to evaluate, to understand how these new developments are going to impact them, I think is going to be a huge part of what it means to train people for the future.
05:32
James Taylor
Right. I think there are two questions we might be conflating, and this is where I think philosophical training is wonderful. So there's: should we train students to do good prompting, to use AI effectively? I totally agree with you, Amanda, that's an excellent approach, even though I ban AI in my philosophy classes. But there's also the question of: is AI good at this, and can students tell? That's a separate issue. And students might be told something by DeepSeek or Claude or a next-generation DeepSeek, but unless they actually have the critical thinking ability and the subject-matter information, they're not really going to be able to assess that.
06:14
BBC
Information, which is the issue we see on social media at the moment, isn't it? With all the information that's on social media, people need the critical thinking to be able to decide and understand what might be real, what might be not, what is coming from a certain perspective, what is genuinely objective and the rest.
06:32
Amanda Bickerstaff
Absolutely.
06:32
James Taylor
Oh, absolutely. And that's where.
06:35
Amanda Bickerstaff
Go ahead, James.
06:36
James Taylor
Yes, sorry. And that's why I think philosophy and critical thinking is absolutely essential. I was going to say critical, but that would be a sort of really bad attempt at a pun.
06:46
BBC
Very niche pun.
06:47
James Taylor
And it's absolutely essential. So philosophy will teach students and others to evaluate arguments on their own merits. So we don't need to know the source of information, we just need to be able to evaluate it. Is the reasoning coherent? Does this pass the smell test? So critical thinking in the age of social media and AI and deepfakes is absolutely essential. I foresee a wonderful golden age for philosophy.
07:16
BBC
This is all relevant, though, to the year 2025 and the AI that we have at the moment, isn't it? But as both of you will understand, we are very much in the foothills at the moment. And if at this moment in time we're discussing, well, is it okay for students to try and get some AI chatbot to write me a whole essay on philosophy, or, at the other end of the spectrum, maybe just give me an outline of some pointers I might want to go through in my philosophy essay that I could then write around, that's one thing. But the rate of development surely means that the tools available could be even more powerful, maybe even more problematic. If you're worried about plagiarism and cheating now, do you worry about that?
07:58
Amanda Bickerstaff
Yeah, I think that this is why we believe it's so important to create informed thinkers right now. So that critical thinking, I think even just basic evaluative capabilities, are going to become so important. Because, if you watch the tech news, or I'm sure you guys talk about it all the time, there is a clear belief by the tech companies that are developing these tools that artificial general intelligence, or superintelligence, is possible, and not just possible, but within our reach in the next decade.
08:29
Amanda Bickerstaff
And so that's where we really start to get to where, you know, when James is talking about the ability to evaluate arguments no matter where they come from, the quality is going to get really tricky. Because we know that the first ever chatbot did almost nothing but respond to you with what you said, rephrased as a question, and people generally trusted it and enjoyed it. We tend to believe what we read. I mean, there's research that shows that if there's an image that matches the text, we are very gullible to it, which is why misinformation is so rampant right now.
09:04
Amanda Bickerstaff
And with these tools becoming even more complex and capable, and also being released by tech companies without any real understanding or goal for what it will do to, you know, our schools, our workforce, et cetera, I think it's going to require us to have a really big conversation with all stakeholders about how do we prepare for that future, and how do we build capacity in places that we haven't traditionally been very good at.
09:38
BBC
James? I think we may have lost James.
09:45
Amanda Bickerstaff
Oh no. James, come back.
09:47
BBC
The technology, for all its uses, has failed us on this occasion. Amanda, then, just finally with you, it sounds like you don't have much hope, or do you, I don't know, in terms of either national frameworks in the US or the global frameworks around the world to actually try and understand and put some guardrails around all of this.
10:08
Amanda Bickerstaff
I think that the speed at which it's happening makes it very tricky, right? Regulation is a blunt instrument. It takes a lot of time, and it doesn't go at the speed at which this technology is developing. I mean, we're seeing this happen so quickly. But I think that there's an enormous opportunity. I'm very optimistic that if we come together and really try to do this with tech companies, with regulators, with, in this case, schools and systems, that we have a real shot at, you know, having this be supportive of our young people. But I think it's going to require a real understanding that we're living through an inflection point.
10:46
Amanda Bickerstaff
And what I see is that, because it feels like it's going so quickly, and it is, people generally don't feel like they can keep up, and there's a bit of resignation. And so what we would hope, with AI literacy work, is that we would love to see AI literacy become, you know, a graduation requirement, something governments are doing, with free courses for everyone: parents, teachers, leaders, business people. Because that's our real opportunity here. Never in the history of the world have we had to learn something so quickly as a society.
11:21
Amanda Bickerstaff
And so if we put the effort into that, which is really not going to be that crazy or expensive, I think that is our best shot at coming together as a society and recognizing a path forward that can be as positive as possible.
11:36
BBC
Which is asking a lot, James, of all of us in education or otherwise.
11:39
James Taylor
I'm just, yeah, I think that's it. I'm back. I think that having AI literacy as a graduation requirement is a little bit worrisome. I agree that regulation lags behind the innovation, and I see that in my own classes: my students are entrepreneurs, and they're clearly ahead of me as a regulator, simply because they've got a lot of incentive to learn how to use the technology. But rather than having AI as a graduation requirement, I would much rather see students be required to demonstrate critical thinking, not just within AI or AI prompts or assessing AI outputs, but just in general. I think a much more basic approach, a critical thinking requirement rather than an AI requirement, will be much more effective.
12:30
BBC
Okay, thank you both. AI, clearly coming on leaps and bounds. Audio technology, room for improvement, I think. But I think we just about heard you, James, through that line. Thank you, James Taylor, professor of Philosophy, Religion and Classical Studies at the College of New Jersey, and Amanda Bickerstaff, co-founder and CEO of AI for Education in New York.