Shaping AI's Future: Policy Perspectives & Education
As artificial intelligence continues to reshape our world, it's crucial to address the serious policy questions it poses, particularly in the field of education. To help address these issues, the Federation of American Scientists (FAS) hosted an AI Legislation Policy Sprint to crowdsource, develop, and publish a set of creative AI-focused ideas that could aid in regulating the use and development of AI technology. The final slate of 15 policy memos was authored by AI experts who applied their experience in critical sectors such as healthcare, R&D, and education to propose targeted legislative policy solutions, with the theme of AI in education making up the largest share of ideas.
In our latest webinar, we delved into these diverse and forward-thinking perspectives with a panel of experts who discussed and guided us through innovative policy solutions that could shape the future of AI in education and beyond.
Key Topics Included:
Highlights of the major concepts emerging from the FAS AI Legislation Policy Sprint
The goals and methodology behind this unique crowdsourcing initiative
Challenges and limitations identified in current AI policy, particularly in education
Actionable steps for policymakers, educators, and other stakeholders
AI Summary Notes:
📊 AI Policy Sprint Overview (10:34 - 19:16)
15 policy proposals developed over 2-3 months
Proposals focused on AI innovation, trust and safety, healthcare, and education
Education proposals addressed:
Need for more research on AI's impact on education
Guidance for schools and teachers in navigating AI landscape
Upskilling and professional development for teachers
Nicole Fuller from National Center for Learning Disabilities (NCLD) presents proposal
NCLD focuses on students with disabilities and technology's impact
Proposal addresses AI-powered student online activity monitoring software
🔍 NCLD Policy Proposal (19:16 - 30:51)
Concerns about disproportionate impact on students with disabilities
Four-pronged approach proposed:
Federal data collection on technology's effects on students
Parental notification about tools used and potential risks
Office for Civil Rights to investigate discrimination complaints
Technical assistance for state and local adoption of safe, equitable technology
🧠 AI Research Accelerator Proposal (30:51 - 40:22)
Dr. Anastasia Betts presents proposal for AI and education research accelerator
Addresses mismatch between pace of AI innovation and educational research
Proposes consortium of stakeholders to develop accelerated efficacy research methods
Aims to reduce time for conducting research and bureaucratic processes
📚 AI Literacy for Teachers Proposal (40:22 - 50:45)
Mandy DePriest from AI for Education presents proposal for AI literacy program
Focuses on preparing teachers to build AI literacy for students
Proposes dedicated program within National Science Foundation for ongoing AI literacy training
Includes development of AI literacy standards and resource hub for teachers
💡 Practical Implementation and Lessons Learned (50:46 - 57:28)
Panelists discuss lessons learned from policy proposal process
Emphasis on importance of evidence in implementing AI solutions in schools
Advice for school leaders: demand better evidence from solution providers
Encouragement for schools to volunteer for research partnerships
Discussion on implementing ideas at local and state levels
-
Federation of American Scientists (FAS)
AI for Education's Proposal
Nicole Fuller's Proposal
Dr. Betts' Proposal
-
Amanda Bickerstaff
Amanda is the Founder and CEO of AI for Education. A former high school science teacher and EdTech executive with over 20 years of experience in the education sector, she has a deep understanding of the challenges and opportunities that AI can offer. She is a frequent consultant, speaker, and writer on the topic of AI in education, leading workshops and professional learning across both K12 and Higher Ed. Amanda is committed to helping schools and teachers maximize their potential through the ethical and equitable adoption of AI.
Dr. Anastasia L. Betts
Dr. Betts is a leading expert in early childhood education, learning sciences, and AI-powered educational technologies. As the Founder and Executive Director of Learnology Labs, she leads cutting-edge research on the impact of AI on learning outcomes, equity, and ethics in education. Dr. Betts holds a Ph.D. in Curriculum, Instruction, & the Science of Learning from the University at Buffalo, SUNY, where her research focused on critical factors in parent-child mathematics engagement. With over two decades of experience in education, she has published extensively in the area of early childhood, developed numerous award-winning learning products, and is named on three U.S. patents for AI-embedded personalized learning technologies.
Dr. Betts is committed to leveraging technology to create more inclusive and accessible learning environments for all children. She is the editor of the "Handbook of Research for Innovative Approaches to Early Childhood Education and Kindergarten Readiness" and has been nominated for the American Educational Research Association (AERA) Karen King Future Leader Award. Her current work focuses on developing evidence-based frameworks and tools to support the responsible integration of AI in educational settings, with a particular emphasis on early childhood education.
Nicole Fuller
Ms. Fuller is a Policy Manager at the National Center for Learning Disabilities (NCLD), where she conducts research, analysis, project management, and builds coalitions to advance NCLD’s public policy and advocacy agenda. Currently, Nicole focuses on issue areas including postsecondary education and workforce development policy, federal appropriations, artificial intelligence and digital access, and teacher policy. She co-chairs two national coalitions, the IDEA Full Funding Coalition and the National Coalition for Public Education, and serves on the Board of Directors for the Committee for Education Funding.
Prior to joining NCLD, Nicole was a middle school math teacher for Fairfax County Public Schools. She also has worked as a Research Assistant with Transition Tennessee where she developed resources for Vocational Rehabilitation providers, families, and students on Pre-Employment Transition Services for students with disabilities. Additionally, Nicole supported policy research and development for the Tennessee Public Charter School Commission.
Nicole holds a B.S. in Middle School Math and Science Education from the University of Maryland and an M.Ed. in International Educational Policy and Management from Vanderbilt University, Peabody College.
Karinna Gerhardt
Ms. Gerhardt is a Manager on the Emerging Technologies team at the Federation of American Scientists (FAS), where she leads tech policy initiatives centered on tech equity, AI, and data privacy. In today’s increasingly complex world where new technologies can create as many problems as they solve, she is committed to advancing policy solutions that safeguard individual rights and dignity.
Karinna previously consulted with Pyrra Technologies as a disinformation and extremism researcher, where she tracked the lifecycle of false narratives through online information ecosystems. Prior to joining FAS, she worked with the Public International Law & Policy Group to implement human rights and atrocity prevention programs in Bangladesh, Yemen, and Libya. Committed to a career in public service, Karinna is a Truman Scholar, and holds degrees in Political Science and International Studies from Macalester College in St. Paul, Minnesota.
Mandy DePriest
Mandy is a Curriculum & Content Developer at AI for Education. She has over 15 years of experience in public education, having served as a classroom teacher, library media specialist, and instructional coach. She has also taught education technology courses in higher education settings as well as professional development workshops for teachers on the transformative power of technology. She is committed to ensuring that students are prepared for the dynamic demands of the future by leveraging the power of technology-driven instruction.
-
Amanda Bickerstaff
Hi, everyone. I'm Amanda, the CEO and co-founder of AI for Education. We're actually also a proud member of the Federation of American Scientists' policy sprint that went on a couple months ago.
Amanda Bickerstaff
We're really excited to have a really focused webinar today on policy perspectives of how we're going to shape the future of education with artificial intelligence. I have an amazing group of professionals with me today, not only on the panel, but also, we know, in our audience. So I'm actually going to put out a poll right now to see who's with us today. And, Mandy, if you don't mind going to the next slide. We want this always to be a community of practice for you all, so make sure that you're using the chat function to say hello. And we see already some great people from across the country, as well as in Argentina, which is amazing. And then also, if you have a question specifically for the panel, please use the Q&A function.
Amanda Bickerstaff
That way I can keep track; sometimes our chat goes really well, and it's one of our favorite things about these webinars. But use that Q&A function. And the chat is also for sharing resources. If you have a great resource you want to share, this is your community of practice, and we're really excited to have you here today. So we're going to start this in a little bit of a presentation style, and then we're going to move into our panel. Karinna here has been such a wonderful partner in this work. As part of the EdSAFE AI Alliance, we at AI for Education were given the opportunity to do a policy brief on AI and education. I know it goes beyond that.
Amanda Bickerstaff
This policy sprint spanned more than just education, but there are 16 really strong briefs in education. And when we submitted ours, we were like, you know, what would be amazing is to be able to take those insights and those ideas and bring them to you all, because you're going to be on the ground. And so we're going to learn a little bit more about the policy sprint, and then we're going to hear from three of the proposals. When we do that, we're also going to start thinking about how we can connect this to the work that you do. So it looks like, I mean, we have such a big range today, but we have people across the entire group, which is pretty awesome, really going to be focusing on the large picture, the macro, all the way down to the micro.
So what I'm going to do is hand it over to Karinna. Keep saying hello, and we will get started.
Karinna Gerhardt
Okay. Well, thank you so much, Amanda, and so happy to be here.
Anastasia Betts
Hi, everybody.
Karinna Gerhardt
I'm Karinna Gerhardt from the Federation of American Scientists. So, a little bit about us: in 1945, FAS was founded by a group of atomic scientists who actually worked on the Manhattan Project and were deeply concerned about the misuse of their research to cause harm. When they founded us, they committed FAS to advocating for the responsible development and deployment of science and technology. And today, that work has expanded to encompass a large range of policy domains, including our nuclear nonproliferation research and work, but also climate and clean energy, and emerging technologies like artificial intelligence. And our North Star is always evidence-based approaches. A big part of our mission is also our foundational belief in policy entrepreneurship.
Karinna Gerhardt
And that's our belief that individuals who want to make a difference can have an outsized impact on the policymaking process, as long as they are equipped with the tools and community to do so. So here at FAS, we encourage policy entrepreneurs to develop their ideas for policy change, gather feedback from stakeholders, and make their ideas more specific, timely, and actionable, and we provide them with a platform to publish and amplify their proposals. And can you please go to the next slide? So this is a visual that shows policy entrepreneurship as we define it, and it's a bit of a preview of what we did in our AI legislation sprint that these folks on our panel and at AI for Education participated in. Can you please go to the next slide?
Karinna Gerhardt
So, one of the main ways that we work to uplift policy entrepreneurs is by hosting these policy accelerators, or sprints. We open themed calls to our broader community, usually related to a certain topic; this one in particular, you can see, is AI and legislation. I think, as you all probably know, we're at a really key moment and a critical juncture for AI policy development across all levels of government in the US, and Congress in particular is really considering a range of proposals and approaches. And so we saw a key moment here to uplift the ideas that we know everyone in this community is having and give them a platform. So, next slide, please. We received a lot of interest from this sprint, which was really heartening to see.
Karinna Gerhardt
We ended up selecting 15 policy proposals to develop over the course of about two to three months. So, a true sprint: iterating on the original ideas that folks submitted, but also seeking feedback from stakeholders, including staffers on the Hill, in addition to other experts in our network. And this final slate, you can see, tended to fall into four major categories. We had a chunk focused on AI innovation and making sure that entrepreneurship and the R&D ecosystem are supported; a number focused on trust and safety, making sure that the tools being built are developed responsibly; a few that touched on AI in healthcare and the particular intricacies of those dynamics; and then, of course, our large chunk related to AI and education. Next slide, please.
Karinna Gerhardt
And I also will say that I've included a link and a QR code at the end of my slides, and we will send out those resources with all of the memos. We encourage you to check all of them out. I will mainly be focusing on the education ones, of course, because of the focus of this webinar, but they're all really great proposals, and we're really proud of what our authors came up with and are continuing to push forward on. So, here are some of the major themes that we noticed in the education section of these legislation sprint memos, which formed the single largest category.
Karinna Gerhardt
We think it stems from a general understanding that artificial intelligence is upending the paradigms of education that have existed up until this point, through access to new learning and tutoring tools, but also a bunch of new challenges related to how teachers and students interact with each other, what changes to curriculum need to happen, and things such as potential bias issues with algorithmic evaluations of student work. So, as everybody here knows, lots of things to think about. But we noted a few major areas that some of our participants, including our panelists here today, really sought to address through policy proposals. One of them is the need for more research, and more resources for research, that the federal government would be able to push forward.
Karinna Gerhardt
There is a lot that we don't know about how AI affects educational contexts right now, because it is developing so quickly. It's something that I struggle to remember sometimes: ChatGPT launched less than two years ago. So, a very rapidly evolving situation. But Anastasia Betts, one of our panelists here today, who will talk more about her and her co-authors' proposal, focused on accelerating these research pathways to make sure that we can get those answers into policymakers' hands as well. There's also another one from Byron Earnest that proposed a national center for AI and education that would pilot a university-based research program. So, lots of need for more research, more knowledge, more learning. We also know that school districts and teachers are often flying blind in the face of this evolving AI landscape.
Karinna Gerhardt
And so, related to the need for research, there's also a need for guidance for schools and for teachers in navigating this AI landscape. One of our proposals called for a new federally funded grant program and a technical assistance center designed to help states and school districts integrate the NIST AI Risk Management Framework into our education system. Additionally, guidance on equitable AI adoption for states and local agencies in order to prevent harms is, I know, a key component of the memo by Nicole Fuller, also one of our panelists today. Then finally, two of our memos focused on the need for upskilling, professional development, and AI literacy training for teachers, and also for their students, who are grappling with AI and the implications for their varying journeys.
Karinna Gerhardt
One of these was, of course, authored by our AI for Education team, who is hosting this webinar today, and they will talk more about their proposal, but they propose developing AI literacy standards, for one, in addition to giving more resources to teachers to help serve these needs. Another came from Zarek Drozda, who proposed creating a Digital Frontier Teaching Corps, a national fellowship program for classroom educators to train them on AI. So, next slide, please. Here you can see the QR code to find the full list of memos and read them more thoroughly. Again, I encourage you to do that. In general, we closed off our sprint by hosting a couple of briefings with Hill staffers and interested Hill offices to promote these ideas.
Karinna Gerhardt
And afterwards, we also helped facilitate some one-on-one connections between interested staffers and the authors themselves, some of which I know are still continuing, and our authors are being tapped as expert contributors in a variety of ways. I will say that the current approach to AI legislation on the Hill in DC is very sector-specific. Leadership is tasking committees to work on more limited AI bills rather than one big package that addresses AI across all of these different sectors. We got pretty good feedback on this approach, especially with our education proposals, which are very tailored to specific sectors. The last thing I'll say here is just that these memos continue to exist as artifacts to use for advocacy. These proposals could be taken up by really any branch of government.
Karinna Gerhardt
I know they're targeted towards Congress, but many of these could also be incorporated into executive actions or taken up at the state and local level as well. So we're just really excited to see where authors take these ideas. And with that, I think I will hand it back to Amanda; really excited to hear what our panelists have to say.
Amanda Bickerstaff
Absolutely, Karinna. And I think that when we first talked about this, way back, it feels like a million years ago, it was really interesting to think about these opportunities to actually be part of the policy conversation, alongside organizations doing research on the ground and the work that we are doing directly with stakeholders. So what I'm going to do is call up our panel. And actually, can we come off the share just for a second, Mandy? I want everyone to see everyone first. We have this amazing panel, and we're representing a wide gamut of the presentations and proposals that were done. And so I think that we come from three different lenses based on our own work. As you're probably not surprised, our work was focused on AI literacy for teachers.
Amanda Bickerstaff
And so what you'll see is that the proposals reflect the missions of the organizations that Nicole and Anastasia are leading and part of. So, really excited to have you all here today. The way we're going to use the rest of the time is to do a little bit of a deep dive into each of the briefs from the three of us, with Mandy representing AI for Education. And then we're going to have a conversation about how we pulled it together, what some next steps are, and how you all, in your own contexts, can think about this similar type of work. So we're going to start with Nicole. Nicole, it's so good to see you. It's been a pleasure to get to work with you over the last year as part of the EdSAFE AI Alliance.
Amanda Bickerstaff
And so, Mandy, if you don't mind pulling up that first slide. I would love to know a little bit more about you and your organization, but also, what did you come up with?
Nicole Fuller
Yeah. Thanks, Amanda. Hi, everyone. So great to be here today. My name is Nicole Fuller. I'm a policy manager at the National Center for Learning Disabilities. I also identify as a former educator. I taught middle school and high school, both here in the US as well as in the UK, and I saw we have a lot of folks from around the globe here today, which I just think is so amazing. So thank you again for joining us. And also a really quick shout-out to Karinna and her team at FAS, because they really did a fantastic job of bringing us together, convening us, and giving us the space to take our ideas and really turn them into public policy. This is something I do in my work at NCLD, which I'll get into in a moment.
Nicole Fuller
But just to be able to do this in a synergistic way, in tandem with others who have great ideas in the space, gave me and my organization so much energy and enthusiasm to do this work. So, my organization. I'll give a really quick overview before I talk about our proposal. The National Center for Learning Disabilities is an organization that has been around for nearly 50 years. We were founded in the 1970s, when, here in the US, the Individuals with Disabilities Education Act had first become law. So students with disabilities finally had civil rights and protections in schools. That was a really incredible moment. And we recognize that learning disabilities are very real in schools, they're real in postsecondary education, and they're real in the workforce.
Nicole Fuller
And our organization touches on all of those, the continuum from K-12 education and higher education into employment, because learning disabilities are lifelong; they don't just go away at any point in a person's life. Our organization has a core belief that students with disabilities have long been beneficiaries of advances in technology. We do recognize that AI tools have immense potential for improving accessibility and for improving learning outcomes. But we are also cautious. We know that if new AI technologies are not implemented with students with disabilities, or with other marginalized groups of students, at the forefront, we risk furthering the marginalization that has been present. When we participated in this sprint, we looked a lot to data from a partner of ours, the Center for Democracy and Technology.
Nicole Fuller
They've collected, over the past several years, some really powerful polling data about student activity monitoring software. And all of that data has been broken out and disaggregated by civil rights groups. And they've polled teachers, they've polled families, they've polled students. So, thanks to their data, we have been able to understand this issue from several different standpoints.
Amanda Bickerstaff
Nicole, just sorry to interrupt, but I don't think that everyone knows what student online activity monitoring is. Can you talk a little bit about what that is?
Nicole Fuller
Absolutely, yes, I was just going to go into that. So this is AI-powered software, often deployed on a student device. Now that we're living several years after Covid, we know that many, if not most, students have a school-issued device, often one that they take home with them. And that software has been deployed largely to address student mental health needs: monitoring students' online activities, for example, tracking their search history and their online activity, sometimes in real time, and using algorithms to detect risk of harm or self-harm. I'll talk about this a little bit later. Of course, the student mental health crisis is very real.
Nicole Fuller
But what we have seen emerge from the data from the Center for Democracy and Technology is cause for concern for our learning disabled community, in particular for students' privacy and safety, and because, as people with disabilities, they are a protected class under civil rights law. There are a few data points on the slide here. The first is that the use of student online activity monitoring software is fairly widespread. Most teachers have reported that it's being used, and we know anecdotally from our constituents, like family members, that it is fairly widespread and that they know it is out there. We also know that special education teachers are more likely to report that their students are getting in trouble because of what they are doing online.
Nicole Fuller
And this can even escalate to the point of a referral to law enforcement. So something that they've seen in a student's online activity may eventually make it outside of the classroom, outside of that school, to law enforcement. Because of this, we do see some chilling effects. The data show that 61% of students with a learning disability or learning difference report that they don't share their true thoughts online because they know they're being monitored. So it has that chilling effect. And then families, while they're generally aware of the presence of these technologies, are concerned and feel generally left in the dark. And concern about student data privacy is generally higher among parents of children with disabilities. So this is why we see disproportionate impact on students with disabilities. This is the problem that we're seeking to address.
Nicole Fuller
And our policy idea is really centered around, like Karinna said, actions that the federal government can take, especially supporting states and local school districts, who ultimately are the ones making decisions. There are four bullet points here, and each is a way of getting at this issue. It's multi-pronged because we really don't feel like there's one single solution or silver bullet for a complex and nuanced issue. So I'll walk through these quickly, but the QR code up in the left-hand corner goes into more detail if you're interested in reading it. The first one is around data collection. Thanks to the data from the Center for Democracy and Technology, we see a lot of trends emerging.
Nicole Fuller
But we would love to see the federal government charged with collecting this data about technology, and especially about its effects on students, like students with disabilities, who are protected under civil rights laws. So that's the first part. And data and research go hand in hand; I know my colleague, Dr. Anastasia Betts, is going to talk about research. Research and data, in our opinion, are absolutely essential to being able to make progress. The second piece is parental notification. Having just spoken to a lot of parents about this issue, they really feel like they need to be informed about what tools are used and how this information is being shared. And we would like to see school districts playing a role in that and notifying parents about how those technologies are used and the potential risks.
Nicole Fuller
So I think that does require some policy change to make happen, because what we're seeing right now is just a lack of transparency and a lack of communication around what's in effect. The third one is within the US Department of Education. The Office for Civil Rights is tasked with upholding compliance with all civil rights laws. This includes Section 504, and it includes civil rights laws that protect students based on race, ethnicity, gender, and sexual orientation. And we really feel like the Office for Civil Rights, or OCR, can investigate discrimination complaints, ones that may arise because of disproportionate impact, including the discipline that I mentioned. We also feel like they could develop guidance on how schools can do this better. But, being here in DC, I know that OCR is very strapped for capacity.
Nicole Fuller
The funding and the resources that they get do not allow them to address all of the complaints that come in and fulfill all of their civil rights obligations. So Congress, being the appropriating body that it is, can invest in OCR to give it more leverage to make some of this change happen. And then the last one: we do realize that a lot of decision-making is happening at the state and local level, and the federal government is not going to play a particular role in telling a specific school or district that it should or should not use a particular technology, because those decisions are more localized.
Nicole Fuller
But we do feel that with technical assistance, maybe a federally funded technical assistance center, states and localities could get support so they can focus on how to adopt safe and equitable technology. That includes both policies and procurement, and I think it would definitely need to include a specific focus on students with disabilities. I know we'll have more conversations about this, but that's an overview of our policy proposal. It's been really exciting to work with others on this. And, you know, we're at a really interesting time here in Washington, DC, with the election coming up. But it's definitely a pivotal point where folks in Congress are thinking about how we can address AI. There is momentum, there is energy, and we're really just happy to be here to share our voice and our perspectives.
Amanda Bickerstaff
Thanks, Nicole. And I think that what's really fascinating about your approach is that AI surveillance and monitoring has been around for almost seven or eight years. But with generative AI and systems that are becoming more capable, the human-in-the-loop component will start to diminish. And so the need is to have this happen now, because while it is focused on students with disabilities, I think what you're talking about absolutely relates also to students from minority backgrounds, who have similar types of experiences and whose data is looked at differently, as well as, more generally, to parents understanding how systems on their student's device will start to make some pretty impactful decisions about, you know, their place in school. So I appreciate you for joining and sharing that. And we're going to go to Dr. Anastasia Betts.
Amanda Bickerstaff
Anastasia, we met through this process, which has been pretty great. And we're always excited to see better research; if anybody's a researcher in the audience, do more, we'd love to see it. So we were really excited to see your approach to this research accelerator. Anastasia, do you want to share?
Nicole Fuller
Sure.
Anastasia Betts
Thank you so much. The slide is already up, so we're ready to go. I'm here representing a group of my colleagues who are not with us today, but hopefully will be watching the recording; they were extremely pivotal in putting this proposal together. So I'm honored to be representing the thoughts of our group here. A little bit of background on the four of us: we have worked together for the last decade on developing edtech solutions to solve the persistent problem of student underachievement in both mathematics and literacy. And that experience developing edtech solutions, digital solutions, taught us a lot.
Anastasia Betts
We've learned just how difficult it is to get your solution or your product vetted, to do the research that you need to do to prove the efficacy of your product, to prove that it gets the outcomes and the impact at scale. This kind of research is very difficult, very lengthy, very tedious. There are many pieces to the process. And working with IES and other government institutions to get things like ESSA evidence tiers or certification is quite challenging.
Anastasia Betts
So the problem that we were trying to solve, or attempting to address, with our policy idea really centers on the mismatch between the pace of innovation for generative AI, which, as Karinna said, holds so much promise and potential benefit for addressing things like learning loss due to Covid, underachievement across the board, personalization, and learner variability. Generative AI is really being looked at as having tremendous potential to do all of these things. But the pace of innovation, specifically with respect to generative AI, which is what we focused our proposal on, is lightning speed compared to the pace of educational research, which is many years long. You can spend many years doing efficacy research on your product while the technology is evolving, sometimes on a week-by-week basis.
Anastasia Betts
And by the time you have completed your research, your product is no longer the same, because you've implemented new technology and innovation into it. And if you haven't, then you haven't kept up and you're not leveraging the promise of the innovations that are evolving from week to week. So this became a really big concern and challenge for us as we were developing our own solutions. We recognized that it was also a challenge as we looked at other edtech providers and solution developers. And so we really sat down and started to think about how to solve this problem.
Anastasia Betts
How do we ensure that an edtech solution can really take advantage of and leverage the innovations in technology to serve the needs of students and educators, but still put something out there that has been proven or vetted or has the evidence, and that's really key here, that we have the evidence to show that this solution works? Because coming from an educator background, I was a teacher as well for many years, what we see happening in broader K-12 education right now is that we're so desperate to help our students. We want to help our students. We want to find tools that can help them learn the things they need to learn. And I look back on my days as an educator, where I was an early adopter of everything.
Anastasia Betts
Hey, will that thing help my students? I'm going to try that thing. And we have so many wonderful early-adopter-type teachers who are adopting generative AI tools now, but there's no evidence to back up the tools being deployed in classrooms, to validate or prove that they work. And yet we're starting to use them with students. On the other end of the spectrum, because there isn't a lot of evidence and there are a lot of fears around using generative AI in the classroom with students, there are just as many folks who are becoming more and more resistant to deploying new technologies in their classrooms. So across the spectrum, we have a problem of the mismatch between the pace of innovation and what we can prove works with kids and gets impact at scale.
Anastasia Betts
So our policy idea focused on developing a generative AI and education research accelerator. The idea was to build a consortium of cross-sector stakeholders, whether that be research universities or entities, edtech solution developers and providers, schools and districts, and school sites, that would agree to become part of this consortium so that we could develop new, accelerated ways of conducting efficacy research and reduce the time it takes, for example by reducing the amount of paperwork or recruiting. Because all of the folks would already be part of this consortium and have agreed to participate in ongoing research, it would eliminate some of the slower, more bureaucratic, administrative parts of the process.
Anastasia Betts
And we'll talk more about some of the challenges that we encountered and the ways we developed our ideas as we get into the conversation later in this webinar. But that's essentially our policy idea, and we're so excited to be able to share it with you today.
Amanda Bickerstaff
Thank you so much. And I think we are already seeing a dearth of research, especially around edtech tools that have now been deployed for almost a full year. So if you're an edtech professional building on this call, and we had quite a few of them, then whether this proposal moves forward or not, the idea of starting to think about how you're going to use your own evidence, your own research, your own data, to start informing how we think about how these tools can impact teachers, students, leaders, and families is going to be really important. We see, just in general, that research in education is very slow and often gets glossed over because it's considered to be hard.
Amanda Bickerstaff
But one of the most interesting things about generative AI and AI capabilities is that it actually makes it easier to do research as well. It can be the research partner. Instead of having to go and spend a lot of money with a partner that does this, you can start doing this with your own internal knowledge and a couple of the right people in the right places with the right tools, which we should hopefully start to see become more of a practice moving forward. So thank you, Anastasia. Now, going over to Mandy. You know, Mandy from AI for Education, she might look familiar. She held the pen on this pretty significantly. And I'll just point out, we did go back and forth so many times on whether it should be super pie in the sky or super practical.
Amanda Bickerstaff
And probably unsurprisingly, it's pretty practical. So, Amanda, you want to talk a little bit about what we did?
Mandy DePriest
Yes. Hi. So you are here with us, so hopefully you're familiar with AI for Education as an organization. But just as a quick overview, we are committed to building AI literacy for all teachers, students, school leaders, any educational stakeholder. That was our guiding light as we were developing this policy. And like Amanda was saying, it is such a huge issue. There are so many moving pieces and so much that we want to get done. It was kind of hard to narrow it down and make it as tactical and practical as we wanted it to be, especially in a way that would be realistic in terms of a policy brief. But the problem that we ended up really focusing on was kind of what we used to help filter down.
Mandy DePriest
What we really wanted to say, by focusing in on the problem, is something we're already aware of that AI has exacerbated: the pace of technological change in society has increased at a faster rate than the pace at which our schools are able to adapt. Schools, like any large bureaucratic organization, are sometimes slow to adapt, whereas the workforce, and even society as a whole, can move at a rapid pace. We cite research in our policy brief about the changing nature of the workforce, where a lot of job loss and job transformation is going to be connected to AI implementation. And notably, we found McKinsey studies showing that women and people of color are statistically more likely to be impacted by AI job loss.
Mandy DePriest
A lot of it is going to center on customer service, sales, office support, and production work, areas in which women and people of color represent a larger percentage of the workforce. And so, as we are preparing students to matriculate into that workforce and also just to be engaged citizens in a digital society, right, it's not just about making workers, it's about creating people who can be engaged and fulfilled at every level of the modern world, we need to make sure that they have a certain level of AI literacy so that they can step into those roles with minimal risk to themselves. But anything we want to instill in our students has to start with our teachers. So we kept going back and back, and it always ends up with the teachers.
Mandy DePriest
And we cite research again in our brief that many teachers feel unprepared to use generative AI tools. It is an arrival technology that has kind of bubbled up from the ground, with students bringing it into the school. It's happened like smartphones or the Internet: we have to deal with it whether we want to or not. And among all of the other concerns that teachers have, there just wasn't a lot of formal professional development being offered to teachers in a systematic way. We know that a lot of states have been developing guidance. I think we're up to 23 now, is that right, Amanda? So we have states with guidance, but that leaves more than half of US states, including Arkansas, the state that I live in, without guidance.
Mandy DePriest
And so a lot of schools are taking kind of a wait-and-see approach. I know my local school district, which I interact with occasionally, has taken that approach, and it's leading to some problems, because, as I mentioned, this technology is here. Students are using it, whether you're prepared for it or not. And so we really saw this need for teachers to build their own AI literacy, so that they could talk about it when issues inevitably arise with students and also make informed decisions about how they want to incorporate it into their professional practice. We also don't want to risk widening the digital divide with uneven implementation. I actually came across a study which we did not cite in our policy brief, because we were writing in the spring.
Mandy DePriest
This came out a little later, but it was from RAND, and it said that among school districts serving majority white students, 65% of those in the survey planned to implement AI literacy training for their teachers, whereas only 39% of schools serving majority students of color were planning to do that. And that, in combination with the McKinsey statistic about people of color being more impacted by AI job loss, is concerning. So we don't want to see a situation where the rich get richer and the poor get poorer in terms of digital knowledge. We want to make sure that everyone has access to quality AI literacy training. Now, there are some current proposals that are great starts. Most recently, the LIFT AI Act moved forward last month.
Mandy DePriest
In the House, we have the NSF AI Education Act and the AI Literacy Act, and they're wonderful. They acknowledge the importance of AI literacy, but oftentimes the funds attached to them for learning are merit-based and competition-based. You have to seek them out and apply for them. Oftentimes they're targeted at folks going to school to get a degree and enter the field as specialists, whereas we are more concerned with on-the-ground, rank-and-file teachers who are currently in the trenches and need resources on how to deal with generative AI in the classroom. And so our policy proposed establishing a dedicated program within the National Science Foundation to provide that ongoing AI literacy training for everyone.
Mandy DePriest
And that could be a way to mitigate the imbalance between states that are providing guidance and states that are not, because everyone would have something they can at least look at, even if their state has not issued guidance. We would like to include, as part of that, the development of AI literacy standards: a clear definition of what AI literacy is and what behaviors we would see in teachers and students who are exhibiting AI literacy. We'd also like to see an accompanying resource hub that teachers could access for free and implement in their schools, as well as grants for teacher training. And then accompanying that would be an outreach program, because if you have to take it upon yourself to go out and find these opportunities, that makes it less likely that everyone is going to see them.
Mandy DePriest
And so a systematic outreach, PR, and marketing campaign to let people know this is here would go a long way toward equalizing AI literacy. There is some other stuff in our policy as well. I think Dan will have linked it in the chat. We talk about ongoing assessments and monitoring, making sure that the needs are still being met by the service. But ultimately, we think just having this program available, and making schools and districts aware of it, could go a long way toward equitably distributing AI literacy.
Amanda Bickerstaff
That's great. I think it's really interesting because this would really build off of what we do. And even to Yvette's question about AI literacy for those who don't have access: what we're finding is that once teachers build their own AI literacy, they become much more invested in building their students' literacy. They're the conduit to being bought in, believing this is important, and then having that connection to their students. And what we're seeing, against some of the traditional breakdowns in the use of AI, is that minority communities are actually adopting these tools more than white communities. So there is a real opportunity to democratize this access.
Amanda Bickerstaff
But we see that there is a direct relationship: if someone has never used a generative AI tool in education, they have a more negative perception of it, whether that is teachers or students or leaders. We tend to think this is going to be really bad until we start using the tools. And once we start using the tools, we start seeing the opportunities that go beyond the things that are considered negative and, of course, are part of this. And so there's that idea of the conduit of starting here. We've been talking more and more about this being the year of AI literacy for teachers, with the ultimate goal of AI literacy for everyone. And we're really hoping to see this be something that could be used. We're going to take that segue.
Amanda Bickerstaff
So I'm going to pull off the slides and we're going to bring everybody back up. We have time for one good question, and it's going to be a bit of a hybrid: what did you learn through this process? Mandy and I and Corey learned a ton about what our actual goals were and how to navigate a policy approach versus the on-the-ground work we do, which we'll talk about. But then, with that learning, how can you connect it to the edtech, nonprofit, K-12, and higher ed leaders and teachers here, and what they can be doing right now in their own context?
Amanda Bickerstaff
And I know, Nicole, you're going to have really good answers; I can already see in my head what could be done right now. But I'd love to go around and have that be kind of our wrap, because I love how much went into the proposals and hope you all found that useful. But now let's connect it to your practice, too. So, Nicole, if you want to go ahead first.
Nicole Fuller
Sure.
Nicole Fuller
Thanks, Amanda. Like I said earlier, this was a really amazing process, learning not just within our own organization but from others in this space. And even though the sprint has concluded, we're still having these conversations, and that to me is so fruitful and so important. I work in the policy space personally because I do believe that our government and our elected officials have a duty to support change, social change in particular, and to uphold the rights of people, especially marginalized people, especially people with disabilities. And public policy is, well, personally, my love language, but public policy is the way that we can do this. So I think having an opportunity to take our vision and really sit down and craft a policy proposal was definitely a learning process for all of us.
Nicole Fuller
And I think, really, we touched on this throughout the conversation: these policy proposals were crafted for the federal government, for Congress, for agencies, mine in particular for the Department of Education and the Institute of Education Sciences, among others. But anyone can take action: state legislatures, state education agencies, local education agencies, schools, school boards. There's so much that can be done at every level, and I think that is why I'm so glad we're all convened here today, because there is a role everyone can play in making things better for kids and families and teachers, and really leveraging technology in powerful ways while also mitigating its harms and risks. So that, to me, is my takeaway, my biggest learning, and I'm really happy that you're all here today and interested in these topics and issues.
Nicole Fuller
And I look forward to seeing what kind of progress we can make from here, because once we have identified the things we want to address, there's really no reason why we shouldn't take the action needed to make change. And it can be at any level.
Amanda Bickerstaff
First of all, beautiful answer, but I'm wondering if we could get slightly more tactical. What should a school or an agency be thinking about with these AI monitoring systems? If there's one tip you can give, whether it's knowing what they are or having some kind of parent communication, that would be amazing.
Nicole Fuller
I think the parent communication is one of my biggest ones. Transparency, plain language, clearly accessible on paper and on school and district websites: what is this information, so that parents can really digest it? That is one big takeaway we can act on now, immediately. And then, continuing on, if schools are already employing these tools, and we do know there are challenges, like mental health concerns or school privacy and safety issues, I think that if schools do decide to use the technology, they should keep a really diligent eye on the types of students the algorithms are picking up. Is it predominantly kids of color? Is it predominantly kids with disabilities? That's just really important from an equity lens, to make sure that certain students are not being harmed in disproportionate ways.
Amanda Bickerstaff
Yeah, and I think that's it. That's why I really think that while NCLD is very focused on students with disabilities, this is such a bigger issue. And to be aware: first of all, informed consent, everyone. If you are going to bring in AI-powered systems that are going to make decisions about student safety or even students' place within the school, you have to inform not just the parents but also the students. And this is a great opportunity to use generative AI to help you do that better, through better communication, contextualization, and linguistic support. But the second thing is, if you're going to implement these tools, you need a process to review them in a pilot before you go all in.
Amanda Bickerstaff
And that pilot should look at demographics, what Nicole just said, and it doesn't have to be rolled out to everyone at once. Try it for a couple of weeks, try a couple of them, see the comparisons, and see if there are biases occurring that you wouldn't be aware of if you weren't looking. So I really appreciate that, Nicole, and I hope everyone will have a chance to read the full policy brief, because I know you'll find value in it. So, Anastasia, same question. What was the learning? But then let's get practical and tactical. I think you could talk to the edtech professionals on here, but also to leaders in higher ed and K-12 about how they can pick better tools. So I'd love to hear from you.
Anastasia Betts
Yeah, I think your final comments on Nicole's proposal provide a nice segue, because one of the things that we really learned, well, we knew this in advance, but having the opportunity to clarify our ideas by writing them up in policy proposal form really crystallized it for us, is how important evidence is. It is so important to have evidence when you're implementing a solution in your school that is being used by children.
Anastasia Betts
You need evidence that this thing works, with data to back that up, or, if it's very new, hot off the presses and full of new technological innovations, there still needs to be some evidence that the product is going to have positive impacts on your learners, rather than it just coming into your classroom with no data and no information. And a lot of times, as we've looked at edtech solutions across the board, we've seen that they are not coming with evidence or data about their impact, or it's not substantiated evidence, or it's not evidence you can have confidence in.
Anastasia Betts
It's very easy to say it works this way, that way, or the other way, but not provide the contextual data about how it worked, when it worked, why it worked, who it worked with, or who it didn't work with, which is sometimes obfuscated in the materials presented to schools when they're having to choose whether or not to use a program. But the critical challenge we were trying to tackle was this: how do you get the evidence? You have a great idea, and I know there are edtech people on the call. You have a great idea, you're building a prototype. How do you get that evidence?
Anastasia Betts
And for us, we really looked at models of this in the FDA and the health sector: how do they bring promising new drugs to market? We know that if you're going to try to cure cancer, you don't want to wait ten years to have a medicine you can try, and the risks are high. So how do we safely bring these things to market? We saw a parallel between what they were doing and what we wanted to do, and we used the FDA's programs as a model for what we recommended in our proposal. So, back to what you can do. My advice for the folks on this call, especially the school leaders and the early adopters, is, number one, demand better evidence from your solution providers. And the second thing is, volunteer for research.
Anastasia Betts
One of the biggest challenges is getting schools and districts to sign up to partner with solution developers to test things, to try things out. It's just like what Amanda said a minute ago: try it out for a little while. Don't adopt it, just try it out. Get some evidence. See what it did. For schools, districts, and local education agencies, volunteering to be part of this is really critical to developing solutions that actually get impact at scale.
Amanda Bickerstaff
That's great. I mean, you're speaking our language. And in fact, Dan, if you don't mind putting the six questions for edtech providers in the chat. We say demand better, demand some evidence. You know, it is moving so quickly. I was speaking to someone from Microsoft, and they said you have to remember that these tools are only nine months old. That tells us we should not be fully implementing them, especially those that are student-facing, until there is better evidence. And so I think this is so important: if you are an edtech provider, you need to be thinking about the ways in which you're going to start to support real deep knowledge. If you have a lesson planning tool, okay, does it save time? But does it actually improve instruction? Does it improve engagement? Does it improve teacher burnout?
Amanda Bickerstaff
Then, like, what is it doing? Is it just replacing teachers, paid teachers? Or is it creating better opportunities for high-quality instruction? If you're using it with students, is it something that's actually motivating them, or is it demotivating them? Or is it something that highlights what's already working or not working in the ways we work together? And I think this is frustrating for us on the other side, because people, I think, look at us and say, can you not recommend a whole bunch of tools?
Amanda Bickerstaff
And we've stuck to being tech agnostic, and we have not recommended edtech tools yet, because we just have not seen the technology reach a level of efficacy on its own and then seen it applied in ways that have really made a difference. And we're really open to it. So, everyone, I just want to say we are so open to this being a part of how we work. But until there is better evidence that these tools actually support teaching and learning, I think we have to take a cautious approach. And you do not have to buy in. If you're in a leadership position, I know you're being absolutely pressed to buy these tools, by teachers, by parents, by students, from outside.
Amanda Bickerstaff
It's okay to take a step back and say, hey, we're going to actually run a little bit of a competition. We're going to try a couple of things, see which one works better, then roll it out in a meaningful way and build our own evidence base. I think that's just going to be really important. So I appreciate it; that's why we get very passionate about this. I really appreciate the work that you all did, and you guys have an amazing team. Mandy was very funny. As I was saying, we went in circles, everybody. We actually did. We rewrote this, like, a week before. Not fully, but we changed it more than once.
Mandy DePriest
We made some major revisions.
Amanda Bickerstaff
Don't judge us. We do have a slight perfectionist bent here at AI for Education, but it was really interesting because we've never done this before, right? We are practitioners, we're builders. And so I know it was a bit of a challenge to even figure out what level we wanted to work at. Maybe you can talk a little bit about that and then get to that really practical end.
Mandy DePriest
And I'll shout out Karinna and the FAS team as well for being amazing resources and structuring the conversation in such a way as to walk us through this process as folks who are less familiar with policy. One of the things they kept saying to us was: take a step back, take a step back. What is the simplest manifestation of this? Because we're big-ideas people. We could talk philosophy all day, but in terms of making policy, having that laser focus on practicality and line-item budgeting was really helpful for us in crystallizing our vision.
Mandy DePriest
And so, harking back to Nicole's comment on things we are trying to get enacted at a federal level that you could bring down to the micro level, to the state, and even to the local district or school level: it's doing that same process of stepping back and stepping back. It's easy to think, okay, this is all happening so fast, we need to do everything all at once, but it is always going to be helpful to stop. I've heard the saying, go fast, but don't hurry. So be very mindful and conscious about talking to your stakeholders, getting a sense of what people need, what they want, and what they're already doing, and then use that to inform the development of policies for your district. I wouldn't wait for federal policy.
Mandy DePriest
One of the things I learned from this is that it's an extremely long timeline, likely over a year, for something to go from a proposal to being enacted: passing the House and the Senate and being worked into a budget. And AI, of course, is moving at a much faster pace than that. So I would definitely look at ways you can implement this at a smaller scale, on your level. Again, always with that stepping back: what is the simplest thing we can do first, right now, that centers our people and is going to move us in the right direction?
Amanda Bickerstaff
Absolutely. I just want to bring Karinna back on. First of all, thank you to our panelists and to Karinna and the whole team at FAS. I think what you've heard is a couple of things. Number one, there are so many important angles to take when thinking about AI and its impact on education. You can see that we are three organizations with the same hopes, right, of an AI future in which we're doing better things by our kids and education has gotten better, but we took three very different approaches. And if you look at the other approaches, you'll see there is so much richness to this discussion. But I think the most important thing we can do is take this idea that you can be involved.
Amanda Bickerstaff
As Nicole, Anastasia, and Mandy said, you can do this. Whatever part of the world you're in, you can be a part of this, because no one has it nailed. And voices like ours, which didn't exist a year and a half ago, can be this big because of this moment in time. So we really hope this has been galvanizing for you all, to see these approaches but also to think about how to bring this back to your context. I just want to say thank you to everyone. It's so amazing to have you as part of our bigger family now. And I really appreciate everybody in the audience for hanging out with us; we hope this has been useful for you. Please make sure to follow the different proposals.
Amanda Bickerstaff
And we're gonna drop everyone's contact information in the channel right now. So if you wanna get in touch with Karinna, Nicole, or Anastasia, you know where we are. We're gonna do that now, but I just really appreciate you all. And we'll be back next week; we're doing another one focused on the female superintendency and the impact of AI on women educators, leaders, and students. So just really appreciate everybody. Have a great day, morning, evening, wherever you are, and we look forward to the next one. Thanks, everybody.