The Top 6 Questions for Schools to Ask GenAI Edtech Companies
GenAI Edtech vendors will come knocking on school doors in droves this year. Here are our Top 6 questions to ask them about their solutions.
AI Capabilities and Limitations
Q1: We know that generative AI (GenAI) is a new technology with extensive limitations. How does your product indicate when it's uncertain or requires human review? What controls do you have in place to identify and reduce hallucinations?
Why it’s Important: Because of how they are trained, most GenAI tools hallucinate, meaning they sometimes output incorrect or misleading responses. It's important to know how likely these hallucinations are and to establish protocols that mitigate any harmful effects.
Mitigating Bias
Q2: It’s important that the tools we use do not cause harm to our students or teachers. What steps are you taking to identify and mitigate biases in your AI models? How will you ensure fair and equitable outputs?
Why it’s Important: Understanding how an AI tool handles potential bias is critical because AI models can inadvertently perpetuate and even amplify biases present in their training data (e.g., ChatGPT has been trained on text from the internet). This could lead to unfair outcomes for students and may not align with the school's commitment to inclusivity and fairness.
Student Privacy and Ethical Data Use
Q3: Protecting student data privacy and ensuring ethical use of data are top priorities for our school. What policies and safeguards can you share to address these concerns?
Why it’s Important: Privacy and ethical data use are top concerns in any educational setting. Ensuring the AI tool has robust policies and safeguards is crucial to protect sensitive student data and maintain trust within the school community. It also ensures the school's compliance with legal and ethical standards.
Human Oversight and Quality Control
Q4: Our educators need to validate and trust AI-generated content before use. What human oversight and quality control measures do you use? How do you ensure feedback from teachers and students is collected and acted on?
Why it’s Important: Human oversight ensures that the AI tool operates as expected and maintains a high standard of quality. This builds trust among educators, who need to rely on the tool's outputs, and helps catch and correct any mistakes or misjudgments made by the AI. Teachers also need to be equipped to understand new tools through training and piloting.
Evidence of Impact
Q5: We need evidence that your AI tool will improve learning outcomes for our student population and/or effectively support our teachers. Can you provide examples, metrics and/or case studies of positive impact in similar settings?
Why it’s Important: Because generative AI technology is still at an early stage of development, it's important to ask for evidence of effectiveness. This will help ensure that the AI tool delivers the desired improvements in learning outcomes and/or teaching impact.
Accessibility and Inclusive Design
Q6: Our school needs to accommodate diverse learners and varying technical skills among staff. How does your tool ensure accessibility and usability for all our students and staff? What professional development (PD) is available?
Why it’s Important: Accessibility and inclusive design ensure that all students and staff, regardless of their abilities or resources, can use the AI tool effectively. This is critical for promoting equality in the learning environment and ensuring that the tool serves every user well. It also aligns with legal requirements and ethical standards for inclusivity.