1/8/2026 · 2 min read

Meet the "Glitch Busters": The Student Team Teaching AI Safety to Kids

In a world where Artificial Intelligence (AI) is becoming as common as textbooks, a team of elementary students is stepping up to ensure their peers use this powerful technology safely. Meet the AI Glitch Busters, a project dedicated to teaching kids in grades K-5 how to navigate the "tricky" parts of AI with confidence and critical thinking.

The Problem: When Smart Tech Makes Mistakes

AI can seem like it has all the answers, but as the Glitch Busters discovered, it frequently gets things wrong. This leads to several major issues for young users:

  • AI Hallucinations: Sometimes AI simply makes things up, like claiming the moon is made of cheese or providing a pizza recipe that uses glue instead of sauce.

  • AI Bias: Because AI learns from human data, it can pick up unfair habits, such as favoring certain writing styles or failing to recognize different skin tones equally.

  • Privacy Leaks: Some AI tools might ask for personal details like a home address or birthday, which can be dangerous if shared.

  • Bad Habits: AI can even learn to be rude or spread misinformation if it isn't guided by clear, safe rules.

The Solution: A Fun Way to Learn

To solve these problems, the team built a multi-layered educational platform hosted on GitHub Pages. Their solution includes:

  1. Kid-Friendly Videos: The team used Adobe Express to create 12 animated videos that explain complex safety topics in under two minutes.

  2. Interactive Quizzes: Students can test their knowledge with a pool of over 60 random questions and earn digital badges for correct answers.

  3. AI Text Classification Tool: To go beyond just watching and reading, the team built their own AI training app. This custom tool helps users practice identifying if a chatbot's answer is biased, a hallucination, or a privacy risk.
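The post doesn't describe how the team's classification tool works internally, but a minimal sketch of the idea (here, a purely hypothetical keyword-based checker in Python; the labels and keyword lists are illustrative, not the team's actual code) might look like this:

```python
# Toy example of labeling a chatbot answer as a possible risk.
# The categories and keywords below are hypothetical illustrations.

RISK_KEYWORDS = {
    "privacy risk": ["home address", "birthday", "phone number"],
    "hallucination": ["moon is made of cheese", "glue instead of sauce"],
}

def classify_response(text: str) -> str:
    """Return a risk label for a chatbot answer, or 'looks ok'."""
    lowered = text.lower()
    for label, keywords in RISK_KEYWORDS.items():
        # Flag the answer if any known risky phrase appears in it.
        if any(keyword in lowered for keyword in keywords):
            return label
    return "looks ok"

print(classify_response("Please tell me your home address."))  # privacy risk
print(classify_response("The sky is blue."))                   # looks ok
```

A real tool would likely use a trained model rather than fixed keywords, but the practice goal is the same: helping kids spot when a chatbot's answer deserves a second look.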

Built "For Kids, By Kids"

What makes the Glitch Busters project truly unique is the team's process. Starting in October 2025, the students interviewed experts and teachers to understand the real risks of AI in classrooms. They used ChatGPT, Google Gemini, and Claude not just to research, but to help write and debug their code.

"We want kids to think for themselves, not just believe everything a chatbot says," the team explains. By turning safety lessons into a game, they are helping their community grow up to be careful and confident with the technology of the future.

What’s Next?

The Glitch Busters aren't stopping here. They plan to volunteer at more schools to show students and teachers how to use their tools. They even hope to one day build an AI safety chatbot specifically designed to answer kids' questions in simple, friendly language.

For this team, keeping AI kind and respectful is more than just a school project—it’s about being a digital superhero.