Bridging the AI Adoption Gap
Designing an Interactive Pedagogical Agent for Higher Education Instructors

Review of the paper: https://arxiv.org/pdf/2503.05039
Context and Problem to Solve
Imagine a teacher standing in front of a classroom, holding a powerful tool that could make their job easier, their lessons more effective, and their students more engaged. Now imagine that same teacher deciding not to use that tool, not because they don’t want better outcomes, but because they don’t trust it, don’t understand it, or aren’t sure how to use it correctly. That is where many teachers stand with Artificial Intelligence (AI) today.
AI is rapidly changing education. Tools like ChatGPT or intelligent tutoring systems can help teachers plan lessons, personalize learning for students, and automate repetitive tasks like grading. These tools are powered by Large Language Models (LLMs): AI systems trained on vast amounts of text that can generate prose and answer questions in a surprisingly human-like way.
However, despite this potential, many university teachers don’t use AI tools in their classrooms. Some are excited about it (“AI-positive”), while others are skeptical, unsure, or even afraid of its impact (“AI-conservative”). This hesitation isn’t just about learning to use new software—it’s about trust, values, and feeling in control.
So, the key problem this paper tries to solve is:
How can we design AI tools that support and empower teachers, especially those who are skeptical of AI, rather than alienating or overwhelming them?
This is not just a technical problem—it’s also a human problem. It involves understanding how teachers think, what they need, and what makes them feel confident in their teaching practices.
Methods Used in the Study
To understand and address this problem, the researchers didn’t jump straight into building an AI tool. Instead, they used a human-centered design approach. This means they started by listening to the people who would use the tool: teachers and teaching experts.
Here are the steps they followed:
1. Interviews with Pedagogy Experts
They interviewed 5 pedagogy experts—people who specialize in teaching methods and help instructors improve their teaching. The goal was to understand:
How these experts currently support teachers
What challenges teachers face in adopting new tools
What kinds of advice or support they find most helpful
2. Participatory Design Session
Then, they ran a design workshop with 10 more pedagogy experts. In this session:
Experts were shown a storyboard of a prototype chatbot that could help teachers with teaching advice.
The chatbot could interact with teachers, offering ideas based on their questions (e.g., “How do I help students who are struggling with participation?”).
Experts gave feedback on how realistic and helpful the chatbot seemed.
3. Testing AI-Generated Suggestions
The researchers also showed participants teaching suggestions written by an AI and asked them to evaluate their usefulness, accuracy, and relevance. This helped them understand the strengths and weaknesses of what AI currently produces.
The idea was not to build the final tool yet, but to gather insights about what such a tool would need to look and feel like to be truly helpful and trustworthy.
Key Results of the Study
The study surfaced several findings that explain why many teachers hesitate to use AI, and point to ways we might help them overcome that hesitation.
1. Teachers Need to Feel in Control
Some instructors are worried that AI will take over their teaching or suggest things that don’t fit their style. The study found that teachers want to keep their autonomy—they don’t want to be told what to do by a machine.
💡 Insight: Tools must be assistants, not bosses. Teachers should be able to choose how much they rely on the AI.
2. Trust is Critical
AI-conservative teachers often worry that the AI:
Gives wrong or shallow advice
Doesn’t understand the classroom context
Is just another tech fad that won’t last
To earn trust, AI tools must (see the sketch after this list):
Be transparent about how they generate advice
Show where the information comes from
Offer suggestions that are grounded in pedagogical best practices
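The paper states these as design requirements rather than an implementation, but one way to picture "transparent, grounded advice" is to make provenance a required field of every suggestion. The Python sketch below is a hypothetical illustration: the Suggestion class, the grounding table, and the example citations are my assumptions, not the authors' design.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    advice: str      # the teaching suggestion shown to the instructor
    source: str      # where the advice is grounded (book, study, guideline)
    rationale: str   # a plain-language explanation of why it was offered

# Hypothetical grounding table; a real tool would use a curated corpus
# of pedagogical literature, not a hard-coded dict.
PEDAGOGY_SOURCES = {
    "think-pair-share": "Lyman (1981), cooperative discussion techniques",
    "retrieval practice": "Roediger & Karpicke (2006), test-enhanced learning",
}

def make_suggestion(technique: str, advice: str, rationale: str) -> Suggestion:
    """Refuse to emit advice that cannot be traced to a known source."""
    source = PEDAGOGY_SOURCES.get(technique)
    if source is None:
        raise ValueError(f"no grounding found for technique: {technique!r}")
    return Suggestion(advice, source, rationale)

s = make_suggestion(
    "think-pair-share",
    "Pose a question, give a minute of silent thinking, then pair students up.",
    "Matches your stated goal of raising participation in a large lecture.",
)
print(f"{s.advice}\n  Source: {s.source}\n  Why: {s.rationale}")
```

The design point is the ValueError: an assistant that would rather say nothing than give unsourced advice speaks directly to the "wrong or shallow advice" worry.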
3. Peer Validation Helps
One surprising result was the importance of social transparency. Teachers are more likely to try a tool if they know other teachers like them are using it and getting good results.
This is a bit like trying a new restaurant because your friend recommends it—peer approval builds confidence.
4. Suggestions Must Be Personalized
AI suggestions must feel relevant. A teacher with 20 years of experience needs different advice than a first-year teacher. The AI needs to adapt to:
Teaching experience
Subject area
Class size and setting
Specific student needs
The experts found that many AI suggestions were too generic or didn’t consider context.
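Again, the paper reports this as a requirement rather than code, but a minimal sketch makes the idea concrete: bundle the teacher's context into a profile and prepend it to every question sent to the model. The TeacherProfile fields and the build_prompt function below are illustrative assumptions, not the study's implementation.

```python
from dataclasses import dataclass

@dataclass
class TeacherProfile:
    years_experience: int
    subject: str
    class_size: int
    setting: str         # e.g. "lecture hall", "seminar", "online"
    student_needs: str   # free-text notes, e.g. "participation is low"

def build_prompt(profile: TeacherProfile, question: str) -> str:
    """Prefix the teacher's question with context so an LLM can tailor its advice."""
    return (
        f"You are advising a {profile.subject} instructor with "
        f"{profile.years_experience} years of experience, teaching "
        f"{profile.class_size} students in a {profile.setting}. "
        f"Known student needs: {profile.student_needs}.\n"
        f"Question: {question}\n"
        "Ground your advice in established pedagogy and cite sources."
    )

veteran = TeacherProfile(20, "organic chemistry", 150, "lecture hall",
                         "students rarely ask questions")
print(build_prompt(veteran, "How do I raise participation?"))
```

A first-year seminar instructor asking the same question would produce a very different prompt, which is exactly the point the experts raised.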
5. Balancing Simplicity and Control
Some teachers want a tool that’s simple and easy to use, while others want something more customizable. The tool must offer both (a sketch of this pattern follows the list):
Quick, simple help for beginners
Deeper features for experienced users
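One common way to honor both demands is progressive disclosure: a zero-configuration entry point for beginners, and the same engine with its knobs exposed for experienced users. This is my reading of the finding, not the paper's implementation; every name in the sketch is hypothetical.

```python
def suggest(question: str, *, tone: str = "practical",
            detail: str = "brief", n_options: int = 1) -> list[str]:
    """Stand-in for the suggestion engine; the parameters are assumptions."""
    return [f"[{tone}, {detail}] Suggestion {i + 1} for: {question}"
            for i in range(n_options)]

def quick_help(question: str) -> str:
    """Beginner mode: sensible defaults, one answer, no knobs."""
    return suggest(question)[0]

def advanced_help(question: str, **options) -> list[str]:
    """Expert mode: the same engine with every knob exposed."""
    return suggest(question, **options)

print(quick_help("How do I help students who are struggling with participation?"))
print(advanced_help("How do I design a rubric?",
                    tone="evidence-based", detail="detailed", n_options=3))
```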
Conclusions and Main Implications
This study shows that building AI for education isn’t just about making a smart machine—it’s about making a smart partner for teachers.
Here’s what we learned:
✅ AI tools must be:
Trustworthy: Explain what they do and why.
Respectful of autonomy: Let teachers stay in control.
Socially validated: Show that other teachers are using and benefiting from them.
Personalized: Adapt to the teacher’s context and experience.
❗ If we fail to design tools this way:
We risk widening the gap between AI-positive and AI-conservative teachers.
Students taught by conservative instructors might miss out on helpful innovations.
We could end up with unequal educational quality across classrooms.
The takeaway? Good design can bridge the AI adoption gap.