Keeping humans at the center of edtech is the top suggestion in the federal government’s first stab at helping colleges determine how they should teach with AI. With technology like ChatGPT advancing with lightning speed, the Department of Education is sharing ideas on the opportunities and risks for AI in teaching, learning, research, and assessment.
Enabling new forms of interaction between educators and students and more effectively personalizing learning are among the potential benefits of AI, the agency says in its recent report, “Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.” But the risks include a range of safety and privacy concerns and algorithmic bias. To mitigate them, the department strongly emphasizes keeping humans in the driver’s seat.
“We envision a technology-enhanced future more like an electric bike and less like robot vacuums,” reads the report. “On an electric bike, the human is fully aware and fully in control, but their burden is less, and their effort is multiplied by a complementary technological enhancement.”
Educators and policymakers should collaborate on the following principles:
- Emphasize humans-in-the-loop: As students and teachers begin interacting with chatbots to help with coursework and plan personalized instruction, teachers must stay abreast of safety precautions and be ready to step in when things go astray. Keeping fellow educators in the loop is a vital way to remain vigilant. Additionally, teachers must guard against becoming so reliant on AI that their own judgment atrophies. AI is known to commit errors and make up “facts,” so teachers must scrutinize AI outputs to flag errors.
- Align AI models to a shared vision for education: The educational needs of students should be at the forefront of AI policies. “We especially call upon leaders to avoid romancing the magic of AI or only focusing on promising applications or outcomes, but instead to interrogate with a critical eye how AI-enabled systems and tools function in the educational environment,” the Department of Education says.
- Design AI using modern learning principles: The first wave of adaptive edtech incorporated important principles such as sequencing instruction and giving students feedback. However, these systems were often deficit-based, focusing on the student’s weakest areas. “We must harness AI’s ability to sense and build upon learner strengths,” the Department of Education asserts.
- Prioritize strengthening trust: There are concerns that AI will replace—rather than assist—teachers. Educators, students and their families need to be supported as they build trust in edtech. Otherwise, lingering distrust of AI could distract from innovation in tech-enabled teaching and learning.
- Inform and involve educators: Another concern is that AI will lead to a loss of respect for educators and their skills just as the nation is experiencing teacher shortages and declining interest in the profession. To show teachers they are valued, schools and developers must involve them in designing, developing, testing, improving, adopting, and managing AI-enabled edtech.
- Focus R&D on addressing context and enhancing trust and safety: Edtech developers should focus design efforts on “the long tail of learning variability” to ensure large populations of students will benefit from AI’s ability to customize learning.
- Develop education-specific guidelines and guardrails: Data privacy laws should be reviewed and updated in the context of advancing educational technology. The Individuals with Disabilities Education Act (IDEA) is one candidate for reevaluation as new accessibility technologies emerge.