How revolutionary ed-tech creates new challenges for campus leaders
The COVID pandemic has served as the catalyst for a radical transformation of traditional higher-education systems.
And the transformation will continue even after the pandemic ends, since demographic factors were already driving a shift toward online education. COVID merely accelerated the process.
Consider that some of the most populous countries in the world also boast some of the highest rates of economic growth and upward social mobility: there are more people with more disposable income than ever before. That means demand for higher education will rise, and could even grow beyond the ability of ‘physical’ infrastructure to accommodate it.
Millions of students will enroll in schools and universities, creating a surge that traditional teaching systems will simply not be able to manage. And it’s not just a matter of physical space. It’s one of quality: if the instructor-student ratio increases, the quality of education inevitably suffers.
Universities will need to adopt radical solutions just to keep their current enrollment rates, maintain their funding and reputation, and attract the best students. Ed-tech and artificial intelligence offer the most cost-efficient solution, facilitating online courses and providing more effective forms of learning along with a variety of tools to support quality and fairness in the classroom.
An ethical advantage
Ed-tech’s AI paradigm for education involves solutions that largely track students’ progress while offering them personalized or recommended study paths to improve retention—as well as helping them review, better comprehend or simply deepen their knowledge of a subject.
Through tools like chatbots that can instantly answer students’ questions, personalized help is available at every stage of their learning path. From an ethical standpoint, ed-tech conceived and implemented in this way presents an advantage over traditional education: it works toward a democratization of knowledge, making it possible for every individual on the planet—regardless of gender, cultural background or skin color—to pursue advanced education.
Some companies, such as Civitas Learning, are already offering artificial intelligence programs that help students map out their university paths by planning for exams, offering enhanced tutoring and organizing study habits.
Instructors can also benefit from cutting-edge AI tools. Many universities have adopted Gradescope, a technology that allows them to automate the evaluations of exams, assign scores and correct errors.
The ethical advantage—apart from the ‘egalitarian’ approach inherent in an algorithm—is the extra time professors can devote to their students’ progress; time that would otherwise have been consumed by grading, with all the bias that implies, including fatigue and personality conflicts.
Ed-tech can also help reduce bias, because AI tools can be designed to account for the prejudicial attitudes and behaviors of human beings.
For example, humans can discriminate against job candidates simply by looking at their names when deciding whom to invite for an interview. A U.S. study showed that applicants with European names had a better chance of being invited than those with African-American names. Unless an algorithm is designed to avoid this specific issue, the same bias would appear if the choice were made by an AI system.
AI-enabled ed-tech can also exacerbate problems related to student privacy, or biases not necessarily tied to gender or race. Biased programmers could incorporate corrupt data into algorithms, favoring certain types of students, reducing the diversity of data and compromising security.
In predictive analytical models, such as those used in AI, there are often implicit biases deriving from human error. Algorithms can often obscure the needs or requirements of particular groups of students.
This is why Georgia State University, a leader in the use of predictive analytics, excludes addresses and any other non-behavioral or immutable dimensions from its predictive models. It is important that institutions monitor these algorithm inputs to reduce the amount of human bias present and to ensure fair access for all students. That fairness extends to limiting information gathering.
Simply put, there is some information that students may not want disclosed without permission, such as health problems, financial history or similarly sensitive subjects.
Bias against artificial intelligence
From the humanistic point of view, intelligence—whether artificial or not—raises legal, ethical and philosophical questions that may contribute to an overall human bias against AI. At the most basic level, the main challenge of AI is to gain popular acceptance.
There is a natural human tendency to frame AI as a competition between man and machine. It’s therefore crucial to present this technology as a tool designed and implemented to augment human abilities—rather than to perform a task or a job better than a person can.
In fact, some suggest changing the semantics altogether: instead of artificial intelligence, “augmented intelligence” might gain wider acceptance, and therefore more use and development. Such reframing could make skeptics who see AI as the path toward a dystopian future more open to it.
Artificial intelligence has opened opportunities that are exciting, yet largely unexplored. And as in all areas of human endeavor that become the object of intense inquiry, there is a tendency to rush the ‘adoption and adaptation’ of these new realms.
Carrie Purcell is co-founder and chief of partnerships and strategy for Tech AdaptiKa.