As artificial intelligence systems grow increasingly sophisticated, we find ourselves at a crossroads that will define the future of human agency and autonomy. While the promise of AI to enhance our lives is undeniable, we must confront an uncomfortable truth: Our increasing reliance on automated systems may be quietly eroding our capacity for independent decision-making.
Consider how we navigate our daily lives: mindlessly following GPS directions down questionable backroads or falling into social media rabbit holes. When AI systems begin making choices that align with our preferences with uncanny accuracy, both adults and young people face a compelling yet dangerous proposition: Why wrestle with complex decisions when an AI can instantly provide optimized solutions? Why learn complex problem-solving when AI can handle it? Why develop social intuition when algorithms can optimize our interactions?
The threat extends far beyond skill atrophy—it’s about the systematic surrender of human judgment to algorithmic decision-making, particularly for developing minds. While adults might have developed foundational critical thinking skills before this AI revolution, today’s youth risk having these crucial cognitive developments short-circuited by algorithmic interventions. Children growing up in an AI-saturated world may never develop certain fundamental cognitive skills that humans have relied upon for millennia. The convenience of AI threatens to become a cognitive crutch, stunting the very capabilities that define human intelligence.
We must fiercely preserve spaces for human-only decision-making, especially those demanding emotional intelligence, ethical judgment and creative thinking. Not every process needs or benefits from automation, particularly those involving nuanced human evaluation.
However, this is not an argument for rejecting AI advancement. Rather, it is a call for conscious integration of these technologies in ways that enhance rather than replace human capabilities. We need to design AI systems that serve as tools for human empowerment rather than substitutes for human thought.
Several key principles should guide this approach:
- AI systems should be designed with transparency as a core feature, allowing users to understand the basis of algorithmic recommendations and decisions.
- We must preserve spaces for human-only decision-making, particularly in domains that require emotional intelligence, ethical judgment, or creative thinking.
- Educational systems must continue to evolve and emphasize skills that AI cannot easily replicate: critical thinking, emotional intelligence, and creative problem-solving.
At Colorado College, human agency is at the center of AI adoption, and our framework emphasizes critical thinking as its cornerstone. Through a collaborative initiative between Information Technology Services and the Crown Center for Teaching, we’re building a comprehensive program to evaluate and potentially integrate AI technology across our entire college ecosystem.
This partnership focuses on educating our community about AI’s capabilities and limitations while carefully assessing its potential applications. Faculty, staff and students participate actively in this evaluation process, providing crucial feedback about their learning experiences and helping shape our approach to AI integration.
As I engage with classes across campus to discuss AI’s impact, I emphasize our responsibility to approach these technologies with extreme caution. Students are required to think critically about AI outputs, understanding these systems’ role as tools rather than arbiters of knowledge. In my discussions with students, we explore system limitations, examine potential biases and debate the ethical implications of AI deployment. We’re ensuring that every student grapples with the moral dimensions of AI while learning to use these powerful tools effectively and ethically.
This commitment is exemplified in two key initiatives: our new AI in Business curriculum, developed by Professors Lora Luis Broady and Ryan Banagale, which examines ethics, equity and societal impact, and the comprehensive AI @ CC program, which takes a holistic approach to AI education and responsible adoption.
The path forward requires this kind of thoughtful balance. We must harness AI’s potential while preserving human agency and autonomy. This means developing and implementing AI systems that amplify human capabilities rather than replace them. It requires a commitment to critical assessment, evaluating these systems not just for their efficiency but for their broader impact on human cognition and society.
The stakes could not be higher, particularly as we witness the devastating impact of AI-amplified misinformation and disinformation campaigns globally. These technologies are being weaponized to erode trust, fragment communities and manipulate public opinion at an unprecedented scale.
The rapid proliferation of synthetic media, deepfakes and AI-generated content threatens the foundations of informed democratic discourse. This technological capability to manipulate reality at scale represents one of the most serious challenges to social cohesion and democratic institutions in our time.
How we integrate AI into our lives today will determine whether future generations inherit a world where technology empowers human potential or one where that potential withers in the shadow of artificial intelligence. The choice—and the responsibility—is ours. At Colorado College, we are moving responsibly to harness AI’s benefits while protecting against its capacity to deceive, divide and replace.
As we stand on the brink of unprecedented technological advancement, we must ask ourselves: Are we creating a future where AI serves humanity, or one where humanity serves AI? The answer lies not in the technology itself, but in how we choose to develop and deploy it. Our autonomy—and that of future generations—depends on getting this right.