Two years after generative AI became part of the mainstream lexicon, higher education institutions are still grappling with how to integrate it into their educational missions and campus operations.
In 2023, much of the conversation about AI centered around academic integrity and fears that students would use AI as another cheating tool. In 2024, more universities started thinking about adopting these tools, though an Inside Higher Ed survey of chief technology officers found that just 9 percent believed higher education was prepared for AI’s rise.
As the technology continues to evolve at a rapid pace, no one knows for sure how AI will influence higher education in 2025. But several experts offered Inside Higher Ed their predictions—and some guidance—for how colleges and universities will have to navigate AI’s potential in the new year.
AI has specific implications for teaching, learning, research, governance, educational technology, equity and access, but one commonality emerged across all of the responses: Ignoring AI isn’t an option for colleges and universities in 2025.
The following predictions have been edited for length and clarity.
‘AI as Critical Infrastructure’
Ravi Pendse, vice president for information technology and chief information officer at the University of Michigan
Twenty twenty-five will be the year when higher education finally accepts that AI is here to stay.
Next year and beyond, we will see AI impact almost every aspect of university life. What will that look like? Every student who graduates from a higher ed institution should have at least one core course in AI or significant exposure to AI tools. We will be doing a disservice to our students if we do not provide opportunities to acquire these skill sets. We will see personalized AI assistants and adaptive AI tutors augmenting classroom instruction. These tutors will not only improve educational outcomes, but their ability to respond to questions 24 hours a day will likely improve the mental health of our students by reducing stress.
In 2025, university administrators must start thinking of AI as critical infrastructure and identify resources to invest in it. Enterprise AI services will augment procurement teams, aid in HR practices and improve travel booking. AI will streamline and potentially help with perceived bias in admissions processing and financial aid disbursement. AI will help enhance funded research by streamlining different research administration functions such as grant identification, grant submissions and post-award processes.
Rejecting AI would be like rejecting the internet 50-plus years ago. As that realization grows, I believe that 2025 will be the year most universities finally come to terms with AI on both a policy and a pedagogical level. Any institution that chooses to ignore AI will likely find itself struggling for relevance in the next decade.
‘A Year of Discovery’
Trey Conatser, assistant provost for teaching and learning and director of the Center for the Enhancement of Learning & Teaching at the University of Kentucky
The concern over academic integrity and disruption will certainly persist—and maybe even intensify—as AI technologies become more complex and institutions adopt more AI platforms and tools. Because of this, we’ll see more of a conversation on what it means to develop skilled and, more importantly, critical users of AI from instructors, staff and especially students. In addition to training programs and resources, we’ll need a lot of culture work, dialogic education and hands-on engagement across the full stack of higher education, embracing rather than avoiding the differences in disciplinary and professional contexts.
We’ll also likely see a shift in the attention of policy discussions.
Up to now, AI policy for teaching and learning has tended to focus on how individual students use AI in coursework and potential academic integrity violations. But with recent developments, such as courses running on AI-produced materials and interactions, we’re going to see more policy discussion about AI as an infrastructure with respect to institutional mission, ethics and values as well as regulatory standards for accreditation and government oversight.
I’ve previously suggested that if 2023 was a year of experiment, 2024 would be a year of design when it comes to integrating AI into curricula and operations. I’d like to think that 2025 will be a year of discovery. Intentional and critical designs will open up new possibilities for the mission and work of our disciplines and institutions.
Asking the ‘Hard Questions’ in 2025
Katalin Wargo, director of academic innovation and pedagogical partnerships at the Studio for Teaching & Learning Innovation at William & Mary
Rather than providing clear-cut solutions, 2025 is likely to bring about more questions.
Across higher education, we are starting to have more intentional and nuanced conversations about what developments in AI mean for colleges and universities in the long term and, by extension, [for] the students we serve. The strength of these conversations will come from asking the right questions to inform institutional AI use and policy.
At William & Mary, we see this as an opportunity to critically examine both the promises and perils of AI. What does AI promise related to promoting equity at our institution? What are the threats to equity that we need to be aware of and actively guard against? How can we ensure equity in access to powerful AI tools that can support learners and prepare them for future careers while simultaneously navigating the inequities that AI may perpetuate? How can we be good stewards of the environment by promoting responsible use? How does our institution define responsible use?
In 2025, I anticipate we will do what higher education is so well attuned to do, and that is continuing to ask the hard questions.
‘Potential and Problems’
José Antonio Bowen, senior fellow at the American Association of Colleges and Universities
Like any new technology, AI comes with both potential and problems. If faculty take a passive approach, we could end up with more cheating (and less learning), less student contact (as AI chat bots replace human contact and AI modules do more teaching), and more inequity (especially as the job market continues to swing to those able to use AI effectively and AI continues to push out interns).
Lack of safeguards and inappropriate use have already led to tragedy, and neither “just say no” nor “figure it out on your own” will suffice.
Used well, AI could reduce inequity and bias. Humans might struggle not to lower grades on papers that disagree with us or lean toward candidates who share a methodological approach, but AI can simply be instructed to correct for known biases. AI also makes it much easier to personalize assignments and support. The key here is recognizing where AI might be able to support faculty work (analysis of learning and accreditation reports, for instance) to create more time for the human relationships with students.
‘Leveraging AI’
Mark McCormack, senior director of research and insights at Educause
As we look toward 2025, I anticipate that conversations about AI and ed-tech tools in higher education will shift toward exploring the ethical use and the meaningful, measurable impact of these technologies.
AI technologies are rapidly expanding their role in the classroom, with the potential to transform teaching and learning and the student experience in profound ways—both positive and negative. As we navigate these expanding roles, higher education institutions will need to regularly update their guidelines and policies to align with larger societal expectations and regulations around the use of these technologies.
Perhaps even more important, and more challenging, institutions will need to wrestle at a deeper level with what these technologies mean for how we think about education, and will need to be proactive in aligning their adoption with the institution’s mission and goals. Institutions will also need to help their staff, faculty and students become more mindful and responsible users by providing digital literacy training, robust resources and innovative strategies for leveraging AI to enhance pedagogy and improve student outcomes.
‘Maximizing the Value’
Claire L. Brady, president of Glass Half Full Consulting, LLC
Institutions will focus on maximizing the value of their AI investments, using data-driven insights to refine their applications and ensure measurable outcomes in teaching, learning and student success.
The focus will not only be on how AI enhances operations, but also on how it creates equitable and personalized educational experiences. AI-powered tools will play a critical role in tailoring academic pathways, identifying at-risk students earlier and fostering more inclusive environments that meet the diverse needs of learners. The most successful institutions will be those that create frameworks for AI integration that enhance rather than replace the human elements of education, turning what was once seen as a threat to academic integrity into a catalyst for more personalized, engaging and effective learning experiences.
At the same time, accountability and ethical governance will take center stage. With widespread integration of AI into decision-making processes, institutions will grapple with questions of transparency, bias mitigation and long-term sustainability. Stakeholders—including students, faculty and policymakers—will demand clarity around how AI systems function, how data is used and how these tools align with institutional missions. Regulatory frameworks and industry standards will emerge to guide responsible AI usage, ensuring that it enhances—not erodes—academic integrity, student privacy and institutional trust.
Twenty twenty-five will be a year of recalibrating expectations, proving impact and ensuring that AI in higher education is leveraged thoughtfully, inclusively and sustainably.
Improved Health Care Education, Patient Outcomes
Elisabeth McGee, senior director of clinical learning and innovation at the University of St. Augustine for Health Sciences
In 2025, discussions around AI tools in higher education will focus on their transformative potential to empower students to leverage technology-enhanced approaches that contribute to improved health care outcomes.
These innovations will elevate the learning experience, equipping students with the knowledge and confidence needed to excel in evolving health care roles. At the University of St. Augustine for Health Sciences, we are leading this shift by incorporating AI-powered simulations and chat bots, virtual reality, and remote patient monitoring technologies to enhance training and strengthen clinical readiness.