Is FOMO driving your AI strategy?
A panel on artificial intelligence, ethics and trust made a big splash at the recent CPRS Elevate2025 conference in Banff. The session was packed, people listened raptly to the panelists and the audience had multiple questions. Clearly, AI is keeping people in PR up at night.
I most recently worked in the post-secondary sector, and AI is rapidly transforming Canadian higher education, introducing both opportunities and challenges. From generative AI tools like ChatGPT enhancing?/cheapening? student learning to agentic AI systems automating administrative tasks, institutions are embracing these technologies. However, the integration of AI also necessitates thoughtful consideration of ethical, legal, and pedagogical implications. And – here’s an em dash to make you wonder if I wrote this with AI (I did not) – is higher ed barreling towards AI because it’s a good idea or because we’re scared of being left behind?
Generative AI: Enhancing Learning and Teaching
First, let’s get the obvious out of the way: generative AI tools have become integral in Canadian classrooms. Institutions such as Simon Fraser University and McMaster University have implemented AI-powered virtual laboratories and chatbots to facilitate remote learning and provide instant student support. These technologies offer personalized learning experiences, giving students access to resources tailored to their needs. However, a 2024 KPMG survey revealed that while 59% of Canadian students use generative AI for their coursework, two-thirds expressed concerns about reduced knowledge retention and learning quality. Does AI learning really lead to critical thinking?
Agentic AI: Automating Administrative Functions
At the CPRS panel, the key takeaway was that agentic AI was the major up-and-coming technology to prepare for. Agentic AI refers to autonomous systems capable of performing tasks without human intervention. As panelist Dr. Alex Sevigny put it, this is a technology that can perceive, reason, and then act. In higher education, these systems are being explored for automating administrative processes such as admissions, scheduling, and resource allocation. While promising efficiency gains, the deployment of agentic AI raises questions about data privacy, algorithmic bias, and the potential dehumanization of educational services. Institutions need to navigate these concerns to ensure that automation enhances rather than undermines the educational experience.
Policy and Legislative Landscape
Higher ed is going all-in on AI: the rapid adoption of AI in education has outpaced the development of comprehensive policies. A 2023 survey by D2L found that only 13% of Canadian post-secondary institutions had established regulations or guidelines for generative AI use. This gap underscores the need for institutions to proactively develop frameworks that address academic integrity, data privacy, and ethical considerations.
At the federal level, Canada has initiated legislative measures to regulate AI. The Digital Charter Implementation Act (Bill C-27), introduced in 2022, includes the Artificial Intelligence and Data Act (AIDA), aiming to establish a legal framework for AI development and deployment. In 2023, the government released a Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative AI Systems, providing interim guidance for organizations until AIDA is enacted. And newly elected Prime Minister Mark Carney has appointed Evan Solomon as the Minister of Artificial Intelligence and Digital Innovation. Of note, Solomon seems intent on accelerating the adoption of AI, with less interest in caution or regulation.
Ontario has taken an important step by passing Bill 194 in November 2024, which amends the Freedom of Information and Protection of Privacy Act (FIPPA) and introduces the Enhancing Digital Security and Trust Act (EDSTA). This legislation establishes requirements for the responsible use of AI systems by public-sector institutions, including school boards, and acknowledges the sensitive nature of data involving minors.
Institutional Responses and Ethical Considerations
Canadian higher education institutions are beginning to develop policies and guidelines to govern AI use. OCAD University, for instance, has issued recommendations emphasizing critical AI literacy, ethical engagement, and transparency in AI tool usage. Similarly, the Université de Montréal has implemented guidelines that require instructors to specify their stance on AI tools in course syllabi and to obtain consent before uploading student materials to AI systems. Others are adding AI strategy advisor roles, generally reporting to the Provost. For some excellent examples of institutional AI policy, check out Arizona State University, Harvard and Yale.
Ethical considerations are central to these developments. Frameworks like the Human-Driven AI in Higher Education (HD-AIHED) model advocate for AI systems that enhance human capabilities rather than replace them, ensuring that AI adoption aligns with ethical standards and promotes equity and transparency.
Next steps?
As I sat in the ballroom, listening to the panel, I was struck not just by AI's obvious and tremendous potential, but by a growing concern about what is driving decisions on AI adoption. As institutions rush to use AI, I am deeply concerned that we are making decisions based on what has been coined AI FOMO – Fear of Missing Out. That's not a comforting thought.
If universities and colleges are rushing to implement AI simply because they fear their competitors will get there first, they are not just panicked – they are setting themselves up for poorly planned initiatives, wasted resources, and reputational risk. Here's what they can do:
Develop a Clear AI Vision and Strategy
Before diving into AI, leadership must develop a clear vision of what AI can (and can’t) do for their organization. A well-defined strategy should outline the goals, the required resources, and the timeline for AI integration.
Foster a Culture of Lifelong Learning
The AI landscape is constantly changing, and so are the skills required to harness its potential. Alleviate AI FOMO by providing training and upskilling opportunities for employees to stay ahead of the latest AI trends and technologies.
Focus on Data Integrity and User Needs
Garbage in…garbage out. AI systems are only as good as the data they are trained on. Institutions must ensure that data integrity is maintained and that AI models are updated with evolving stakeholder needs.
Prioritize Ethical AI Use
As AI becomes more integrated into institutional processes, leaders should prioritize the development of AI that is fair and transparent and that respects privacy. This will not only reduce reputational risk but also build trust with students, employees, and community members.
Collaborate and Form Strategic Partnerships
No organization has all the answers when it comes to AI. By collaborating with other higher education institutions, industry partners and government, universities and colleges can share knowledge, resources, and risks – and help spread the cost and effort required to develop AI solutions.
The integration of generative and agentic AI into Canadian higher education presents significant opportunities to enhance learning and administrative efficiency. However, it also calls for the careful examination of ethical, legal, and pedagogical implications – without branding those with concerns as Luddites.
By developing comprehensive policies, fostering critical AI literacy, and ensuring that AI systems are implemented ethically, Canadian post-secondary institutions can harness the benefits of AI without making decisions based on FOMO.