After I used the first Copilot-generated Mixed Emotions image for a LinkedIn article, Copilot asked, “Would you like to add thunderbolts to this image?” Yes. Yes I would, and I may add thunderbolts to every image I generate from now on! The article below was originally published on LinkedIn on December 17, 2025.
As the conversation about generative artificial intelligence in education continues apace, my colleague Rick Robinson and I remain struck by the contrast drawn between faculty who refuse to embrace genAI, reverting to “old school methods,” and those who adopt it enthusiastically. Our conversations with faculty from diverse disciplines at conferences and in hallways reveal a more complicated response than what we often see represented in the academic press. We don’t see opposing camps. Instead, we have discovered that concerns about and resistance to using generative artificial intelligence in education develop with familiarity and use. Faculty are often torn, voicing both enthusiasm for genAI’s power and caution about its implications. Our dialogues with faculty reveal that generative AI is seen as a tool for augmenting teaching and learning, yet one that raises pressing questions about pedagogy, critical literacy, expertise, and the human future.
The Promise of Generative AI
Faculty approach generative AI from the standpoint of learning outcomes. In architecture, for example, AI tools work as part of design teams by generating preliminary designs, freeing instructor and student time to focus on the value-added elements of built structures, such as inclusive design. In healthcare education, AI‑generated scripts help nursing students practice delivering negative diagnoses to individuals who do not speak English as their first language, supporting the development of culturally appropriate, empathic communication skills. Creative writing instructors use AI to produce “wild, silly stuff” that can spark imaginative ideas, while computer programming faculty acknowledge genAI’s ability to generate workable code that students can diagnose, critique, and learn from.
To achieve learning outcomes, AI serves as a focal point for fostering engagement in the learning process. Faculty from different disciplines have adopted a similar pedagogical approach: break students into groups, have each group pose subtly different queries, and then compare the outputs, highlighting the importance of the question/prompt itself and introducing the concept of prompt engineering. Others use AI as an interlocutor that challenges student-generated arguments, prompting students to refine their reasoning. Still others ask students to critique AI outputs so that students recognize that the tool is not an oracle but a complex mathematical system with serious limitations. In these ways, faculty see AI content as raw material that can enhance engagement in the learning process while supporting the achievement of learning outcomes, such as critical thinking.
Persistent Concerns
Adopting and implementing these new learning approaches gives rise to deep concerns and reservations. Chief among them is the gap in students’ critical literacy and digital literacy. Many students lack the foundational knowledge necessary to detect AI’s errors or misrepresentations, making it easy to parrot outputs uncritically. There is also an implicit trust in the technology that raises questions about whether students can evaluate information with a discerning eye. When AI can perform some tasks as well as (or better than) some human beings, there is a fear that students will lose an integrative understanding of whole concepts and of how concepts within a discipline are connected.
If AI can perform certain tasks once considered foundational, should students still learn them “the traditional way”? Faculty worry that when “traditional methods” becomes a pejorative, untested teaching methods will be embraced simply because they are “innovative,” leading to the decline of judgment, creativity, and disciplinary rigor. Concerns about bias, inequality, and creativity further complicate faculty members’ comfort with their own use of these tools. Faculty are attuned to the reality that technologies often exacerbate existing inequities, and they are concerned that using AI will lead to a more unequal world. A preeminent concern is that when genAI is introduced, the focus of the learning process shifts from the disciplinary subject matter to the AI, and the disciplinary learning outcome is often lost or overshadowed.
Another lingering question concerns what is lost in the adoption of these new tools. History offers a glimpse of a possible answer. When calculators were introduced, they made higher levels of mathematics accessible and have become ubiquitous, but students’ mental math skills have declined. Likewise, when cursive writing was displaced by digital notetaking, so too were the deep learning, memory and comprehension, fine motor skills, and the slower, more deliberate shaping of thoughts that handwriting supported. Digital handwriting with styluses on tablets shows promise for combining the benefits of analog and digital notetaking, but many schools reduced or cut handwriting instruction, leaving a generation of students without handwriting muscle memory because they never mastered it. Now that real-time phone conversations have largely been displaced by text-based communication, a significant number of Gen Z students dread making phone calls and talking to another human being in real time.
The question of what is lost may only be answerable in the future, but we ignore it at our peril. LLMs can help learners overcome the struggle of the blank page, bypassing the creative process of ideation, but this may not be a positive development at a moment when we desperately need genuinely new ideas. Faculty who suspect that something significant is lost when students forego the creative process and avoid the creative struggle are intuitively on to something, even if they do not yet have all the data to prove how these tools undermine human learning.
Navigating Augmentation, Not Automation
The faculty we talk to neither refuse to use these tools nor embrace them wholeheartedly. They feel forced to confront them, to stand face-to-face with these artificially intelligent inventions, in part because society rushes ahead without taking a thoughtful and deliberate approach to implementing new technologies in education. Many also worry that genAI will erode authentic assessment, as they see how genAI is being deployed in industry to replace human labor. But for all their concerns, many use genAI in their teaching, seeing it as a tool for augmentation rather than automation. To draw higher-order learning from students, faculty must account for the abilities of these powerful and evolving tools while simultaneously making students conscious of the compromises being made. Implementing AI in education elevates the importance of judgment, empathy, interpersonal skills, instructional design, and assessment planning.
This was one of the most profound insights we heard: Generative AI tools are amazing, but a human being moving from ignorance to mastery is infinitely more fascinating. GenAI can augment teaching practice by increasing student engagement, but human students still need to learn how to learn, and the erosion of expertise remains, for the faculty we talk to, their biggest concern.