Universities across the country are grappling with a fundamental question: how do you prepare students for a workforce increasingly shaped by artificial intelligence while maintaining academic integrity? At the University of North Carolina at Chapel Hill, that challenge falls to Dana Riger, the institution’s inaugural generative artificial intelligence faculty fellow—a role that positions her at the intersection of cutting-edge technology and traditional pedagogy.
Riger, a clinical associate professor in UNC’s School of Education who specializes in human development and family science, has spent the past 16 months helping faculty navigate AI integration in higher education. Since taking on the fellowship in May 2024, she has become the university’s go-to expert for translating technical AI capabilities into practical classroom applications, conducting 35 custom workshops across nearly every school and college on campus.
Her work reflects a broader shift in higher education, where institutions must balance the revolutionary potential of AI tools with concerns about academic dishonesty, critical thinking skills, and authentic learning. Rather than advocating for wholesale adoption or blanket prohibition, Riger focuses on empowering educators to make informed decisions about when and how to incorporate AI into their teaching methods.
This approach has particular relevance for business leaders and HR professionals, as universities increasingly serve as testing grounds for AI literacy programs that will shape the next generation of workers. The strategies and frameworks being developed in academic settings today will likely influence how organizations approach AI training and integration in professional environments.
Central to this work is helping faculty build confidence in their own decision-making around artificial intelligence, since different disciplines and learning objectives may require vastly different AI strategies.
For instance, a creative writing professor might choose to ban AI tools entirely to preserve authentic voice development, while an engineering instructor might integrate AI coding assistants to mirror real-world professional practices. A business ethics course might use AI-generated case studies as teaching materials while prohibiting AI use in student assignments.
“Faculty should have agency and be confident in AI decision-making,” Riger explains. Her role centers on “empowering them to make informed choices,” whether that involves full integration, complete avoidance, or a hybrid approach that selectively incorporates AI tools.
This philosophy extends beyond individual classroom decisions to institutional policy-making. Rather than implementing blanket AI policies, UNC allows departments and individual instructors to develop approaches aligned with their specific learning outcomes and professional standards.
Digital literacy has become a core learning outcome at UNC, reflecting the university’s recognition that graduates will encounter AI systems in virtually every professional field. However, this preparation goes beyond simply teaching students how to use ChatGPT or similar tools.
In healthcare programs, students learn to evaluate AI diagnostic recommendations critically while understanding the technology’s limitations. Business students practice using AI for market research and data analysis while developing skills to verify and contextualize AI-generated insights. Education majors explore how AI tutoring systems work while maintaining focus on human relationship-building in teaching.
“When I think about my responsibility to my students, I think about preparing them to feel confident and competent in whatever professional roles they take on,” Riger notes. This preparation includes understanding AI capabilities, recognizing potential biases, and knowing when human judgment should override algorithmic suggestions.
Importantly, this education also includes learning when not to use AI. Students in fields requiring high levels of human empathy, creative problem-solving, or ethical reasoning learn to identify situations where AI tools might compromise professional standards or authentic human connection.
Riger’s workshop approach demonstrates the importance of context-specific AI training. Rather than offering generic seminars, she researches AI applications within each faculty member’s specific field and provides discipline-relevant examples of AI capabilities and limitations.
For history professors, workshops might focus on using AI for research assistance while maintaining rigorous source verification standards. Chemistry faculty explore AI applications in molecular modeling and research data analysis. Literature instructors examine AI’s creative writing capabilities while developing assignments that emphasize uniquely human analytical skills.
When faculty request custom sessions, Riger asks them to share specific assignments or assessments where they’re experiencing challenges with AI misuse. She then redesigns these materials as case studies, demonstrating how learning objectives can be preserved while either incorporating AI tools or making assignments more resistant to AI shortcuts.
Beyond one-off custom sessions, Riger also leads multiday institutes, including specialized workshops on AI and assessment, that offer deep dives into practical implementation strategies. Faculty leave with concrete tools and frameworks rather than purely theoretical knowledge, enabling immediate application in their courses.
A consistent theme in Riger’s faculty interactions involves identifying and preserving uniquely human elements of education and research. As AI systems become more capable at information processing and content generation, educators are refocusing on skills that remain distinctively human.
Faculty frequently ask how AI might streamline administrative tasks—grading routine assignments, generating quiz questions, or organizing research materials—to create more time for mentoring, complex problem-solving guidance, and relationship-building with students.
In research contexts, professors explore using AI for literature reviews, data pattern identification, and hypothesis generation while maintaining human oversight for experimental design, ethical considerations, and interpretation of results. This approach allows researchers to accelerate certain processes while preserving critical thinking and creative insight.
Classroom applications focus on using AI to create more engaging learning experiences. Instructors use AI to generate diverse case studies, create interactive simulations, or develop personalized learning materials while ensuring that student-teacher interaction and collaborative learning remain central to the educational experience.
Given the rapid pace of AI development, Riger emphasizes building ethical frameworks that can adapt to technological changes rather than creating rigid rules that quickly become obsolete. This approach acknowledges that specific AI tools and capabilities will continue evolving, but fundamental values around fairness, transparency, and educational integrity should remain constant.
Her framework includes principles like ensuring student understanding of when and how AI is being used in their education, maintaining transparency about AI assistance in grading or feedback, and preserving opportunities for students to demonstrate authentic learning and original thinking.
The concept of “AI resistance” in assignments doesn’t mean making tasks artificially difficult, but rather designing assessments that require skills AI cannot replicate—such as personal reflection, real-world application of knowledge, or integration of lived experience with academic concepts.
Riger also emphasizes the importance of “grace” in this transition period, recognizing that both faculty and students are navigating unprecedented technological change. This includes creating environments where mistakes and learning curves are expected rather than penalized.
UNC’s approach offers valuable insights for organizations developing their own AI training and integration strategies. The emphasis on discipline-specific applications, ethical frameworks, and human-AI collaboration mirrors challenges that businesses face when implementing AI tools across different departments and functions.
The university’s focus on preparing students for professional AI realities also suggests that incoming employees will increasingly expect workplaces to have thoughtful AI policies and training programs. Organizations that proactively develop these capabilities may have advantages in both recruitment and productivity.
As Riger expands her workshop program with ten additional sessions planned for this fall, her work represents a microcosm of higher education’s broader AI transformation. The frameworks and strategies being developed at UNC and similar institutions will likely influence how entire industries approach AI integration, making her role a bellwether for broader shifts in human-AI collaboration.
The university’s measured, thoughtful approach to AI integration—emphasizing informed choice over mandated adoption—may serve as a model for other institutions grappling with similar challenges. As AI capabilities continue expanding, the principles of transparency, ethical consideration, and focus on uniquely human skills will likely remain relevant across educational and professional contexts.