Teaching in the Age of AI

Olaf Maennel

What is a university for, when AI can pass our exams, write our code, and generate our essays? Generative AI has disrupted every assumption about knowledge transfer in higher education. The question facing educators is no longer “how do we deliver content efficiently?” — it is “what do we develop in our graduates that AI cannot replace?”

This is a personal reflection grounded in over twenty years of teaching computer science and cyber security across four countries — Germany, the United Kingdom, Estonia, and Australia — and in experiences ranging from building NATO cyber defence exercises to designing system-of-systems cyber ranges. It is not an abstract position paper. It is shaped by what I have seen work, what I have seen fail, and what I believe education must become.

The Digital Divide Redefined

“The digital divide is no longer about access to technology. It is about the gap between people who master AI and people who are mastered by it.”

The first digital divide was about infrastructure access. The second was about digital literacy. The third — the one we face now — is about what I would call cognitive sovereignty: the ability to think independently in a world saturated with AI-generated outputs. A graduate who merely relays AI output adds no value — their employer does not need them if AI can do the job alone. But a graduate who works with AI, who recognises its limitations, who detects when it is wrong, who understands that models can be poisoned or biased, and who applies critical judgment to what AI produces — that graduate will be indispensable.

Clare Graves’ developmental theory (Graves, C. W. (1970). “Levels of Existence: an Open System Theory of Values.” Journal of Humanistic Psychology, 10(2), 131–155; popularised by Beck & Cowan (1996) as Spiral Dynamics) provides a lens for understanding this divide. It maps how individuals and societies engage with complexity at different developmental levels — and today, with AI.

Beige/Purple — At the survival and tribal levels, populations are completely overwhelmed by AI — no capacity to evaluate, no framework to resist. These are the most vulnerable to AI-driven manipulation.

Red — At the power-driven level, AI is deployed at all costs for competitive or military advantage. No ethical reflection. Face recognition that works eighty percent of the time — but not a hundred — arrests the wrong people. Targeting systems select the wrong targets. Speed and power over accuracy and ethics.

Blue — At the rules-and-order level, we see calls for AI safety regulation, frameworks, compliance. Necessary — but rules without understanding produce rigid compliance, not genuine critical thinking.

Orange — At the achievement level — where most countries and businesses operate today — AI is a competitive advantage, an efficiency multiplier. One software engineer can now do the work of twenty. But what happens to the other nineteen? This is the economic dimension of the digital divide. Each of our graduates must be the one who keeps the job. But is that sufficient?

Green — At the community level, questions of sustainability emerge. Data centres consume immense power — is it from clean sources? Is this technology sustainable for a healthy society, for human bonds? But this level also sees AI for scientific good: protein structure prediction, drug discovery, climate modelling.

Yellow/Turquoise — At the integrative levels, we ask: can AI enable humanity to reach a higher developmental stage — find cures for diseases, solve the climate crisis, transcend current limitations? Or will we remain trapped at red and orange, wielding powerful tools with primitive values?

The educational imperative follows from this analysis: we need graduates who can operate at the achievement level at minimum — mastering AI as a tool for their profession. But education must aspire to develop community-oriented and integrative thinking: graduates who understand systemic impacts, who consider sustainability, who can evaluate AI not just technically but ethically and societally.

The uncomfortable finding is this: research demonstrates that merely knowing about AI risks is insufficient protection. Fazio and colleagues (Fazio, L. K. et al. (2015). “Knowledge does not protect against illusory truth.” Journal of Experimental Psychology: General, 144(5), 993–1002) showed that awareness of the illusory truth effect does not protect against it — people fail to deploy the knowledge they possess. More recently, Lin and colleagues (Lin, H. et al. (2025). “Persuading voters using human-artificial intelligence dialogues.” Nature, 648, 394–401) found that people who knew they were interacting with AI were still persuaded by it. This has sobering implications for educators: you cannot simply lecture about AI risks and expect graduates to be safe. Critical capacity must be developed through practice, not transmitted through information.

What Graduates Need

If traditional lectures never developed higher-order thinking well, then the AI disruption is both crisis and opportunity. The crisis is that AI can now produce passable work at the lower cognitive levels — recall, comprehension, application — that much of our assessment has historically targeted. The opportunity is that this forces us to focus on what always mattered most: the capabilities that define genuine expertise.

Four capabilities stand out as essential for graduates in the age of AI:

  1. Systems thinking across interdependent, complex infrastructure — understanding how components interact, how failures cascade, and how interventions in one part of a system produce consequences elsewhere.
  2. Critical evaluation of AI-generated outputs — knowing when AI is wrong, recognising hallucination and bias, and understanding how models can be poisoned or manipulated.
  3. The ability to design and architect complex systems, not merely use them — creating rather than consuming, building rather than configuring.
  4. Judgment born from experience — the kind of intuition that comes only from hands-on engagement with realistic, messy problems where there is no single correct answer.

These correspond to the higher levels of Bloom’s revised taxonomy — analysing, evaluating, and creating (Bloom, B. S. (1984). “The 2 Sigma Problem.” Educational Researcher, 13(6), 4–16; Anderson, L. W. & Krathwohl, D. R. (Eds.) (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. Longman). They cannot be taught through lectures or textbooks. They require experience — and experience requires environments where students can safely experiment, fail, and learn. Biggs and Tang (Biggs, J. & Tang, C. (2011). Teaching for Quality Learning at University (4th ed.). Open University Press) remind us through their constructive alignment framework that what students do determines what they learn, not what we tell them.

Teaching With AI, Not Against It

AI has broken traditional assessment. Essays can be generated, code produced, analyses fabricated. Banning AI is neither realistic nor desirable — graduates who cannot work effectively with AI tools will be professionally disadvantaged.

The aspiration is a world where AI is fully permitted in all learning activities. But we are not there yet. If students learn to depend on AI before they develop independent critical thinking, we do not educate them — we train them to be mastered. This is not a hypothetical concern: adversarial AI — model poisoning, adversarial concept drift, targeted influence operations against human operators — represents an active area of research and a real-world threat.

The pragmatic approach I advocate today is assessment where AI is permitted but the process of thinking remains visible — through portfolios, reflective journals, live system design, and oral examination of reasoning. The CyberLab naturally enables this: when students build and operate real infrastructure over weeks and months, their competence is demonstrated through the system they create, not through any single written artefact. (See also Mollick, E. R. & Mollick, L. (2023). Using AI to Implement Effective Teaching Strategies in Classrooms. Wharton School Working Paper.)

Learning Through Building

The most effective learning happens when students build things that matter — not disposable assignments graded and forgotten, but systems that persist, that others depend on, and that create real consequences for getting things wrong. This aligns with Kolb’s experiential learning cycle (Kolb, D. A. (1984). Experiential Learning. Prentice Hall) and Wenger’s concept of learning through participation in communities of practice (Wenger, E. (1998). Communities of Practice. Cambridge University Press).

The Open-Source Education Model

The CyberLab at Adelaide University is a system-of-systems cyber range where students co-design and operate a digital twin of interconnected critical infrastructure. The educational philosophy is inspired by the open-source development model: just as university students played a crucial role in Linux development — earning credit while producing code that benefits the broader community — CyberLab students build systems that become part of real, operational infrastructure. Their work persists for the next cohort to build upon. This is not a disposable assignment.

This is also structured as work-integrated learning — an “internal internship” accessible to all students, particularly valuable for international students who face barriers to external placements in cyber security (Jackson, D. (2015). Employability skill development in work-integrated learning: Barriers and best practice. Studies in Higher Education, 40(2), 350–367). It develops human, social, cultural, and identity capital (Clarke, M. (2018). Rethinking graduate employability. Studies in Higher Education, 43(11), 1923–1937).

More than 50 students have earned academic credit through the CyberLab via internships, honours projects, and master’s thesis work.

The Flight Simulator — Exercises and Cyber Ranges

A flight simulator lets pilots practise emergencies they hope never to face in the air. Cyber ranges serve the same purpose for defenders. I have participated in the NATO CCDCOE Locked Shields exercise — the world’s largest international live-fire cyber defence exercise — for over five years as both red-team and green-team member, and contributed to the Crossed Swords exercise. These experiences directly shaped the CyberLab vision: realistic, high-pressure scenarios are the gold standard for developing cyber defenders.

The CyberLab extends this by making it possible to deploy, run, and reproduce complex multi-system scenarios entirely through code — bringing the exercise methodology into the university curriculum. In October 2026, I co-chair Dagstuhl Seminar 26422 on “Cyber Security Experimentation Beyond Exercises and Cyber Ranges,” exploring how to apply FAIR principles to simulation infrastructures and how AI is transforming cyber security experimentation.

Hands-On Labs — The Tactile Principle

Some things must be taught with physical equipment. At Loughborough University, the Networks Lab (COP502) gave student groups three physical routers, three laptops, and cables — and asked them to build and operate their own ISP from scratch. If students can touch a physical cable, it becomes much easier to explain what a subnet is and how forwarding works. When they do not design redundant networks, we switch off a router or unplug a link — then discuss what they could have done better. Students consistently reported this was “fun and interesting” and wished they had more time. The progressive difficulty — guided exercises first, then increasingly open-ended challenges — embodies Wood, Bruner and Ross’s foundational concept of scaffolding (Wood, D., Bruner, J. S. & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89–100).
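The two ideas that the physical cables make tangible — subnet membership and forwarding — can also be sketched in a few lines of code. This is a minimal illustration, not the lab's actual topology: the prefixes and next-hop names below are invented, and real routers implement longest-prefix matching in hardware rather than in a loop.

```python
# Sketch of subnet membership and longest-prefix-match forwarding.
# The routing table below is illustrative, not a real lab configuration.
import ipaddress

# Destination prefix -> next hop (hypothetical names)
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "router-a",
    ipaddress.ip_network("10.1.0.0/16"): "router-b",
    ipaddress.ip_network("0.0.0.0/0"): "default-gw",
}

def next_hop(dst: str) -> str:
    """Forwarding decision: of all matching routes, pick the longest prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))   # 10.1.0.0/16 is more specific than 10.0.0.0/8
print(next_hop("192.0.2.1"))  # only the default route matches
```

Switching off the router that owns `10.1.0.0/16` in the lab is equivalent to deleting that table entry — traffic silently falls back to the less specific route, which is exactly the failure mode students are asked to reason about.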

At TalTech, the Network Protocol Design course (ITC8061) asked students to design and implement a distributed chat protocol over UDP — targeting the highest levels of Bloom’s taxonomy. Students wrote over 2,000 lines of code, negotiated interoperability as a class, and experienced the dynamics of Internet standardisation first-hand.
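To give a flavour of the building block such a course starts from, here is a minimal sketch of a single UDP exchange: one peer receives a datagram and acknowledges it. The port number and the "ACK:" framing are illustrative assumptions, not the protocol the class actually negotiated — the students' real work lay in designing message formats, handling loss, and agreeing on interoperable semantics.

```python
# Minimal UDP request/acknowledge exchange on the loopback interface.
# Port and "ACK:" framing are invented for illustration.
import socket
import threading

PORT = 50007  # illustrative port choice

def serve_once(sock: socket.socket) -> None:
    """Receive a single chat datagram and send an acknowledgement back."""
    data, sender = sock.recvfrom(2048)
    sock.sendto(b"ACK:" + data, sender)

def send_message(text: str) -> str:
    """Send one message and wait for the acknowledgement."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as client:
        client.settimeout(2.0)  # UDP offers no delivery guarantee, so time out
        client.sendto(text.encode(), ("127.0.0.1", PORT))
        reply, _ = client.recvfrom(2048)
        return reply.decode()

# Bind the server socket before sending, so the first datagram is queued, not lost.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", PORT))
t = threading.Thread(target=serve_once, args=(server,))
t.start()
print(send_message("hello"))  # prints "ACK:hello"
t.join()
server.close()
```

Everything the sketch glosses over — what happens when the acknowledgement is lost, how two implementations agree on the byte layout, how a third peer joins — is precisely where the 2,000 lines of student code and the class-wide interoperability negotiation came from.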

International Research Collaboration

The Special Topics of Cyber Security course (ITC9010/ITC9020) created an international research collaboration between TalTech, Adelaide University, and Hochschule Ravensburg-Weingarten. PhD students formed cross-university teams, bootstrapped research projects during a face-to-face bootcamp, and maintained remote collaboration throughout the year — mirroring real academic practice. The course produced genuine research outputs: conference abstracts, posters, and paper drafts. For more details, see the course website.

Intensive Summer Schools and Professional Training

The Cyber Security Summer School (C3S) at TalTech (2015–2020) brought together 45–70 PhD students with world-leading speakers for intensive, themed weeks — from information security (with Steve Bellovin, Vern Paxson, Jon Crowcroft) to social engineering (with Ralph Echemendia) to maritime cyber security. These created concentrated learning experiences that no regular semester course can replicate.

In aviation, I developed and delivered EASA-compliant cybersecurity training for air traffic safety electronics personnel in collaboration with NATO CCDCOE and the Estonian Aviation Academy (CNS.073, 2020–2023) — demonstrating that the same pedagogical principles apply to professional training in regulated industries.

Scaffolding for Diverse Learners

Cyber security classrooms contain students with wildly different backgrounds. Imagine an MSc-level course on hardening operating systems attended simultaneously by a system administrator with ten years of experience and a graduate from an IT-law programme who has never opened a terminal. With traditional teaching methods, it is impossible to serve both: the administrator is fundamentally bored while the lawyer is lost after the first lecture.

Bloom’s 2-sigma insight (Bloom, 1984) points the way: students who receive one-to-one tutoring perform two standard deviations above those in conventional instruction. We cannot provide individual tutors for hundreds of students, but we can design learning environments that approximate this — through scaffolded progression where advanced students face harder challenges (cyber attacks on their infrastructure, for instance) while others learn at their own pace. Flipped classroom methods let students cover prerequisite material asynchronously, freeing class time for the discourse and hands-on work that creates genuine understanding. Vygotsky’s zone of proximal development (Vygotsky, L. S. (1978). Mind in Society. Harvard University Press) reminds us that learning happens at the boundary of what a student can do alone and what they can achieve with guidance — and that boundary differs for every individual.

The Educator’s Responsibility

In an age of AI, the educator’s role shifts fundamentally — from content delivery to developing critical, independent thinkers who can evaluate what AI produces, understand its limitations, and make sound judgments under uncertainty. The quality of an institution will always come from inspired teachers who create environments for genuine intellectual growth, not from the volume of information transmitted or the sophistication of the tools deployed.

This is both the oldest idea in education — Socrates taught through dialogue, not through lectures — and the most urgent challenge of our time. If we get this right, AI becomes humanity’s most powerful instrument for progress, enabling the integrative and systemic thinking that Graves described at the highest developmental levels. If we get it wrong, we produce a generation mastered by tools they cannot evaluate, deploying power without wisdom, trapped at the lowest levels of development. The stakes are not merely professional. They are civilisational.


Course History

Since April 2012 I have been a Fellow of the UK Higher Education Academy (HEA), having completed the HEA-accredited New Lecturers’ Course for academic faculty in the UK.

Adelaide University

  • COMP SCI 3004 & COMP SCI 7064 Operating Systems
    (492 students, 3 units, Spring 2023)
  • COMP SCI 3307 & COMP SCI 7307 Secure Programming
    (200+ students, 3 units, Spring 2023, Spring 2024 and in trimesters)

Tallinn University of Technology

MSc-level courses

  • ITC8060 & ITC8061 Network Protocol Design
    (Spring 2014/15 – Spring 2019/20)
  • ITX8040 Cyberdefence Seminar
    (Spring 2014/15, Autumn 2015/16)
  • ITX8230 Digital Forensics Seminar
    (Spring 2014/15, Autumn 2015/16)
  • ITX8512 Practical Training
    (since Spring 2014)
  • ITC8070 Information Systems Attacks and Defence
    (Autumn 2016/17 – Autumn 2019/20)
  • ITC8210 Human Aspects of Cyber Security
    (Autumn 2021–2023)
  • CNS.073 Cybersecurity in Aviation
    (with Estonian Aviation Academy & NATO CCDCOE, Spring 2020–2023)

PhD-level courses

  • IXX9601 Doctoral Seminar I, II & III
    (Autumn 2014/15, Autumn 2015/16, Spring 2016/17)
  • ITC9010 Special Topics of Cyber Security I
    (Spring 2015/16 – Spring 2018/19; course website)
  • ITC9020 Special Topics of Cyber Security II
    (Autumn 2016/17 – Autumn 2017/18; course website)

Cyber Security Summer School (C3S)

  • C3S 2018: Maritime Cyber Security
    (11–15 June 2018)
  • C3S 2017: Social Engineering Capture The Flag
    (10–14 July 2017)
  • C3S 2016: Digital Forensics: Technology and Law
    (3–8 July 2016)
  • C3S 2015: Information Security
    (13–17 July 2015)

Loughborough University

Undergraduate courses

  • COF181 Introduction to Programming II
    (2012/13)
  • COC190 Advanced Networking
    (2012/13, 2013/14)

MSc-level courses

  • COP455 Network Systems
    (2009/10 – 2013/14)
  • COP502 Networks Lab / Building Secure Networks
    (2009/10 – 2013/14)
  • COP532 Internet Architectures / Internet Protocols
    (2010/11 – 2013/14)