This area investigates how AI ethics and AI literacy are taught, institutionalized, and measured across undergraduate computing and policy programs, professional development, and public-facing instruction.
Systematic reviews and original empirical studies document the state of AI ethics education: which topics are covered, which pedagogical approaches are used, who is included in or excluded from existing curricula, and what gaps persist between stated learning goals and student outcomes. A consistent finding is that AI ethics instruction remains fragmented and inconsistently integrated, which presents both challenges and opportunities for curriculum reform.
A parallel line of research develops and validates instruments for measuring AI literacy, including the AI Literacy Test (AILIT), to assess what students and practitioners actually understand about how AI systems work, what those systems' limitations are, and what social implications they carry. This measurement work is designed to be usable across disciplines and institutional contexts, supporting both research on AI education outcomes and practical program evaluation.