This area investigates how organizations, standards bodies, and practitioners operationalize AI governance — moving from abstract principles to specific institutional practices, technical standards, and sector-level accountability mechanisms.
A central thread is the development and analysis of IEEE 7010, an IEEE standard (formally a recommended practice) for assessing the wellbeing impacts of autonomous and intelligent systems. This work bridges normative AI ethics and engineering practice, examining how human values become embedded in standards processes, what implementation requires across diverse organizational contexts, and where standards-based governance reaches its limits. Related work documents the persistent gap between the responsible AI principles organizations publicly endorse and the adoption of those principles in actual practice.
Two further threads round out this area. One focuses on AI governance in healthcare and clinical settings, where the stakes of responsible AI are high and institutional accountability structures are well-developed. The other examines the political economy of AI value extraction: how AI systems generate economic value, and how governance choices shape whether that value is shared broadly or concentrated narrowly.