This area focuses on the production, spread, and governance of AI-generated synthetic media — with particular attention to political deepfakes and their implications for democratic communication, institutional trust, and information integrity.
Empirical work includes developing a typology of political deepfake incidents, building the Political Deepfakes Incidents Database (PDID), and analyzing how AI-generated content intersects with existing political misinformation ecosystems. Research also examines the actors who create and deploy political deepfakes, the platforms through which they circulate, and how detection and labeling efforts shape their spread.
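To make the database-building work concrete, the sketch below shows one way an incident record might be structured. All field names and values here are illustrative assumptions for exposition, not the PDID's actual schema.

```python
# Illustrative record structure for cataloging political deepfake incidents,
# loosely modeled on what a typology-driven database might track.
# NOTE: every field name below is a hypothetical assumption, not PDID's schema.
from dataclasses import dataclass, field

@dataclass
class DeepfakeIncident:
    incident_id: str
    date: str                      # ISO 8601 date the incident was first observed
    target: str                    # person or institution depicted
    medium: str                    # "video", "audio", or "image"
    intent: str                    # e.g. "satire", "disinformation", "fraud"
    platforms: list = field(default_factory=list)  # where it circulated
    labeled: bool = False          # whether platforms labeled/disclosed it

# A hypothetical entry, showing how typology categories map onto fields.
incident = DeepfakeIncident(
    incident_id="example-001",
    date="2024-01-15",
    target="(hypothetical candidate)",
    medium="audio",
    intent="disinformation",
    platforms=["(platform A)", "(platform B)"],
)
```

A structure like this would let researchers aggregate incidents by medium, intent, or platform when analyzing how synthetic media intersects with broader misinformation ecosystems.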
On the governance side, this area examines detection technologies, disclosure and labeling requirements, and the fairness dimensions of AI-generated face detection systems — including how detection accuracy varies across demographic groups. A consistent concern is that governance responses to synthetic media risk creating new inequities, particularly for communities underrepresented in AI training data.
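The fairness concern above can be made concrete with a small sketch of how per-group detection accuracy might be measured. The group labels, toy data, and gap metric here are illustrative assumptions, not the detectors or datasets studied in this research.

```python
# Minimal sketch: measuring how a (hypothetical) AI-generated-face detector's
# error rates vary across demographic groups. Labels and data are toy examples.
from collections import defaultdict

def per_group_rates(records):
    """records: iterable of (group, y_true, y_pred), where 1 = synthetic face.

    Returns {group: (true_positive_rate, false_positive_rate)}.
    """
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
    for group, y_true, y_pred in records:
        if y_true == 1:
            counts[group]["tp" if y_pred == 1 else "fn"] += 1
        else:
            counts[group]["fp" if y_pred == 1 else "tn"] += 1
    rates = {}
    for group, c in counts.items():
        tpr = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else 0.0
        fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else 0.0
        rates[group] = (tpr, fpr)
    return rates

def max_gap(rates):
    """Largest between-group difference in TPR and in FPR (equalized-odds style)."""
    tprs = [r[0] for r in rates.values()]
    fprs = [r[1] for r in rates.values()]
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Toy data: the detector misses more synthetic faces for group B,
# the kind of disparity that disadvantages underrepresented communities.
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 1),
]
rates = per_group_rates(data)
tpr_gap, fpr_gap = max_gap(rates)
```

A detector with a large TPR gap fails to flag synthetic faces of some groups more often than others, which is one mechanism by which labeling-based governance could itself create new inequities.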