AI governance, ethics, and policy — how institutions, standards, and public debate shape our life with artificial intelligence.
Daniel Schiff is an Assistant Professor of Technology Policy at Purdue University and founding Co-Director of GRAIL, the Governance and Responsible AI Lab. Before academia, he served as the founding Responsible AI Lead at JP Morgan Chase and Secretary of the IEEE 7010 standard. His research on AI governance, deepfakes, education, and institutional adoption has been published in APSR, Management Science, and AAAI/ACM AIES, and covered in the New York Times, Washington Post, CNN, and MIT Technology Review.
Research Themes
Research on how AI becomes a political and regulatory issue -- agenda-setting dynamics, partisan polarization, international governance frameworks, and the politics of AI policy formation. (33+ outputs)

Research on how organizations, standards bodies, and industry practitioners govern AI -- including technical standards, the principles-to-practice gap, and responsible AI in sectors like healthcare. (15+ outputs)

Research on political deepfakes, AI-generated content, detection fairness, and governance responses to synthetic media and AI-enabled misinformation. (10+ outputs)

Empirical research on how publics understand, perceive, and engage with AI -- including public opinion surveys, democratic participation, and the role of expert information in shaping AI attitudes. (13+ outputs)

Research on how AI ethics and literacy are taught, assessed, and institutionalized -- spanning curriculum design, undergraduate computing education, and AI literacy measurement. (22+ outputs)

Research on how AI and automation reshape labor markets, job skills, workforce transitions, and the political economy of AI-driven value extraction. (4+ outputs)

Recent Work
Resources
AI Governance and Regulatory Archive (AGORA)
Searchable archive of AI-related legislation, regulations, and governance documents from around the world — a GRAIL and Georgetown CSET collaboration.
Political Deepfakes Incidents Database (PDID)
Structured records of political deepfake incidents — incident type, actors, platform context, detection status, and governance response — built for research and journalism.
AI Survey Hub for Attitudes and Research Exchange (AI SHARE)
Survey instruments, codebooks, and methodological notes for studying public attitudes toward AI — built for cross-context replication and comparative research.
Teaching and AI Resource Stack
A curated collection of syllabi, readings, case studies, and pedagogical frameworks for teaching AI ethics, governance, and literacy. Includes materials developed for undergraduate computing and policy courses as well as resources suitable for professional development and interdisciplinary instruction.