This area examines how ordinary people understand, evaluate, and respond to artificial intelligence — an empirical question with significant implications for democratic legitimacy and AI governance design.
Survey-based research explores what the public knows about AI, which applications it supports or opposes, and what factors shape trust and concern. A key focus is how information environments — including expert framing, narrative cues, and partisan signals — structure public opinion on AI issues. This includes work on agenda-setting dynamics: how media coverage and elite discourse influence which aspects of AI the public treats as important or worrying.
A broader normative thread asks what meaningful public participation in AI governance looks like — not just public opinion as a signal to policymakers, but institutions for deliberation, oversight, and democratic accountability in a domain that experts often treat as purely technical. Related work examines AI in the context of civic institutions, including how public trust in AI intersects with trust in the organizations deploying it.