
Research

Assessing Public Value Failure in Government Adoption of Artificial Intelligence

In the context of rising delegation of administrative discretion to advanced technologies, this study aims to quantitatively assess key public values that may be at risk when governments employ automated decision systems (ADS). Drawing on the public value failure framework coupled with experimental methodology, we address the need to measure and compare the salience of three such values—fairness, transparency, and human responsiveness. Based on a preregistered design, we administer a survey experiment, inspired by prominent ADS applications in child welfare and criminal justice, to 1,460 American adults. The results provide clear causal evidence that certain public value failures associated with artificial intelligence have significant negative impacts on citizens' evaluations of government. We find substantial negative citizen reactions when fairness and transparency are not realized in the implementation of ADS. These results transcend both policy context and political ideology and persist even when respondents are not themselves personally impacted.

The Liar’s Dividend: Can Politicians Use Deepfakes and Fake News to Evade Accountability?

This study addresses the phenomenon of misinformation about misinformation, or politicians “crying wolf” over fake news. Strategic and false allegations that stories are fake news or deepfakes may benefit politicians by helping them maintain support in the face of information damaging to their reputation. We posit that this concept, known as the “liar's dividend,” works through two theoretical channels: by invoking informational uncertainty or by encouraging oppositional rallying of core supporters. To evaluate the implications of the liar's dividend, we use three survey experiments detailing hypothetical politician responses to video or text news stories depicting real politician scandals. We find that allegations of misinformation raise politician support, while potentially undermining trust in media. Moreover, these false claims produce greater dividends for politicians than longstanding alternative responses to scandal, such as remaining silent or apologizing. Finally, false allegations of misinformation pay off less for videos (“deepfakes”) than text stories (“fake news”).

Reason and Passion in Agenda-Setting: Experimental Evidence on State Legislator Engagement with AI Policy

Are narratives as influential in gaining the attention of policymakers as expert information? This preregistered study uses a field experiment to evaluate legislator responsiveness to policy entrepreneur outreach. In partnership with a leading AI think tank, we send more than 7,300 U.S. state legislators emails about AI policy containing an influence strategy (providing a narrative, expert information, or the organization’s background), along with a prominent issue frame about AI (emphasizing technological competition or ethical implications). To assess policymaker engagement, we measure link clicks to further resources as well as webinar registration and attendance. Strikingly, given the highly technical policy domain, we find that narratives are just as effective as expert information in engaging legislators. Further, higher legislative professionalism and lower prior experience with AI are associated with greater legislator engagement with both narratives and expert information. The findings advance efforts to bridge scholarship on policy narratives, policy entrepreneurship, and agenda-setting.

AI Ethics in the Public, Private, and NGO Sectors: A Review of a Global Document Collection

In recent years, numerous public, private, and non-governmental organizations (NGOs) have produced documents addressing the ethical implications of artificial intelligence (AI). These normative documents include principles, frameworks, and policy strategies that articulate the ethical concerns, priorities, and associated strategies of leading organizations and governments around the world. We examined 112 such documents from 25 countries produced between 2016 and mid-2019. While other studies have identified some degree of consensus in such documents, our work highlights meaningful differences across the public, private, and NGO sectors. We analyzed each document in terms of how many of 25 ethical topics were covered and the depth of discussion for those topics. As compared to documents from private entities, NGO and public sector documents reflect more ethical breadth in the number of topics covered, are more engaged with law and regulation, and are generated through more participatory processes. These findings may reveal differences in underlying beliefs about an organization's responsibilities, the relative importance of relying on experts versus including representatives from the public, and the tension between prosocial and economic goals.

The Impact of Automation and Artificial Intelligence on Worker Well-being

Discourse surrounding the future of work often treats technological substitution of workers as a cause for concern, but complementarity as a good. However, while automation and artificial intelligence may improve productivity or wages for those who remain employed, they may also have mixed or negative impacts on worker well-being. This study considers five hypothetical channels through which automation may impact worker well-being: influencing worker freedom, sense of meaning, cognitive load, external monitoring, and insecurity. We apply a measure of automation risk to a set of 402 occupations to assess whether automation predicts impacts on worker well-being along the dimensions of job satisfaction, stress, health, and insecurity. Findings based on a 2002–2018 dataset from the General Social Survey reveal that workers facing automation risk appear to experience less stress, but also worse health, and minimal or negative impacts on job satisfaction. These impacts are more concentrated on workers facing the highest levels of automation risk. This article encourages new research directions by revealing important heterogeneous effects of technological complementarity. We recommend that firms, policymakers, and researchers not conceive of technological complementarity as a uniform good, and instead direct more attention to mixed well-being impacts of automation and artificial intelligence on workers.

Does Citizen Collaboration Impact Government Service Provision? Evidence from SeeClickFix Requests

Does citizen collaboration affect government performance in service provision? While prior studies have considered only individual requests of government, I explore collaborative government contacting. As cities increasingly offer interactive issue reporting options through mobile apps, I investigate whether comments and follows on requests drive faster responses. I theorize that this input signals issue validity, severity, or scrutiny, helping city officials prioritize requests. Leveraging a novel dataset of requests from 100 cities, I find that comments and follows double the probability of request closure and that collaborative requests are resolved up to five days faster on average than non-collaborative requests. By comparing two cities that use the same platform but that differ in the observability of citizen collaboration, I isolate a distinct and significant influence of citizen input on government responsiveness. The results address the effectiveness of everyday political participation and reveal that collaboration can amplify citizen voices in contacting government.