Emerging Technologies and Societal Challenges
The “Hidden” Curriculum in AI Education
AI education is a process that aims to build a set of competencies enabling individuals to critically evaluate AI technologies and to use them effectively and collaboratively. Future AI curricula should be designed with equity and ethics in mind, be culturally sensitive, and center on the communities impacted by emerging technologies. By assembling diverse, multidisciplinary teams, AI curricula can be designed to reflect the vast range of technological impacts on our lives.
Ora Tanner is the Founder and CEO of Black Unicorn Education™. She is the Co-Founder of the AI Education Project and a Fellow at the Aspen Tech Policy Hub.
Privacy in the Age of Smart Devices
Privacy in the age of smart devices is a complex issue due to the absence of clear best practices, rapidly growing technological capabilities, and expectations of privacy that change over time. In response to these challenges, product designers need to consider worst-case scenarios in their designs and ask how they would like to be treated if they were users of their own systems.
Jason Hong is a Professor in the School of Computer Science at Carnegie Mellon University. His research draws on ideas and methods from human-computer interaction, systems, behavioral sciences, and machine learning. His current work centers on smartphones and the emerging Internet of Things.
Financial Technology and Surveillance
Modern financial technology companies offer products that promote greater financial inclusion, giving people access to financial services without going through traditional banking systems. At the same time, the data collection behind fintech is extensive, largely unregulated, and often compromises consumers' financial privacy. Options for managing the issue include the practice of data minimization, in which data collection is limited to a short list of permissible purposes.
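As a purely illustrative sketch of what data minimization can look like in practice (the field names, purposes, and values below are hypothetical, not drawn from the essay), collection can be restricted to an allowlist of fields tied to a declared, permissible purpose:

```python
# Hypothetical data-minimization sketch: retain only the fields permitted
# for a declared purpose; everything else in the raw event is dropped.

PERMISSIBLE_PURPOSES = {
    "payment_processing": {"account_id", "amount", "currency", "timestamp"},
    "fraud_detection": {"account_id", "amount", "timestamp", "device_id"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields allowed for the declared purpose."""
    allowed = PERMISSIBLE_PURPOSES.get(purpose)
    if allowed is None:
        raise ValueError(f"Collection not permitted for purpose: {purpose}")
    return {key: value for key, value in record.items() if key in allowed}

raw_event = {
    "account_id": "abc123",
    "amount": 42.50,
    "currency": "USD",
    "timestamp": "2023-01-01T12:00:00Z",
    "device_id": "dev-9",
    "contact_list": ["..."],      # incidental data that should never be retained
    "geolocation": (40.7, -74.0),  # likewise out of scope for both purposes
}

print(minimize(raw_event, "payment_processing"))
# {'account_id': 'abc123', 'amount': 42.5, 'currency': 'USD', 'timestamp': '2023-01-01T12:00:00Z'}
```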
Raúl Carrillo is the Deputy Director of the Law and Political Economy Project, an Associate Research Scholar, and a Resident Fellow at The Information Society Project at Yale Law School.
From Autonomous Weapons to the Militarization of AI
The lack of an international legal framework around the militarization of AI presents a risk to the global community. Autonomous AI weapons challenge the principle of attributable wrongdoing because AI is more pervasive and diffuse than traditional warfare technologies. Efforts to govern the militarization of AI can learn from previous instances of successful global cooperation, such as the Outer Space Treaty and the prohibition of nuclear tests, which are built around preventing bodily harm, preventing conflict, and regulating and prohibiting certain behaviors.
Denise Garcia is an Associate Professor in the College of Social Sciences and Humanities and Institute for Experiential Robotics at Northeastern University. Her research focuses on international law and the questions of lethal robotics and artificial intelligence, global governance of security, and the formation of new international norms and their impact on peace and security.
AI and the Future of Work
AI and data-driven technologies are reshaping workplaces in many ways. While observing workers for managerial purposes is not new, AI-based technologies allow for much greater oversight. AI-based workplace technologies also collect new forms of data about employees and produce new kinds of analyses, such as predictions about their productivity. Some argue that new AI-driven managerial tools blur the boundary between employees' private and work lives, for example by monitoring employees' activity on social media.
Karen Levy is an Associate Professor in the Department of Information Science at Cornell University and an associate member of the faculty of Cornell Law School. She researches how law and technology interact to regulate social life, with a particular focus on social and organizational aspects of surveillance.
From Smart Cities to Smart Enough Cities
Smart cities are often approached through a “tech goggles” perspective, in which every problem is seen as solvable through technology, often without regard for the potential harms. The alternative concept of a “smart enough city” suggests that technology needs to be integrated into broader efforts at reform: it needs to address complex problems and existing social needs, prioritize innovative policies rather than treating technology as an end in itself, and promote democratic values.
Ben Green is a Postdoctoral Researcher at the University of Michigan. He studies the social and political impacts of government algorithms, with a focus on algorithmic fairness, smart cities, and the criminal justice system.
AI and Climate Change
AI has the potential to both help and hinder climate change action. Some AI applications help mitigate climate change, for example, via improved power supply and demand forecasting and better climate modeling. Other AI applications increase greenhouse gas emissions, such as AI systems that are used to accelerate fossil fuel extraction. Some AI applications have impacts on climate change that are yet to be quantified, for example, the use of autonomous vehicles. Aligning AI with climate action requires careful consideration of implicit and explicit choices in AI development, multi-stakeholder partnerships, and participatory design.
David Rolnick is an Assistant Professor and Canadian Institute for Advanced Research AI Chair in the School of Computer Science at McGill University. He also serves as co-founder and chair of Climate Change AI and scientific co-director of Sustainability in the Digital Age. His research focuses on applications of machine learning to mitigate and adapt to the climate crisis, as well as on the mathematical understanding of the properties of neural networks.
Cambridge Analytica's Black Box
The Cambridge Analytica scandal revealed that the data profiles of 50 million Facebook users had been harvested under the guise of academic research and analyzed with psychographic profiling tools to target American voters. This incident, which resulted in a $5 billion penalty imposed on Facebook by the Federal Trade Commission, raises important ethical considerations for the use of social media users’ data. It stresses the need for better oversight of academic researchers’ access to users’ data. It also highlights the potential for foreign interference in our democratic processes by influencing users’ decision-making. And finally, it calls attention to individual users’ role in safeguarding their data privacy in the globalized digital economy.
Margaret Hu is a Professor of Law and Director of the Digital Democracy Lab at William & Mary Law School. Her research interests include the intersection of immigration policy, national security, cybersurveillance, and civil rights.