Ethical Deployment of Emerging Technologies

AI has assumed a significant role in hiring processes across many industries but can result in discrimination if not developed with care. Discrimination can enter the AI-powered hiring process in three ways: job advertising targeted and optimized toward specific demographic groups, recruiters relying on biased data sets, and resume-screening algorithms that reflect common human biases. In response to these issues, we can adopt different regulatory approaches, such as seeking the informed consent of the job applicant, defining best practices for AI-driven hiring software, and mandating audits of the algorithms used in hiring; one such audit check is sketched below.
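As an illustration (my own sketch, not a method described in the talk), an algorithm audit might include a disparate-impact check such as the "four-fifths rule" from US employment guidelines, which flags a screening tool when any group's selection rate falls below 80% of the highest group's rate. The data, column layout, and function names below are hypothetical:

```python
# Hypothetical audit check: the "four-fifths rule" flags adverse impact
# when one group's selection rate falls below 80% of the highest group's.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: s / t for g, (s, t) in counts.items()}

def adverse_impact(records, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times
    the highest group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Toy screening outcomes: (demographic group, passed resume screen?)
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 35 + [("B", False)] * 65
print(adverse_impact(outcomes))  # {'B': 0.583...} -> flagged for review
```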

Manish Raghavan is an Assistant Professor in the Sloan School of Management and the Department of Electrical Engineering and Computer Science at MIT. His primary interests are in the application of computational techniques to domains of social concern, including online platforms, algorithmic fairness, and behavioral economics, with a particular focus on the use of algorithmic tools in the hiring pipeline. He is a member of the Artificial Intelligence, Policy, and Practice initiative at Cornell University.

What is the responsibility of the builder in the process of developing new technology products? The question reaches beyond company culture, internal policies, or governmental regulations. Ethical considerations need to be central to the product development lifecycle. Builders of technologies need to raise questions about potential harms, such as the impact a technology can have on non-users, and about unintended consequences, for example, a social media platform becoming an avenue for misinformation. A shift toward considering ethical implications as part of the product development cycle would add value across many areas and industries, from government, the non-profit sector, and academia to consumer tech and consulting.

Kathy Pham is a Fellow and Faculty of Product Management and Society at the Harvard Kennedy School. She is a product leader, computer scientist, and founder who has held roles in product management, software engineering, data science, consulting, and leadership in the private, non-profit, and public sectors. She currently also serves as the Deputy Chief Technology Officer of the Federal Trade Commission in the United States, Senior Advisor at the Mozilla Foundation, and Product Advisor at the United States Digital Service. Her expertise lies at the intersection of technology, ethics, and responsibility, with a focus on ethical principles in practice in product management, design, and engineering.

Digital platform companies use choice architecture to influence users’ decision-making processes and drive outcomes. Choice architecture can avoid unfair manipulation and preserve users’ autonomy if it is fully transparent and easy for users to recognize. Both users and platforms can play a role in promoting outcomes that reflect the best interests of consumers and businesses.

Todd Haugh is an Associate Professor in the Department of Business Law and Ethics at the Kelley School of Business, Indiana University. His research focuses on business and behavioral ethics, moral decision-making and critical thinking, sentencing and punishment for economic crime and public corruption, and white-collar and corporate crime.

When designing an algorithm, developers take multiple fairness principles into consideration, such as statistical parity, predictive equality, fairness through blindness, and calibration. It is mathematically impossible, however, to satisfy all of these fairness principles at the same time. Developers and business leaders must therefore be diligent and ask which principle of fairness will improve people’s lives in each specific use case, so that their products serve individuals and society well. The toy example below makes the conflict concrete.
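Here is a minimal numeric sketch (my own toy example, not drawn from the talk). The scores are perfectly calibrated within each group by construction, yet thresholding them violates both statistical parity and predictive equality once the groups' base rates differ:

```python
# Two groups with perfectly calibrated scores: a person with score s
# truly qualifies with probability s. Each group is a list of
# (score, fraction_of_group) pairs; the numbers are made up.
groups = {
    "A": [(0.8, 0.5), (0.2, 0.5)],  # base rate 0.5*0.8 + 0.5*0.2 = 0.50
    "B": [(0.2, 1.0)],              # base rate 0.20
}
THRESHOLD = 0.5  # select anyone whose score meets this cutoff

for name, dist in groups.items():
    selected = sum(frac for s, frac in dist if s >= THRESHOLD)
    # False positives: selected people who are truly negative (prob 1 - s).
    fp = sum(frac * (1 - s) for s, frac in dist if s >= THRESHOLD)
    negatives = sum(frac * (1 - s) for s, frac in dist)
    fpr = fp / negatives if negatives else 0.0
    print(f"group {name}: selection rate={selected:.2f}, FPR={fpr:.2f}")
# -> group A: selection rate=0.50, FPR=0.20
# -> group B: selection rate=0.00, FPR=0.00
```

No choice of threshold repairs this: when base rates differ, calibration and equal error rates across groups are incompatible except in degenerate cases of perfect prediction, which is why a deliberate choice among fairness principles is unavoidable.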

Emma Pierson is an Assistant Professor of Computer Science at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, and a computer science field member at Cornell University. She develops data science and machine learning methods to study inequality and healthcare.

Failures in software can have serious or even fatal consequences. They result from designs flawed by a lack of care or knowledge, from prioritizing the deployment of new products over maintenance, and from valuing release speed over safety. Robust standards of professionalism are needed to manage these issues: avoiding harm as a core principle, maintaining high standards of professional ethical practice, working strictly within one’s areas of competence, and making the public good a guiding principle of the design process.

Eugene Spafford is a Professor in the Department of Computer Science and Executive Director Emeritus of The Center for Education and Research in Information Assurance and Security (CERIAS) at Purdue University. Spafford's current primary research interests are in information security, computer crime investigation, and information ethics. He is recognized as one of the senior leaders in the field of computing.

The ethical questions surrounding Natural Language Processing (NLP) pertain to how NLP systems are used, whether their output is perceived as human rather than machine-generated, and who has access to them. Because NLP depends on access to large amounts of publicly available text and massive computational power, it raises concerns about privacy, consent, and sustainability. Large, public data sets are human-generated and contain human bias, which can be reflected in the predictions of NLP models and skew their representativeness. Lastly, NLP prioritizes frequently spoken languages, largely English, which can widen the global digital divide. Ensuring the ethical development of NLP requires human-based interventions: asking who will benefit from the system and who might be harmed, checking whether raw data is representative or reinforces bias (see the sketch below), and ensuring that NLP model training is objective, among other strategies.
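As one hedged illustration (my own, not the speaker's method) of checking raw data for bias: counting how often occupation words co-occur in a sentence with gendered pronouns gives a crude measure of skew in a corpus, and skewed counts in training data tend to resurface as skewed model predictions. The word lists and sample text are hypothetical:

```python
# Crude corpus-representativeness check: count sentence-level
# co-occurrence of occupation words with gendered pronouns.
import re
from collections import Counter

OCCUPATIONS = {"doctor", "nurse", "engineer", "teacher"}
MALE, FEMALE = {"he", "him", "his"}, {"she", "her", "hers"}

def cooccurrence(corpus):
    counts = Counter()
    for sentence in re.split(r"[.!?]", corpus.lower()):
        words = set(re.findall(r"[a-z]+", sentence))
        for occ in OCCUPATIONS & words:
            if words & MALE:
                counts[(occ, "male")] += 1
            if words & FEMALE:
                counts[(occ, "female")] += 1
    return counts

sample = ("The doctor said he would call. The nurse said she was busy. "
          "The engineer presented his design. The nurse checked her notes.")
print(cooccurrence(sample))
# Counter({('nurse', 'female'): 2, ('doctor', 'male'): 1,
#          ('engineer', 'male'): 1})
```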

Dan Goldwasser is an Associate Professor in the Department of Computer Science at Purdue University. He is broadly interested in connecting natural language with real-world scenarios and using them to guide natural language understanding.

The process of designing an autonomous vehicle reveals that technology design often reflects our larger disagreements over different ethical principles. As in other areas of our lives, the application of ethical principles becomes more complex as technology designers must weigh desirable features that conflict with each other, such as perfect vehicle safety versus affordability to consumers. However, applying ethical principles in technology opens a discussion about competing values and a consideration of the possibility of compromises and trade-offs that lead to reasonable and compassionate solutions.

David Weinberger is an author whose most recent book, the award-winning Everyday Chaos, presents a unique perspective on the rise and importance of machine learning. His work has been published in Wired and Harvard Business Review, as well as in Scientific American, The New York Times, The Washington Post, and more. He has given hundreds of keynote speeches around the world, including recent talks on what ethics can learn from AI and the shift in our most ancient strategies for thriving as citizens and businesspeople.