I am become Dross: Interview with JP Messina

With Oscar season around the corner and Christopher Nolan’s new film, Oppenheimer, likely to take home awards, we sat down with Dr. JP Messina, assistant professor of philosophy, to discuss how the tech ethics that dominated the 2023 news cycle (mainly surrounding the ethical use of artificial intelligence) compare to the events depicted in the biopic.

Do you think a fair comparison can be drawn between the advent of thermonuclear weaponry (and, to a lesser extent, the proxy innovations that derived from that technology) and the current AI revolution? Why or why not?

I don’t know that there’s a context-neutral answer to this question. Comparisons between technologies are usually trying to make a point, and the comparison’s aptness to making that point determines whether the comparison is fair. If the question is: “Is there a point that someone could have that would be felicitously made by bringing AI and thermonuclear weaponry into comparison?”, the answer is clearly yes. Similarly, if the question is: “Is there a point that someone could have that would be infelicitously made by bringing AI and thermonuclear weaponry into comparison?”, the answer is, again, clearly yes. Consider two idealized cases.

  1. Thermonuclear technology radically changed warfare and the provision of energy, sometimes with vast human costs, and did so in ways that were foreseeable, sometimes foreseen (or even aimed at), by those developing it. Those developing AI technologies foresee radical changes to human civilization, some of which could impose vast costs on our societies. Many of us regret the development of thermonuclear technologies, or at least wonder whether we pushed too far in developing them. So we should think hard about whether to continue to press forward with AI technologies.
  2. Thermonuclear technology radically changed warfare and the provision of energy, and its creator(s) regretted their involvement. So it’s clear that the risks of AI will similarly swamp its benefits and we need to shut down research into these technologies immediately. 

Whereas 1 is a perfectly appropriate call to reflection, 2 is sloppy. Not only is it uncertain whether the creator’s regrets in the nuclear technology case are well-placed (both nuclear energy and the possible truth of the Nuclear Peace Thesis make room for doubt), but we aren’t encouraged by the comparison to think well about any differences there might be between AI and thermonuclear technologies. Additionally, 2 privileges the moral sentiments of creators, but they might be particularly prone to regret given their sense of their own accountability and the nature of the costs. By contrast, 1 treats creators and the rest of us as being on a par, in terms of the quality of our moral sensibilities. In sum: 1 is an invitation to clear thinking; 2 is an invitation to panic and poor decision-making. 

Of course, there are many other points in the service of which one might want to draw the two technological moments into comparison. Leaving aside aesthetic reasons, most of those deploying these comparisons will be out to convince their listeners of something. They’ll be arguing by analogy, in short. And those of us who teach critical thinking understand that the strength of analogical arguments depends upon two factors: (1) the presence of relevant similarities between the things being brought into comparison and (2) the absence of relevant dissimilarities between the two. Knowing this can help us differentiate between panic-inciting and thought-inducing comparisons.

From your perspective, where is AI technology going and what are your predictions for it in the coming years/decades?

I think there is a good chance that AI technologists will be wrong when they speculate in response to this question, and they are in a better position to be right about it than I am. So I suspect that my weighing in on this question is worth less than nothing.

What do you think are the big ethical questions we should be asking about AI technology? 

Lots of the ones that we’re already asking: questions about the future of work (and meaningful work); questions about automation’s potential use (and misuse) in warfare; questions about how to deal with potentially higher-than-average rates of replacement in professions (and who bears the costs for retraining); questions about extinction and catastrophe; questions about moral status and consciousness; questions about the value of AI-created art and literature and its debt to human creators; questions about the potential of and proper response to existential risk; questions about bias and fairness in distributive domains; questions about free speech and censorship; questions about accountability when things go wrong; questions about integrity and the value of education; questions about privacy; questions about security (national and personal); questions about transparency; questions about public health and other improvements to human well-being; questions about identity verification, authenticity, disinformation, forgery, etc. etc. etc. etc.

"Groves: Are you saying that there's a chance that when we push that button... we destroy the world?
Oppenheimer: The chances are near zero...
Groves: Near zero?
Oppenheimer: What do you want from theory alone?
Groves: Zero would be nice!"

Exchange between General Leslie Groves and J. Robert Oppenheimer in the 2023 film Oppenheimer.

I don’t think, in short, that we’re asking the wrong questions. But we often fail to consider tradeoffs between the very many things we care about. For example, suppose we can have a diagnostic tool that is optimally reliable but not transparent: how willing should we be to trade the one value against the other?

We often fail to do good, careful comparisons between the status quo before technological change and the status quo after it, holding the latter to a standard that the former couldn’t survive either. For example, before the advent of credit scores, people seeking loans were evaluated on the basis of all sorts of factors (e.g., place of residence, race, sex, etc.). The new way is biased, but so was the old way. Which is actually better?

We often fail to consider the strategic context in which the development of technologies is embedded. For example, suppose that something is risky or dangerous. We can control our (our firm’s, our nation’s) development of the technology, but we cannot effectively control the behavior of others. How does this fact change what we ought to do?

We are not often good at distinguishing between circumstances of risk and circumstances of uncertainty. Whereas risk invites the comparison of strategies based on the product of their outcomes’ known utilities and their known probabilities, circumstances of uncertainty are those in which the probabilities, or even some of the outcomes, are unknown. When we face uncertainty, the principle to maximize expected utility (beset with well-known ethical problems even under the tractable situation of risk) seems to break down entirely. What principle should we use instead?

We are often bad at anticipating the mismatch between the intent behind regulatory proposals and their actual effects. For example, policies designed to protect consumers often harm them by crushing competitors and favoring incumbent firms, or by impeding the development of better technology. How can we avoid these unfortunate cases of unforeseen harm?
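
To make the risk/uncertainty distinction concrete, here is a minimal sketch, added for this write-up rather than drawn from Dr. Messina’s own examples, using purely hypothetical numbers: under risk, where probabilities are known, one can maximize expected utility; under uncertainty, where they are not, a cautious rule like maximin (pick the option whose worst case is best) is one commonly discussed alternative.

```python
# Minimal sketch (hypothetical numbers, for illustration only):
# expected-utility maximization under risk vs. a maximin rule under uncertainty.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs with known probabilities."""
    return sum(p * u for p, u in outcomes)

def maximin_choice(options):
    """options: dict of option name -> list of possible utilities (no probabilities).
    Choose the option whose worst case is best."""
    return max(options, key=lambda name: min(options[name]))

# Under risk: probabilities are known, so expected utility is well defined.
under_risk = {
    "deploy": [(0.95, 10), (0.05, -100)],  # expected utility = 4.5
    "hold":   [(1.00, 0)],                 # expected utility = 0.0
}
best_under_risk = max(under_risk, key=lambda name: expected_utility(under_risk[name]))

# Under uncertainty: same stakes, but no credible probabilities to plug in.
under_uncertainty = {
    "deploy": [10, -100],
    "hold":   [0],
}
best_under_uncertainty = maximin_choice(under_uncertainty)

print(best_under_risk, best_under_uncertainty)  # -> deploy hold
```

Note that the two rules can recommend different options for the very same stakes, which is part of why the choice of decision principle matters once reliable probabilities are unavailable.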

Better answering the questions we’re already asking by correcting these common blindspots is, in my estimation, our best chance at getting things right. Even then, though, we might fall badly short. New questions will arise as we gain experience with new technologies; problems, and their precise shape, are often hard to see before the technologies are deployed, and even careful testing is imperfect.

What are your major concerns and hopes for AI technology currently and in the future? 

I think people have made a credible case that two distant possibilities are worth taking seriously: that AI technology brings us much closer to utopia (generating massive wealth and peace, affording us respite from the least meaningful kinds of work) and that it brings us much closer to dystopia (by courting extinction or servitude or mass social uprising or dangerous war). There is wisdom in the maxim that it is better to be overly optimistic and wrong than overly pessimistic and right, I guess. But one could also avoid excess altogether. Myself, I suspect that there will be some surprises and hiccups and economic realignment, as there often are after technological shifts, but that we will learn to adjust and that things will be less different than the anxious or hopeful among us imagine. I hope for the best (and worry about the worst), but I expect (in line with history) that we’ll muddle through as we tend to do, with a general and non-linear tendency to get better over time. As above, though, I am neither an oracle nor a technologist.

Are there any common fears and anxieties that you feel are unfounded? Anything people don’t seem to be concerned about that they should be aware of?

A few years ago, it came to light that my grandmother was nervous about using the self-cleaning function on her oven; she feared that doing so might start a fire. She was told her fears were unfounded. She listened. 

Her kitchen caught fire with the first use.

What’s the lesson? 

Attempt 1: Don’t tell people their fears and anxieties are unfounded, because you might be wrong, and then you’ll be to blame.

But of course, my grandmother’s fears were unfounded. She was afraid of a technology that was, by and large, safe. And technologies, even those that are quite safe, can cause problems. Sometimes the result is a catastrophe.

Attempt 2: Sometimes low-probability events eventuate, bringing with them the bad outcomes that we were able to anticipate. When they do, those who pointed out their low likelihood need never feel guilty.

But of course, sometimes our estimation of the likelihood of an event is just a guess, informed by our own risk tolerances and total life experiences. Sometimes further inquiry will allow us to uncover something like a reliable estimate of a far-off future event’s probability. But other times it won’t. 

Attempt 3: When people are worried, listen. When you can, help them think clearly about expected value by bringing reliable probability estimates to bear and help them think about insuring against risk. When you can’t, help them think about what to do in contexts of uncertainty. 

In short, no: given all of the unknowns, I don’t think these anxieties are unfounded. There are many anxieties that we might have if we had a better view of the possibility space. But rarely in history would we have done better by allowing our anxieties to dictate the course of our actions. Nuclear technologies remain a question mark. But we seem to be muddling through.

"They won't fear it until they understand it. And they won't understand it until they've used it. Theory will take you only so far."

J. Robert Oppenheimer in the 2023 film Oppenheimer.

While the far-off implications of AI technology are fun to think about, the major short-term complication in my view will be how to distinguish between fact and fiction, especially during the upcoming election. AI is improving rapidly and it is easier than ever to impersonate someone or make it out that they said or did something that they did not say or do. This is to the benefit of ideologues and fraudsters and I hope that we’ll spend a good amount of time in the coming months thinking about how to strengthen our identity and authenticity systems against potential breaches – more time than thinking about existential risk and the advent of Artificial General Intelligence. It’s good news, in my view, that many of the big actors in the AI space are taking steps to explicitly disclose or mark AI-generated content: https://www.reuters.com/technology/openai-google-others-pledge-watermark-ai-content-safety-white-house-2023-07-21/ 

Whether their voluntary actions will suffice or bad actors will be able to find ways of removing the watermarks is an interesting question that history will answer soon enough.

Why are these questions important to ask? How does an education in Philosophy better prepare students, scholars, and scientists to approach major cultural moments like this? 

There will be others who come after us, and they will inherit the world that we build, as we have inherited the world built by those who came before us. Many of us hope (or ought to hope) to leave the place at least a little better than we found it. Failing to ask questions about the kind of future our actions and institutions are likely to generate leaves this hope in the hands of chance, even if chance will play an outsized role regardless of how we think and what we do.

Karl Popper once wrote that “all men and all women are philosophers.” He meant, I guess, that each of us takes a stand on philosophical questions, more or less explicitly. I think he was right about that. To me, the value of an education in philosophy is to make explicit the various stands that we take on philosophical matters in the natural course of our lives, stands that we take often without knowing that we’re taking them. Doing so helps us understand what we value and what others value and have valued, and why.

Such self-understanding can help us reflect on our individual and collective values, rather than merely taking them for granted. Bertrand Russell observes that this kind of critical reflection frees us from the “tyranny of custom” and from the prejudices of our upbringings. It can help us become more consistent thinkers and valuers. And it can enrich our imaginations as to the ways that the world might be. This is all crucial when we are thinking about the kind of future we want to play a part in realizing. After all, the answer is in part determined by what we value and ought to value.

That’s already a big payoff. But it would be doing a disservice to philosophy to pretend that its usefulness stops there. Philosophers have also developed a sophisticated toolkit for evaluating arguments, for thinking about risk and uncertainty, and for thinking about how to evaluate changes in our social worlds. These tools are important for distinguishing between good arguments for doing and forbearing, on the one hand, and unhelpful (though often powerful) rhetorical ploys to manipulate us into doing things that we may have no reason to do, on the other. The tools in this kit can also help those in the positive sciences employ their own distinctive methods to more normatively attractive ends. If we want to respond well to the challenges that we face, this kind of cross-disciplinarity is indispensable.

What are you currently teaching? What kinds of classes can students take with you? What have you been working on lately on the research side?

This Spring, I’m teaching two sections of SCLA 102 (Cornerstone for Business). I also regularly teach PHIL 208 (The Ethics of Data Science), PHIL 219 (Philosophy and the Meaning of Life), and various classes in political philosophy (and its history). As for research, I am pleased to report that my first book, Private Censorship, has just been published with Oxford University Press. In addition to my work on free speech, I write on issues of property, political legitimacy, and deservingness. As core faculty in Purdue’s Governing Responsible AI Lab (GRAIL), I spend time thinking about misinformation, AI ethics, and regulatory approaches to emerging technologies. If thinking about these kinds of issues sounds interesting to you, look out for programming and opportunities to get involved.
