The ethical consequences of academic research, particularly its capacity to induce insecurity, are an old topic of discussion, and one that has affected various disciplines in different ways. Generally speaking, the more abstract the output of a given discipline, the less frequently it comes up in debates about what limits academics should put on their research activities.
Let's start with the natural sciences. Ethics is such a huge topic in the highly empirical fields of medicine and biology that when Europe says "Ethics Committee" it means "medical ethics", and there are entire NGOs devoted to the ethics of biological research. Given their potential to inform weapons development, chemists and physicists are also generally fairly aware of the security-related ethical consequences of their work. Einstein famously regretted encouraging the American government to pursue atomic weapons research, and Robert Oppenheimer, who led that research in the Manhattan Project, could hardly bear to remember his role in creating the most destructive of weapons.
[youtube 26YLehuMydo]
Ethics tip for dummies: become not Death, destroyer of worlds
Social science, perhaps because it deals directly with people as research subjects rather than things or non-human organisms, is probably more sensitive in general to ethical and security questions. There is a long-standing debate in IR and political science generally about whether morally neutral social scientific research is even possible, and if not, what principles ought to guide ethically prudent scholarship. Economists typically freeze as soon as the discussion touches what they like to call "distributional questions", and anthropologists have long been concerned with their own version of Star Trek's Prime Directive. The question is basically to what extent cultures remote from the modern world of graduate students and citation indexes are necessarily affected by the anthropologists who come to study them, and what the rationale for such interference might be.
Although there was some discussion during the Cold War about the implication of political science in targeting decisions and in turning Vietnam's ugly instability into a full-scale internationalized war, the ethics of social science in security questions have re-emerged as an especially salient issue in the last decade and a half. A big impetus for this was the "Human Terrain System", a project initiated by the Pentagon to compensate for its ignorance about the dynamics of certain societies, like Afghan "tribes" (which often just means people who live far away and neither have nor want Apple products) and other non-combatant groups who might not see the US Army as liberators in every instance. The American Anthropological Association declared the HTS to contravene its code of ethics, and the ever insightfully wacky James Der Derian made a semi-famous documentary about it.
[youtube 2ZOYjok4BPs]
Social science & security ethics goes Hollywood
A couple of disciplines have had a free pass, though, when it comes to analysing their security ethics, and they tend to be the most abstract, like metaphysics and math. Very few would consider the question of whether Being as such is separable from beings (Das Sein or Da-sein? for you Heideggerians) to be really relevant to security ethics, so metaphysics gets a pass. Math is equally abstract, if not more so, because numbers are entirely abstract without 'real' being, and to do really interesting math, you have to abstract away from 'real' numbers anyway. This has allowed math as a discipline to escape questions of ethics and security. But no more! New Scientist recently published an opinion piece arguing that SigInt agencies have become popular and willing employers of mathematicians, but also that these institutions run programs that are frequently illegal and almost always ethically questionable. The security aspect is, naturally, that the justification for these ethically dubious programs is to protect a given western society, or western civilisation in general, from the scourge of terrorism - a topic discussed extensively on this blog and elsewhere. As the author describes the problem:
...Mathematics clearly has practical applications that are highly relevant to the modern world, not least internet encryption.
Our work, then, can be used for both good and ill. Unfortunately for us, it is the latter that is in the public eye. Already unpopular for our role in the banking crash, we now have our largest employer running a system of whole-population surveillance that even a judge appointed by George W. Bush called "almost Orwellian".
So mathematicians must decide: do we cooperate with the intelligence services or not?
Mathematicians seem unsure of how to deal with this ethical responsibility. Tom Leinster, author of the New Scientist piece, suggests that, at a minimum, the problem should be discussed. He also mentions a suggestion by a University of Chicago professor to treat collaborating mathematicians the way KGB informers were treated in the Soviet Union: as social pariahs derided by others in their communities.
This is an interesting spectrum of options for dealing with collaborators. It is doubtful whether a mere discussion would yield any real consensus. The discipline of IR also has many members conducting what a colleague of mine illustratively calls 'Darth Vader research', i.e. research in service of the empire, with planet-endangering weapons as pet projects. There is debate about this sort of thing, but it happens mostly within the respective camps: the more peace- and emancipation-oriented academics discuss among themselves just how evil the others are, while the power- and survival-oriented scholars discuss among themselves just how naive and irrelevant the others are. It might be possible to develop codes of conduct within each camp, but there seems to be no common denominator on which they could all agree.
The suggestion of blacklisting the collaborators also seems a little shortsighted. After all, terrorism does exist, and terrorists are, by definition, violent sorts who don't care much for legal recourse. What are they likely to do if they can easily google the names of those who know about the tools used to spy on them or the encryption protocols of those who store information about them, or simply the names of people whose deaths would serve propaganda purposes? The possibilities start with blackmail, run through torture and end with beheadings. Ratting out your fellow citizens might be dirty and distasteful, but it's not necessarily deserving of slow and deliberate fingernail removal. In the Soviet Union, the KGB was around to protect its own informants, at least as long as they were useful. Who's going to protect mathematicians for the rest of their careers from the consequences of poor decisions made to finance a graduate degree? To the extent that punishment or ostracism is called for, it should be proportional to the crime, no?
Even developing a code of conduct is tricky. Academic discourse requires a culture of debate and the freedom to propound contrarian, unpopular or contextually dangerous ideas, and this is something most scholars could agree on regardless of whether they see their products as interventions in narratives or as objective knowledge. One could argue that there is a difference between working for institutions that violate legal rules and moral principles and merely producing research that could be instrumentalised by them, but it's not clear that the more creative scholar who comes up with a very original idea is less implicated in the ethical breach than the less creative one who just translates it for the institution and makes it handily applicable. In that sense, a code of conduct is a kind of soft censorship, and though censorship might be justified in extreme cases like incitement to violence, it is also poison to academic discourse.
I'm not sure how mathematicians should deal with colleagues who sell out everyone's rights, but the fact that mathematicians are now worrying about this sort of thing does reinforce an intuition of mine about social science research in general and IR in particular: there is no ethically neutral research, and this has something to do with security. Every fact, every theory and every narrative can be instrumentalised by some political worldview or other, and every such worldview seems to privilege some over others and to render institutions based on other worldviews less secure. To take a famous example, the discourse of the democratic peace, which is more of an empirical regularity than a theory, was taken by unenlightened colleagues as a reason to forcibly impose democratic institutions on non-western peoples. If mathematical discoveries about assessing the randomness of strings and IR findings about the frequency and severity of conflict by regime type can be used to oppress people, what hope do less anodyne and more theoretical statements in the social sciences have of avoiding these kinds of ethical dangers, and what defence will there be in a culture that fetishises security and threats?
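(A purely illustrative aside, not something from Leinster's piece: "assessing the randomness of strings" presumably points at statistical tests of the kind used, among other things, to tell well-encrypted or compressed data apart from structured plaintext. A minimal sketch of one such test, the classic frequency or "monobit" test, might look like the Python below; the function name and example strings are mine, and real suites combine many such tests.)

```python
import math

def monobit_test(bits: str) -> float:
    """Frequency (monobit) test: returns a p-value for the hypothesis
    that the bit string is random. Very small p-values indicate a bias
    toward 0s or 1s, i.e. obvious structure. Illustrative sketch only."""
    n = len(bits)
    # Map 1 -> +1 and 0 -> -1 and sum; a random string should hover near zero.
    s = sum(1 if b == "1" else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    # Two-sided p-value via the complementary error function.
    return math.erfc(s_obs / math.sqrt(2))

print(monobit_test("1" * 900 + "0" * 100))  # heavily biased: p-value near 0, "not random"
print(monobit_test("01" * 500))             # perfectly balanced: passes this particular test
                                            # despite obvious structure, hence batteries of tests
```

Even a ten-line statistical tool like this is already dual-use, which is rather the point of the argument above.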
The wake-up call for the mathematicians should serve as a reminder to the social sciences. And be warned, metaphysicians, you'll be next!