Wednesday, October 17, 2012

A Brave New World of War: Cyber War & Defense in Depth

First published on Huffington Post, October 16, 2012.
http://www.huffingtonpost.com/heather-roff/a-brave-new-world-of-war-_b_1968520.html




Last week the U.S. Secretary of Defense, Leon Panetta, warned of a possible cyber "Pearl Harbor" attack on the U.S. He called attention to a new battle space: cyberspace.
This speech appeared to have several targets, and we can draw several conclusions from it. First, and easiest to discern, is that Panetta is rousing the U.S. Congress to take concrete action and pass into law rules and regulations governing the sharing of information between private enterprises and the government. Many might recall the protests this past spring over the Cyber Intelligence Sharing and Protection Act (CISPA), and Panetta's clarion call appears to hark back to those same issues. Indeed, this is probably why he explicitly notes that the President is likely to issue an Executive Order should Congress fail to act.
The second target is the American, and perhaps international, audience. Amid much speculation about the cyber threats facing the U.S. and its capacity to respond, it was high time someone higher up actually addressed the subject. While the White House has certainly put forth documentation on its position regarding cyber security, little has been forthcoming from the defense community. Panetta's speech therefore unveiled many more specifics than the U.S.' International Cyber Security Strategy, which for the most part aims at such lofty goals as providing for the free flow of information while simultaneously ensuring the security of networks.
The final, and to me the largest, target is the potential cyber adversary. Since much pertaining to cyber capability and warfare is classified, Panetta's decision to show the U.S.' hand is telling. Allow me to explain. Much ink has been spilt over the "attribution problem": cyber attacks are very difficult to trace with absolute certainty, so attributing responsibility to one or more parties is more of a guessing game than anything. Because the issue of attribution calls into question whether we can know with 100% certainty whether an attack came from, say, Russia, China, Iran, Liechtenstein, or the Moon, any attempt to either retaliate in self-defense or punish for deterrent effect will be problematic at best. What if we picked the wrong state? What if the cyber-warriors were so talented that they made it appear that China was attacking when it was really Botswana? We might end up attacking an innocent third party, thereby becoming an aggressor ourselves. But Panetta's speech clears away the uncertainty surrounding the attribution problem. He stated that the "United States has the capacity to locate [the aggressors] and to hold them accountable for their actions." Wow. That is some serious stuff.
What it means is that the U.S. has very good cyber forensic capabilities and has probably secured enough consensus from private internet providers to share critical information regarding cyber attacks. It also means that the U.S. will not only know who attacked it, but will use any means it sees fit either to preempt the attack or to deter potential attackers in the future. Both cyber and traditional (sometimes called 'kinetic') warfare are on the table. Most telling still is that the U.S. has marked out three areas where it will act if provoked or attacked: the nation, the national interest, and allies.
Acting to defend the nation is rather unsurprising. Acting to defend national interest(s) is also, given U.S. military and foreign policy history, unsurprising. What does seem surprising, though, is the bit about the allies. The potential here is that if a North Atlantic Treaty Organization (NATO) ally is attacked by a cyber weapon, then the U.S. might retaliate with either cyber or traditional weapons on the ally's behalf. This statement appears to contradict, or at least militate against, NATO's handling of the 2007 cyber attacks against Estonia, which the alliance did not treat as an armed attack triggering collective defense.
All in all, Panetta's statement is a clear warning: cyber war is here and the U.S. is prepared to enter the fray with whatever means necessary. The question for us now is: what should we do about it? Certainly public rules of engagement should be made available, but more than that, transparency in the policy and governance processes is also a must. It is a must because the greatest weapon a cyber warrior has is a weakness in computer code. If there is no weakness, then there can be no attack. If we make cyber security a common good -- governed by the commons -- then we have more minds at work securing networks, and this can only be done outside of the shadows.

Friday, September 28, 2012

The DoD's New Moral Code for Autonomous Weapons



First published on the Huffington Post:  
http://www.huffingtonpost.ca/heather-roff/the-dods-new-moral-code-f_b_1910608.html 

Recently, the United States Department of Defense issued a report on increased autonomy in DoD weapons systems, examining the roles, problems and benefits that will come with the expanded use of self-directed weapons.
We are all familiar with the U.S.'s reliance on "drones" for surveillance and reconnaissance missions, as well as their use in targeting and killing suspected terrorists in countries like Yemen, Pakistan and Afghanistan. What is not typically noted is that the current generation of these unmanned systems does not present any new legal or ethical problems.
Distanced killing or surveillance is functionally no different from sending a Tomahawk missile from an aircraft carrier or snooping from satellites in space. Questions of how these systems are used to kill American citizens abroad, or suspected terrorists within another country's borders, are, of course, a separate matter. This most recent report, however, is not about the current technology, but about the proposed trajectory for automation and the DoD's attempts to assuage the fears of those of us following its course.

Unsurprisingly, the DoD wants to enlarge the U.S. military's reliance on autonomous (i.e. self-directed) weapons in conflict, to advance the autonomous capabilities of existing weapons, and to create new autonomous systems. What is surprising is that the DoD realizes that the public and the weapons operators are uncomfortable with the goal of increasing autonomy.
So its new tactic is to shift the terms of the debate. It now claims that traditional definitions of autonomy as "self-directed" are "unhelpful," and that "autonomy is better understood as a capability (or set of capabilities) that enables the larger human-machine system to accomplish a given mission." What the DoD is doing is shifting the discussion from the increased autonomy of weapons to the "mission" and "mission autonomy" (whatever that means). Previous attempts by various service branches to roadmap future levels of autonomy in weapons systems are, according to this new report, "counter-productive," as they only heighten the Terminator-style fears.
Further still, the DoD claims that:
"casting the goal as creating sophisticated functions (i.e. more self-directedness) -- rather than creating a joint human-machine cognitive system -- reinforces the fears of unbounded autonomy and does not prepare commanders to factor their understanding of unmanned vehicle use that there exist no fully autonomous systems, just as there are no fully autonomous soldiers, sailors, airmen or Marines."

This position presents a nice little loophole with which to stop debate about increased autonomy in weapons systems. The critic says, "We worry about attributing responsibility to a weapon that decides to fire on a target by itself." The DoD responds, "There is a human-machine cognitive system, so don't worry, there is a human there!" But the question remains: where? How far removed is this person? The commander? The General? The President?
Moreover, as the above quote illustrates, this semantic sleight of hand blurs the lines of moral and legal responsibility for killing in war, given that the DoD believes that no soldiers, sailors, airmen or Marines are fully autonomous. This only makes sense if we work from a definition in which the mission is the primary focus and autonomy is defined purely in terms of the "capability" of fulfilling said mission.
Yet this is not what is usually meant by autonomy in everyday or philosophical use, nor what millennia of moral and legal systems have taken it to mean. Traditionally, we think of soldiers, sailors, airmen and Marines as autonomous because they are persons. That is, they have the capability for self-directed action. When they use this capability to choose their own course of action and, say, break the laws of war, we hold them accountable for their actions (legally as well as morally).
The idea that these persons are not fully autonomous says, first, that they cannot be held fully accountable. But second, it implies that the systems the DoD wants to exploit are also (if we read between the lines) incapable of bearing responsibility. We are not concerned with the system, or even the software designer or the commander; we are concerned with the "mission." A mission is not a person, it is a thing, and things cannot be held morally responsible. It is like saying that you want to hold your car responsible for breaking down on the way to work. You wouldn't say that your car "wronged" you, and you wouldn't seek to punish your car.

The result of all of this is that the DoD is attempting to side-step questions of morality and responsibility. It does not appear to endorse the programming of weapons with "ethical governors," that is, rules that would prohibit these weapons from, say, targeting a civilian. Rather, it is endeavouring to redefine the notion of autonomy, and this confuses an already convoluted topic.
Case in point, the report further states:
"Treating unmanned systems as if they had sufficient independent agency to reason about morality distracts from designing appropriate rules of engagement and ensuring operational morality. Operational morality is concerned with the professional ethics in design, deployment and handling of robots. Many companies and program managers appear to treat autonomy as exempt from operational responsibilities."

Are we concerned with weapons obeying the laws of war (and morality) as we traditionally think of them, or are we concerned with software designers upholding a (rather nonexistent) professional ethics in design? By the by, such a professional ethics would basically amount to the software designer taking precautions against knowingly designing or fielding a product that would cause harm.
Now, these weapons are designed to harm, but the type of harm to be avoided would be negligent harm. Such a position on the ethics of autonomous systems not only reduces any questions of morality or responsibility to tort law and issues of liability, but has the potential to divorce the idea of morality from the discussion entirely. For instance, we might say that there is a professional ethics amongst a band of thieves, but we would not say that the activities of a band of thieves are moral. To claim that the DoD, and thus the U.S. military, should focus on "operational" responsibility is like claiming that the band of thieves ought to focus on not ratting each other out.
Of course, we could be charitable to those inside the Beltway and claim that the DoD is sensitive to issues of ethics, and that its claim that operational morality is important addresses the point. Those in charge of the design, deployment and handling of robots are the ones who must act ethically, and who will be held accountable. But this just kicks the can down the road again. It puts us back to our original question of who is actually responsible, and how far removed that person is from the deployment of weapons that have the potential to make their own targeting decisions. This is so because, if we take the DoD at its word that not even persons are fully autonomous, then we are again back to the problem of definition and whether anyone can ever be held responsible for the use (or abuse) of these weapons.
Ultimately it appears that the DoD is not only going to try to exploit every opportunity to use unmanned systems, but is also implicitly skirting the legal and moral questions raised by the deployment of such weapons: redefining what "autonomy" actually means and relying on "codes" of ethics that are not what we traditionally think of as ethical. It amounts to political prestidigitation, with the DoD rewriting ethical code on more than one level.
*Photo of "BigDog" uploaded from Wikipedia

Friday, September 7, 2012

Who is Responsible for Syrian Refugees?




This post first appeared on Huffington Post:
http://www.huffingtonpost.ca/heather-roff/syria-refugee_b_1850425.html

Recently, there has been much discussion about establishing a "safe haven" within Syria's borders to protect the growing number of refugees fleeing the country's civil war. In fact, Turkey recently pleaded before the U.N. Security Council to support such a move; unfortunately it received little backing.
Even most Western powers were cautious, citing "considerable difficulties" with any such plan. Yet the sad fact remains that Turkey and other neighbouring countries are shouldering a heavy burden. Already 80,000 refugees have poured into Turkey's refugee camps, with an estimated 4,000 arriving daily and 10,000 more still waiting along the frontier. The question becomes, though: what happens when the neighbouring countries reach an unsustainable capacity?
Turkey claims that it can only handle 100,000 refugees in total, while neighbouring Jordan estimates that 81,000 refugees have already crossed its borders. Can we hold that these states have a duty to accept more fleeing Syrians? This is a tough call, as the international community is not helping the situation in any concrete way.
If we look to, say, the philosophy of Immanuel Kant, and his arguments about the necessary requirements for peaceful relations amongst states, we see that one of the prerequisites for such peace is what he terms "a universal right of hospitality." What does this mean? Well, generally it means that all persons have a right to visit various countries and associate with other people. But Kant's caveat is this: you cannot turn a person away if it means his certain destruction. In other words, refugees that face death in their own country have a right -- a moral right -- to go elsewhere.
It is not clear whether Turkey, Jordan, Lebanon and Iraq have fulfilled their duties by allowing the Syrian people refuge within their borders. What does seem clear, at least in moral terms, is that the international community is manifestly failing in its duty to uphold the Syrian people's universal right of hospitality. The U.N. Security Council's continued obstinacy in undertaking any concrete action only further erodes the moral, as well as the very weak legal, rights that the Syrian people have.
But the Security Council is not the only obstacle to protecting the Syrian people. The new UN envoy to Syria, Lakhdar Brahimi, has now publicly stated that military "interference" is not an "available option." We might read Brahimi's statement in one of two ways: either he would not endorse a military intervention, or the Security Council will never pass a resolution authorizing intervention. I tend to believe he intended it the first way, and if that is the case, this presents further problems for protecting the Syrian people. Either way, though, Kofi Annan's successor is reifying the UN's position as an impotent international organization.
Yet the UN is not the only problem. We have another -- the continued reticence of many liberal politicians and pundits to do much more than wag their fingers at Assad. I myself have written that intervention in Syria would not happen the way it did in Libya, but that is not to say that something shouldn't be done. Many are too quick to dismiss enforcing no-fly zones or creating safe havens, claiming that "humanitarian pretexts" cannot hide what amount to ineffectual power plays.
Undoubtedly no-fly zones and safe havens require military power, boots on the ground and sorties in the sky. The question is not whether military might is required, but when or how to deploy it. If we are going to claim that people have human rights, and that the international community is governed by norms, rules or laws, then those laws and rights must have the correlative enforcement mechanism to ensure that they are upheld. Without it, the international legal regime is nothing more than a phantom, and the politicians and pundits who vacillate on the enforcement of such rights perpetuate the illusion of international law and morality.
Until we recognize that "a community widely prevails among the Earth's peoples, [and] a transgression of the rights in one place in the world is felt everywhere," international law and what Kant terms "cosmopolitan right" are merely "fantastic and exaggerated."