Friday, July 10, 2009

WMD Typologies

Draft for Feedback

The words we use to classify weapons of mass destruction have done much harm to public consideration of emerging catastrophic risks.

Good WMD mitigation policy does not turn on whether a weapon is nuclear, biological, or chemical. Worse, the current classification is not merely unhelpful; it is dangerously misleading. How often have policy analyses commingled a non-contagious biological weapon such as anthrax with a contagious BW such as smallpox, obscuring the distinct strategic threat each poses?

Bill Joy's seminal 2000 Wired essay "Why the Future Doesn't Need Us" contained the key to a policy-focused typology. Expanding on his "self-replication" distinction, we can understand WMD threats more clearly by distinguishing both between contained and self-replicating weapons and between selective and non-selective weapons:

Contained Non-Selective (CNS)
Contained Selective (CS)
Self-Replicating Non-Selective (SRNS)
Self-Replicating Selective (SRS)
...plus two special self-replicating cases:
Self-Replicating, Lengthy Prodrome (SR-LP)
Self-Replicating, Gradually Incapacitating (SR-GI)

A contained WMD is one which affects a specific (knowable) area for a specific (knowable) time period. Examples include nuclear weapons, all existing chemical weapons, and non-contagious biological weapons such as anthrax. The opposite of a contained WMD is a highly transmissive, contagious disease.

A contained non-selective WMD is one with which the potential deployer cannot effectively select who is killed or incapacitated. The best example is a nuclear weapon, which kills or sickens everyone in a given area. Selectivity, however, is often not a simple binary concept. For example, a specific nuclear weapon has a reasonably quantifiable blast/fire zone, and a potential deployer might know that prepared people in a defined area outside that zone can prevent radiation illness with certain precautions (special equipment and/or iodine pills, etc.).

A contained selective WMD allows the potential deployer to effectively select who is killed or incapacitated. One example is a chemical weapon which disperses over a short time and against which selected persons could wear preventive respirators or filtering gas masks. Another example would be a genetically engineered anthrax strain which the deployer can effectively vaccinate against or treat but the intended victims cannot. Literary examples include Frank Herbert's novel 'The White Plague' and the plagues of the book of Exodus.

A self-replicating WMD is one which is likely to expand beyond its original impact area through person-to-person transmission or some equivalent process. One example is the smallpox virus, which is contagious enough to likely cause widespread transmission beyond the original victims. Two documented historical examples are the catapulting of plague-infected corpses into the besieged Crimean city of Caffa in 1346, which may have helped spread the Black Death to Europe, and the British distribution of smallpox-infected blankets during the 1763 siege of Fort Pitt, intended to decimate the Native Americans besieging it.

As above, a self-replicating selective WMD would allow the potential deployer to select which persons are killed or incapacitated by the self-replicating weapon. An example would be a contagious disease with a vaccine or treatment available to the perpetrator(s) but not to the intended victims.
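The two-axis typology above can be summarized mechanically. The following sketch is purely illustrative (the class and field names are invented for this post, and the example classifications simply restate the draft's own discussion):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WMDProfile:
    """One weapon placed on the two axes of the typology."""
    name: str
    self_replicating: bool  # contained (False) vs. self-replicating (True)
    selective: bool         # can the deployer choose who is harmed?

    @property
    def category(self) -> str:
        spread = "Self-Replicating" if self.self_replicating else "Contained"
        scope = "Selective" if self.selective else "Non-Selective"
        return f"{spread} {scope}"

# Example placements, following the discussion above.
examples = [
    WMDProfile("nuclear weapon", self_replicating=False, selective=False),
    WMDProfile("non-contagious anthrax", self_replicating=False, selective=False),
    WMDProfile("anthrax strain w/ deployer-only vaccine", self_replicating=False, selective=True),
    WMDProfile("smallpox", self_replicating=True, selective=False),
    WMDProfile("contagious agent w/ deployer-only vaccine", self_replicating=True, selective=True),
]

for w in examples:
    print(f"{w.name}: {w.category}")
```

The special SR-LP and SR-GI cases would simply add a third dimension (prodrome length or incapacitation rate) to the self-replicating branch.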

Two special cases are noteworthy because their tactical advantages might increase consideration of a 'preventive first strike' strategy. A long prodrome (the interval between when a carrier first becomes contagious and when visible symptoms appear), especially one longer than eighteen months, could be highly destabilizing: it could lead a potential deployer to believe (correctly or otherwise!) that they could infect virtually everyone before preventive biocontainment or therapy development had even begun. Similarly, a gradually incapacitating WMD could create the perception that a deployer could render victims (universally?) passive or incapable before awareness of the attack, possibly precluding any effective response.
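The destabilizing arithmetic of a long prodrome is easy to see. A rough back-of-envelope sketch, in which the doubling time and population figures are my illustrative assumptions rather than estimates from any source:

```python
# Illustrative only: how far could silent spread run inside an
# eighteen-month prodrome before the first visible symptoms?
prodrome_days = 18 * 30        # ~18 months, the draft's threshold
doubling_time_days = 14        # ASSUMED doubling time, purely hypothetical
doublings = prodrome_days / doubling_time_days

# From a single index case, infections grow roughly as 2**doublings
# (ignoring saturation, which only matters once nearly everyone is infected).
infected = 2 ** doublings
world_population_2009 = 6.8e9

print(f"~{doublings:.0f} doublings before any visible symptoms")
print("silent spread could exceed world population:",
      infected > world_population_2009)
```

Under these assumed numbers the unchecked exponential passes the world's population well before the prodrome ends, which is exactly the perception that could tempt a deployer toward a first strike.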

How realistic are these dangers? The U.S. National Academies' 2004 Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology unanimously concluded that "these (biotech capability) categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future." The seven capabilities cited were to:
1. “Render a vaccine ineffective.”
2. “Confer resistance to therapeutically useful antibiotics or antiviral agents.”
3. “Enhance the virulence of a pathogen or render a nonpathogen virulent.”
4. “Increase transmissibility of a pathogen.”
5. “Alter the host range of a pathogen.”
6. “Enable the evasion of diagnostic/detection modalities.”
7. “Enable the weaponization of a biological agent or toxin.”

Looking forward to a near future in which tens of thousands (millions?) possess the above capabilities, what might traditional game theory suggest to a potential WMD developer about such risks? If you yourself had a contagious biological weapon with a long prodrome, or a self-replicating gradual incapacitator, might you not worry that one or more others could shortly develop and deploy such a weapon? Might someone (incorrectly?) conclude that a preventive first-strike deployment was the less risky strategy in such an unending ten-thousand-player (million-player?) prisoner's dilemma?
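The dilemma's structure can be made concrete with a minimal payoff sketch. All the numbers below are invented for illustration; the point is the structure, not the magnitudes: when striking first beats waiting no matter what the others do, striking becomes each player's dominant strategy, even though mutual restraint beats mutual escalation for everyone.

```python
# Hypothetical payoffs for one player in an n-player first-strike dilemma.
# Keyed by (my_move, does_any_other_player_strike) -> my outcome.
payoff = {
    ("wait",   False): 3,  # mutual restraint: best collective outcome
    ("wait",   True):  0,  # I waited while someone else struck: worst for me
    ("strike", False): 4,  # I struck first, nobody else did
    ("strike", True):  1,  # mutual escalation: bad, but not my worst case
}

def best_response(others_strike: bool) -> str:
    """The move that maximizes my payoff given the others' behavior."""
    return max(["wait", "strike"], key=lambda m: payoff[(m, others_strike)])

# "strike" is the best response whether or not anyone else strikes,
# yet everyone waiting (3 each) beats everyone striking (1 each).
print(best_response(False), best_response(True))
```

This is exactly the prisoner's dilemma logic the paragraph above gestures at: individually rational first strikes producing a collectively catastrophic outcome.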

This is a working draft and your feedback is sincerely appreciated. Especially appreciated are typology improvements and suggestions of where my ideas need clarification.

As noted in posts below, my primary question is whether emerging catastrophic threats and the resulting public panic might lead to greatly diminished human rights. Policing biotechnology may not be possible in our 2009, but could someone not then use BW fear to justify Orwell's 1984? What is the range of likelihoods that this issue will singularly define our children's future? Is a 0.5% risk one which we are entitled to ignore at their peril? Who has honestly considered this scenario and still believes that the risk is below 10%?
