Friday, July 10, 2009

WMD Typologies

Draft for Feedback

The words we use to classify weapons of mass destruction have done much harm to public consideration of emerging catastrophic risks.

Good WMD mitigation policy does not turn on whether a weapon is nuclear, biological or chemical. Indeed, the current classification is worse than meaningless; it is dangerously misleading. How often, for example, have policy analyses commingled a non-contagious biological weapon such as anthrax with a contagious one such as smallpox, obscuring the distinct strategic threat each poses?

Bill Joy's seminal 2000 Wired essay "Why the Future Doesn't Need Us" contained the key to a policy-focused typology. Expanding on his "self-replication" distinction, we can understand WMD threats more clearly by distinguishing both between contained and self-replicating weapons and between selective and non-selective weapons:

Contained Non-Selective (CNS)
Contained Selective (CS)
Self-Replicating Non-Selective (SRNS)
Self-Replicating Selective (SRS)
Self-Replicating with a Lengthy Prodrome (SR-LP)
Self-Replicating, Gradually Incapacitating (SR-GI)

A contained WMD is one which affects a specific (knowable) area for a specific (knowable) time period. Examples include nuclear weapons, all existing chemical weapons, and non-contagious biological weapons such as anthrax. The opposite of a contained WMD is a highly transmissible, contagious disease.

A contained non-selective WMD is one with which the potential deployer cannot effectively select who is killed or incapacitated. The best example of this is a nuclear weapon which kills or sickens everyone in a given area. But this selectivity is often not a simple, binary concept. For example, a specific nuclear weapon has a reasonably quantifiable blast/fire zone and a potential deployer might know that prepared people in a defined area outside that zone can prevent radiation illness with certain precautions (special equipment and/or iodine pills, etc.).

A contained selective WMD allows the potential deployer to effectively select who is killed or incapacitated. One example is a chemical weapon which disperses over a short time and against which selected persons could wear protective respirators or filtering gas masks. Another example would be a genetically engineered anthrax strain which the deployer can effectively vaccinate against or treat but the intended victims cannot. Literary examples include Frank Herbert's novel The White Plague and the plagues of the Book of Exodus.

A self-replicating WMD is one which is likely to expand beyond its original impact area through person-to-person transmission or some equivalent process. One example is the smallpox virus, which is sufficiently contagious that it would likely spread well beyond the original victims. Two frequently cited historical examples are the catapulting of plague-infected corpses into the besieged Crimean port of Caffa in 1346 and the British distribution of smallpox-infected blankets to Native Americans during the 1763 siege of Fort Pitt.

As above, a self-replicating selective WMD would allow the potential deployer to select which persons are killed or incapacitated by the self-replicating weapon. An example would be a contagious disease with a vaccine or treatment available to the perpetrator(s) but not to the intended victims.

Two special cases are noteworthy because their tactical advantages might invite consideration of a 'preventive first strike' strategy. A lengthy prodrome, here meaning the period between when a carrier first becomes contagious and when visible symptoms appear, could be highly destabilizing, especially if it exceeds eighteen months: it could lead a potential deployer to believe (correctly or otherwise!) that virtually everyone could be infected before preventive biocontainment or therapy development had even begun. Similarly, a gradually incapacitating WMD could create the perception that a deployer could render victims (universally?) passive or incapable before the attack was even recognized, possibly precluding any effective response.
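
To restate the typology compactly, the sketch below encodes the two axes (containment and selectivity) and the two self-replicating special cases as data. It is only an illustration of the draft typology described above; the class, field names and example assignments are my own and are not meant to be authoritative.

```python
# A minimal sketch of the draft typology: two axes (containment, selectivity)
# plus flags for the two destabilizing self-replicating special cases.
from dataclasses import dataclass

@dataclass
class WMDProfile:
    name: str
    self_replicating: bool              # likely to spread beyond the original impact area?
    selective: bool                     # can the deployer choose who is harmed?
    lengthy_prodrome: bool = False      # contagious long before visible symptoms
    gradually_incapacitating: bool = False

    def category(self) -> str:
        label = "Self-Replicating" if self.self_replicating else "Contained"
        label += " Selective" if self.selective else " Non-Selective"
        if self.lengthy_prodrome:
            label += " (Lengthy Prodrome)"
        if self.gradually_incapacitating:
            label += " (Gradually Incapacitating)"
        return label

# Illustrative assignments drawn from the examples in the text above.
examples = [
    WMDProfile("nuclear weapon", self_replicating=False, selective=False),
    WMDProfile("engineered anthrax with deployer-held vaccine", False, True),
    WMDProfile("smallpox", self_replicating=True, selective=False),
    WMDProfile("contagious agent with deployer-held treatment", True, True),
]
for w in examples:
    print(f"{w.name}: {w.category()}")
```

Whether the special-case flags belong on the selectivity axis or deserve an axis of their own is exactly the kind of typology feedback requested below.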

How realistic are these dangers? The U.S. National Academies commissioned the “Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology”, which in its 2004 report unanimously concluded that “these (biotech capability) categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future.” The seven capabilities cited were to:
1. “Render a vaccine ineffective.”
2. “Confer resistance to therapeutically useful antibiotics or antiviral agents.”
3. “Enhance the virulence of a pathogen or render a pathogen virulent.”
4. “Increase transmissibility of a pathogen.”
5. “Alter the host range of a pathogen.”
6. “Enable the evasion of diagnostic/detection modalities.”
7. “Enable the weaponization of a biological agent or toxin.”

Looking forward to the near future when tens of thousands (millions?) of people possess the above capabilities, what might traditional game theory suggest to a potential WMD developer about the risks? If you yourself had a contagious biological weapon with a long prodrome, or a self-replicating gradual incapacitator, might you not worry that one or more others could shortly develop and deploy such a weapon? Might someone (incorrectly?) conclude that a preventive first-strike deployment was the less risky strategy in such an unending ten-thousand-player (million-player?) prisoner's dilemma?
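
One way to make that worry concrete is to ask how quickly the chance of at least one deployment grows with the number of capable actors. The sketch below is purely illustrative: the per-actor annual deployment probability and the actor counts are assumptions I have chosen for arithmetic, not estimates from any source.

```python
# Illustrative only: if N independent actors each have a small annual
# probability p of deploying, the chance of at least one deployment
# over T years is 1 - (1 - p)**(N * T).  All inputs are assumptions.

def prob_at_least_one_deployment(n_actors: int, p_annual: float, years: int) -> float:
    return 1 - (1 - p_annual) ** (n_actors * years)

for n in (1_000, 10_000, 1_000_000):
    # Assume a one-in-a-million chance per actor per year, over twenty years.
    print(f"{n:>9,} actors: {prob_at_least_one_deployment(n, 1e-6, 20):.2f}")
```

Even with a per-actor probability as low as one in a million per year, the aggregate chance over twenty years runs from roughly two percent with a thousand capable actors to near certainty with a million. It is this aggregate-risk perception, accurate or not, that could make a first strike appear to be the 'less risky' option.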

This is a working draft and your feedback is sincerely appreciated. Especially appreciated are typology improvements and suggestions of where my ideas need clarification.

As noted in the posts below, my primary question is whether emerging catastrophic threats and the resulting public panic might lead to greatly diminished human rights. Policing biotechnology may not be possible in the world of 2009, but could someone not then use BW fear to justify Orwell's 1984? What is the range of likelihoods that this issue will singularly define our children's future? Is a 0.5% risk one which we are entitled to ignore at their peril? Who has honestly considered this scenario and still believes that the risk is below 10%?

Monday, June 22, 2009

Biosecurity and Bruce Ivins

Even if Bruce Ivins was not guilty of the 2001 anthrax mailings, which is itself a considerable leap of faith, much about the Bruce Ivins story remains undisputed fact, and it is a measure of current biosecurity failings.

It is undisputed that Bruce Ivins had USAMRIID authority to work alone with select agents such as anthrax. It is also undisputed that Dr. Ivins had sought psychiatric care and that at least one respected professional colleague had filed a criminal complaint about his behavior. It is also undisputed that the FBI had all of this information and yet took no action. Didn't USAMRIID also have all of this information?

There are reportedly now 14,000 scientists approved to work with select agents in this country alone. The research sponsor with the greatest resources and incentive for maintaining strong biosecurity was USAMRIID, the same sponsor who controlled Dr. Ivins' access! How bad, then, is the biosecurity at the least biosecure sponsor?

We know that a properly enforced two-man rule would likely have prevented Dr. Ivins from misusing pathogens in his lab (whether he did or not). But the two-man rule is opposed as costly. Is it unnecessary or just inconvenient?

As biotechnology continues to develop, even more dangerous pathogens will emerge, either through discovery in nature or from genetic engineering. Must we wait until it's too late to propose appropriate biosecurity standards for these inevitable circumstances? As Oxford's Dr. Nick Bostrom observed, trial and error is not an effective approach to existential risks. As a very concerned parent, I sometimes allow myself sarcastic license.

What might a BL-5 biosecurity standard require? I suggest a "five-eye rule" whereby all work with certain pathogens (such as smallpox or a future highly contagious, fatal disease, but not all select agents) be performed by two unaffiliated researchers inside a lab with 100% video monitoring by a trained remote observer not affiliated with either researcher. Decontamination egress would be controlled by the remote observer. To reduce costs, the in-lab observer and the remote observer could be trained in biosecurity but need not have any post-graduate education.

Ideally, the remote observer would be from a different nation than the researchers, but that level of oversight might need to be part of a 'second round' of reform.

What am I missing? Is the risk truly remote? If there were a 0.1%-per-decade risk of a global pandemic, what would be the dollar value of lowering that risk? Given the global social, economic and political implications of a bio-error pandemic, and the resulting fear of another, it must be in the tens of billions. Is it in the trillions?
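
A back-of-the-envelope expected-value calculation suggests why the answer plausibly runs into the billions. Every input below is an assumption chosen only to illustrate the arithmetic; the damage figure and the achievable risk reduction are mine, not estimates from any study.

```python
# Illustrative expected-value arithmetic; all inputs are assumptions.
p_per_decade = 0.001       # assumed 0.1% chance of a global pandemic per decade
damage = 10e12             # assumed $10 trillion in global economic damage
risk_reduction = 0.5       # assume stronger biosecurity halves that risk

expected_loss = p_per_decade * damage                 # $10 billion per decade
value_of_mitigation = risk_reduction * expected_loss  # $5 billion per decade

print(f"Expected loss per decade:  ${expected_loss / 1e9:.0f} billion")
print(f"Value of halving the risk: ${value_of_mitigation / 1e9:.0f} billion per decade")
```

Even these deliberately modest inputs ignore mortality, the fear of recurrence described above, and the political aftermath, which is why a figure in the trillions is not obviously absurd.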

Your thoughts are welcome. Opposing views are especially appreciated.

Monday, May 18, 2009

Super-Empowering Technologies and Totalitarian Risks

What are our ethical responsibilities for minimizing global catastrophic risks if they could plausibly forever end personal privacy, open inquiry and other cherished human rights?

Many world-class scientists and technologists are sharing their concerns regarding global catastrophic risks from biotechnology and other emerging (genetic, robotic, nanotech, neuroscience and information) technologies.

For example, Sir Martin Rees, President of the Royal Society since 2005, predicted in 2002 that “By 2020, bioterror or bioerror will lead to one million casualties in a single event,” asserting that: “Biotechnology is plainly advancing rapidly, and by 2020 there will be thousands-even millions-of people with the capability to cause a catastrophic biological disaster. My concern is not only organized terrorist groups, but individual weirdoes with the mindset of the people who now design computer viruses. Even if all nations impose effective regulations on potentially dangerous technologies, the chance of an active enforcement seems to me as small as in the case of the drug laws.”

The U.S. National Academies commissioned the 2004 “Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology”, which unanimously concluded that “these (biotech capability) categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future.” The seven capabilities were to:
1. “Render a vaccine ineffective.”
2. “Confer resistance to therapeutically useful antibiotics or antiviral agents.”
3. “Enhance the virulence of a pathogen or render a pathogen virulent.”
4. “Increase transmissibility of a pathogen.”
5. “Alter the host range of a pathogen.”
6. “Enable the evasion of diagnostic/detection modalities.”
7. “Enable the weaponization of a biological agent or toxin.”

Oxford University’s Future of Humanity Institute surveyed participants at its 2008 Global Catastrophic Risks conference and found that over half of respondents believed there was at least a 30% probability that a single “engineered pandemic” would kill “at least one million...before 2100”. Over half also projected at least a 10% probability that such a single attack would kill “at least one billion”.

The 2007 MIT Technology Review article “The Knowledge” stated that “there is growing scientific consensus that biotechnology - especially the technology to synthesize ever larger DNA sequences - has advanced to the point that terrorists and rogue states could engineer dangerous novel pathogens.”

How might such a tragedy occur? We need only consider the cases of Unabomber Ted Kaczynski, Atlanta Olympics bomber Eric Rudolph, Oklahoma City bomber Tim McVeigh or France's AZF extortionists. Consider the warped, multi-year persistence of each. Each 'succeeded', often repeatedly, while remaining entirely unidentified until much later.

As awful as such a man-made pandemic would be by itself, how might it raise the danger of (self-sustaining/permanent?) totalitarianism?

How would survivors respond to such a man-made pandemic? If the source of the disease was unknown, or identified but not locatable, the panic of uncertain infection and likely interim lawlessness would be supplemented by fear that the unknown or unlocated bioweaponeer(s) would strike again with an even more virulent pandemic. Rumors would almost certainly circulate that a second pathogen with a longer prodrome had already spread, adding to fears of an unknown, unavoidable death. Even if the pandemic’s source was killed or otherwise neutralized, the realization would be that, as Sir Martin Rees has said, “millions” of others were also able to cause equal or greater carnage and mayhem at any time.

How much might this change the priorities of political elites and electoral majorities? To what extent would privacy and political liberty be surrendered in search of safety?

Consider the many historical examples, such as American support for the 1942 internment of Japanese-Americans or the Bush administration's interrogation and wiretapping practices after 9/11. The issue here is not the actions taken but rather the breadth of prolonged popular support they received. This is not a uniquely American experience, as history provides examples from most nations. Most important, fears of recurring man-made pandemics would vastly exceed any of the anxieties from those earlier cases.

Clearly the above scenario addresses only one of many dangers from emerging technologies. Nick Bostrom, John Gray, Bill Joy, Sir Martin Rees, John Robb and others have offered many other plausible scenarios of extinction, environmental catastrophe, lost human values and other disasters. My question isn’t whether biotechnology presents the greatest risk, but rather whether any super-empowering technology risk warrants preventive actions and/or mitigation preparations now.

What are our responsibilities to take preventive actions or to prepare to mitigate the consequences? Nick Bostrom and others have offered much about valuing the trade-offs in utilitarian terms. But one unknowable warrants special emphasis. If there is an absolute ethical truth, whether an ultimate purpose for humankind or even a deity, and if it is, or could become, knowable to a future generation, then future life applying that knowledge could be qualitatively, ethically 'better' than life as we know it.

Highly recommended for further inquiry:

  • Global Catastrophic Risks, edited by Nick Bostrom and Milan Cirkovic.
  • Our Final Hour, by Sir Martin Rees.
  • “Why the Future Doesn’t Need Us,” by Bill Joy.
  • Brave New War, by John Robb.
  • Heresies: Against Progress and Other Illusions, by John Gray.

Both contrary arguments and suggestions for clarifying my question are sincerely appreciated.

Thursday, May 14, 2009

2009 H1N1 Prospects

See John Robb's new post at http://globalguerrillas.typepad.com/johnrobb/2009/05/chance-to-mutate.html. He raises an important question about swine flu mutation risks, one also being raised by eminent epidemiologists and virologists.

John is the author of the outstanding NYT bestseller, Brave New War, describing how super-empowering technologies will change geopolitics and global security. Buy this book.

As an example of what John's new post questions, the 1918 flu was often called "the three-day flu" because it was so consistently mild during its first pass.

Regarding John's question, how might I find the frequency with which past new flu strains became more lethal as they mutated? Both the WHO & CDC have emphasized this danger but neither has quantified the recorded historical frequency despite having data on (dozens?) of past new strains.

Another interesting historical question, one which might yet have immense policy significance: if, as the WSJ recently reported, the best science then available suggested that the 1976 swine flu virus had "only" (sic) a 2-20% chance of becoming a global pandemic, wasn't President Ford's much-maligned aggressive response arguably the best choice given the information available at the time?
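
A rough decision-threshold comparison shows why even the low end of that range could justify an aggressive response. Every number below is an assumption chosen for illustration; none is taken from the WSJ report or from the historical record.

```python
# Illustrative only: compare expected pandemic losses avoided against the
# cost of an aggressive response.  All inputs are assumptions.
program_cost = 0.5e9      # assumed ~$0.5 billion for a mass vaccination campaign
pandemic_loss = 500e9     # assumed total loss if a global pandemic occurs
effectiveness = 0.5       # assume the response would halve the expected loss

for p in (0.02, 0.20):    # the reported 2-20% range of pandemic probability
    expected_benefit = p * pandemic_loss * effectiveness
    print(f"p = {p:.0%}: expected benefit ${expected_benefit / 1e9:.0f}B vs cost ${program_cost / 1e9:.1f}B")
```

Under these assumptions the expected benefit exceeds the cost by an order of magnitude even at the 2% end of the range, which is the sense in which Ford's decision can be defended despite the pandemic never materializing.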

Thursday, April 2, 2009

Inalienable Reality

Sustainable rights will reflect inalienable reality. Regardless of whether they should, they inevitably will.

Schoolchildren studying basic anthropology learn that sustainable social norms and institutions must be consistent with the culture’s values, environment and technology. Either they are consistent, they adapt or 'they' perish.

As technological change accelerates in often unexpected new directions, empowering nations, non-state players and even individuals with unprecedented genetic, robotic, infotech and nanotech capabilities, we imperil those children if we continue to ignore that basic, truly inalienable, reality.

This fact must eventually change the sustainability of personal privacy and liberty as we now know and cherish them. The question is not whether this argument sounds like a Luddite view we dislike; our children's futures depend on whether this constraint ever becomes an objective reality and whether we are now approaching that point in time.

As political theorist John Gray wrote in 2002, "The development and spread of new weapons of mass destruction is a side effect of the growth of knowledge interacting with primordial human needs... It will occur haphazardly, as part of competition and conflict among states, business corporations and criminal networks."

I would add that in our new world of globally disseminated technologies, it will not be enough for most people to abstain from super-empowered violence; abstention must be global and universal. In a world with super-empowered individuals, the terms would be 'unanimity or catastrophe'. As understatement: homo rapacious has not yet done unanimity well.

The remaining question becomes one of urgency. Is this a 21st-century danger even though it has never been a problem before? As to the biotechnology risks alone, consider this unanimous 2004 assessment sponsored by the U.S. National Academies: “These categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future.” The NAS report's seven capabilities are to:
  1. “Render a vaccine ineffective.”
  2. “Confer resistance to therapeutically useful antibiotics or antiviral agents.”
  3. “Enhance the virulence of a pathogen or render a pathogen virulent.”
  4. “Increase transmissibility of a pathogen.”
  5. “Alter the host range of a pathogen.”
  6. “Enable the evasion of diagnostic/detection modalities.”
  7. “Enable the weaponization of a biological agent or toxin.”

What would this mean? Would thousands of individuals and small groups soon become capable of anonymously releasing a universally drug-resistant tuberculosis, a highly contagious 'bird' flu or a genetically selective plague? Is the National Academies' committee correct that this can be accomplished “with existing knowledge and technologies or with advances...in the near future”?

Who would do such a thing? Surely this world of six to eight billion people might eventually include a microbiologically capable Ted Kaczynski, Tim McVeigh or Eric Rudolph? Is AZF building a microbiology lab from which it may unleash what Royal Society President Sir Martin Rees aptly calls a "bioerror"?

For that matter, might Big Pharma, aspiring little pharma or a rogue-state bioweapons lab itself make a contagious, lethal bioerror? Is it possible that a US biodefense lab, whether private, academic or USAMRIID, might (again?) employ a scientist who would release a deadly pathogen? Might it be a contagious disease next time?

If so, what does history, especially this last decade, indicate about how endangered political elites and frightened voters might react? Regarding our political rights, how low would they go?

What am I overlooking or oversimplifying? Are there technical limitations which the NAS Special Committee has underestimated? Are existing regulatory controls sufficient? Can we assume that the people with this new knowledge are universally committed to preventing misuse?

Alternatively, what can be done to build political will for addressing this issue?

Sunday, March 29, 2009

The Technium Discussion

Kevin Kelly has a good, new discussion about technology risks at http://www.kk.org/thetechnium/archives/2009/03/reasons_to_dimi.php.

His post lists four arguments against technology, which he summarizes as: "contrary to nature...(contrary to) humanity..., (contrary to) technology itself" and "because it is a type of evil and thus is contrary to God".

He summarizes the 'contrary to technology itself' argument as:

"Technology proceeds so fast it is going to self-destruct. It is no longer regulated by nature, or humans, and cannot control itself. Self-replicating technologies such as robotics, nanotech, genetic engineering are self-accelerating at such a rate that they can veer off in unexpected, unmanageable directions at any moment. The Fermi Paradox suggests that none, or very few civilizations, escape the self-destroying capacity of technology."

That statement immediately reminds me of Bill Joy's Why the Future Doesn't Need Us, John Robb's Brave New War and John Gray's Straw Dogs. As John Gray wrote in 2002: "The development and spread of new weapons of mass destruction is a side effect of the growth of knowledge interacting with primordial human needs. That is why, finally, it is unstoppable. The same is true of genetic engineering...It will occur haphazardly, as part of competition and conflict among states, business corporations and criminal networks”. And so it may well "veer off in unexpected, unmanageable directions", as Kevin paraphrased. But as the AIG risk manager undoubtedly once added, 'what's the worst that realistically could happen?'

Much of the discussion is strongly supportive of technology, so his post offers viewpoints contrary to my own. Unfortunately, the discussion focuses largely on the inconveniences of emerging technologies, when the urgent issues are existential risks and the sustainability of our core liberties.