Wednesday, November 3, 2010

Good Read on Future Risks, with Practical Actions

The Future of Man: Extinction or Glory?
by Peter Hollings

This is an original, thought-provoking book on protecting our children’s future in a world of new dangers. Like Sir Martin Rees’ Our Final Hour, Hollings places our risk of technologically enhanced self-destruction in both cosmological and evolutionary contexts. He also reminds us that misery is not an abstraction; real injustices cause real people pain. But The Future of Man is unique in proceeding beyond the problems of war, WMD terrorism and environmental self-destruction to present specific, actionable steps we can take now as individuals.

I especially value Hollings’ observations about the importance of true spirituality in preserving our world. He explains how traditional religion evolved with man’s basic need for context and meaning, needs which remain. Although science has disproven many of the original beliefs of traditional “sacred scripture” religions, as he notes, scientific evidence now also shows our universe is improbably well suited to support life. Viewing this as evidence of willful creation, Hollings challenges the reader to become a ‘Future Man’. Real spirituality, as Hollings quotes from Albert Einstein, comes from “widening our circle of compassion to embrace all…” To do this, Hollings calls us “…to give meaning to all that has happened and will happen”, to assume responsibility for finding our God-given purpose. And, as Hollings explains in the chapter so named, “we must act now”.

This book also presents a Deist’s argument for accepting God as omnipotent creator while rejecting the “sacred scripture” religions whose past intolerances often contributed to human self-destruction. As a Unitarian, believing that although all religious organizations are flawed we should strive to accept others’ faiths as shared opportunities to seek truth and meaning, I did not find this argument convincing, but it is measured and instructive nonetheless. In short, even readers who disagree with the author about traditional religions will find that Hollings’ larger argument stands on its own.

The book is clearly written and the author’s compelling arguments are well supported. I highly recommend it.

To purchase, or for more information, go to http://www.amazon.com/ or http://www.peterhollings.com/.

Monday, October 25, 2010

WMD Risks and Consequence Mitigation (Under-implemented Singleton)

Michael Anissimov has another good post about the accelerating risk of genetically engineered bioviolence at
http://www.acceleratingfuture.com/michael/blog/2010/08/wsj-gains-in-bioscience-cause-terror-fears/.

In considering our children’s future with genetically engineered bioviolence, I would add Dr. Drexler’s famous 2007 comment that “…Advancing technologies will eventually make it easy to suppress terrorism. The great struggle will be to keep this power from suppressing too much more.” Sir Martin Rees voiced the same concern in his 2006 Edge interview, even calling it his “greatest concern”. See also Nick Bostrom’s similar concern in his excellent ‘What is a Singleton?’ article at his website.

Another excellent website on this topic is Peter Hollings’, at http://www.peterhollings.com/. His book, available at his website or at http://www.amazon.com/, is also highly recommended.

If a technologically enhanced, single regulatory authority is a possible (or likely) consequence of emerging technologies and super-empowered individual violence, what is the most likely outcome? For example, what does the history of man’s implementation of complex projects tell us about the likelihood of under-planning, under-testing, erroneous implementation and other unintended consequences from a hastily implemented Singleton? As Nick Bostrom himself wrote, trial and error is not an effective approach to existential risks.

Is it possible that mass bioviolence is now otherwise unpreventable and that our best actionable focus is to mitigate the risks of an under-implemented Singleton? Is this issue even being discussed?

Sunday, September 26, 2010

Deism or Agnosticism?

Following is an inquiry, a request for help. It is certainly not intended as an affront to anyone else's beliefs. As context, I sincerely offer this from the assumption that my own reasoning is flawed:

I am myself an Agnostic. I am overwhelmed by the evidence that we cannot now know how much we don't yet know. I do not believe we today have nearly enough information to conclude that God, in all forms, including some we may not yet envision, is impossible (aka Hard Atheism). Similarly, I do not see how our current knowledge provides any support, let alone adequate proof, for any 'traditional' concept of God.

As an Agnostic, I struggle with the possibility that we don't know whether human life has any meaning. As the father of four, and as a 56 year old, I sometimes find this struggle troubling, almost debilitating.

My request is for help challenging an analogy, one comparing my Agnosticism to the situation of a hypothetical parent of a seriously ill child, an analogy which seems to suggest Deism.

  • Like the parent, I find the possibility of pointlessness to be such a troubling malady, or undesired condition, that I desperately seek a favorable solution.
  • The parent doesn't know whether the child can be cured, much as I don't know whether our lives have meaning transcending our mortality. In this very narrow sense, we are both agnostic.
  • Like the parent, I have a strong preference for the outcome. I want to find that there is meaning to Humanity and our actions, for my own children's sake and for myself.
  • As a parent, I believe with certainty that the hypothetical parent must act, and must choose his/her actions based on the assumption that a cure is possible. Those choices may be constrained by opportunity costs, such as Treatment A now precluding Treatment B later, or Treatment C causing side effects or adding new risks, but the choices would still derive from that one assumption that a cure is possible.
  • As an Agnostic, and IF I assume no opportunity costs to acting on the assumption that there is a 'higher force' willing a 'transcending absolute purpose for Humanity's actions', how is it invalid to apply the parent's logic, basing my choices of actions, and not of beliefs, on the assumptions of Deism?

I have emphasized the assumption above (no opportunity costs to acting) in recognition that one effective counterargument would be to provide an example of such an opportunity cost. My own limited efforts have found only negative opportunity costs, i.e., benefits. In that limited sense, this argument is structured akin to Pascal's Wager. An irony, of course, is that if the above logic were correct, and one acted accordingly, (s)he would accrue both the benefits of purposefulness in this life and the possible (infinite and/or eternal?) benefits of the as yet unconfirmable afterlife.
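One way to write the analogy's logic out formally (my notation, not part of the original inquiry) is as a weak-dominance argument. Let $s_1$ be the state 'a transcending purpose exists' and $s_2$ 'none exists', with actions $a_1$ 'act as if Deism were true' and $a_2$ 'do not'. The no-opportunity-cost assumption amounts to

\[ U(a_1, s_1) > U(a_2, s_1) \quad \text{and} \quad U(a_1, s_2) \ge U(a_2, s_2), \]

so $a_1$ weakly dominates $a_2$ for every probability $P(s_1) > 0$, without requiring any belief about how large $P(s_1)$ actually is. Exhibiting a genuine opportunity cost would break the second inequality, which is exactly the counterargument invited above.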

I realize that my conclusion assumes my personal definition of Deism. Please address the concept itself, a higher force willing a transcending absolute purpose, even if you would not term that belief Deism.

Thank you for any assistance you can provide.

Wednesday, February 17, 2010

Anihilism and Open Society

The following argument is a syllogism, proposing that a comprehensive basis for ethical choices can be derived from what we today don't know:

First, there is so much we don’t yet know. There are such fundamental questions and incomplete answers in physics, cosmology, neuroscience, psychology, etc. Consider the extent to which the scientific discoveries of recent decades were unexpected, and how many former scientific certainties are now verifiably incorrect. We must reasonably assume that we cannot now know how much we don’t know.

Second, given this evidence of ignorance, it is simply prudent to assume for now that there might be a knowable and actionable higher purpose for humanity. Not because we know there is one, but because we cannot possibly now have enough understanding to definitively presume otherwise. For example, if my dog, or an ant, or we ourselves for that matter, can’t fully understand human consciousness, how can we today presume that we so fully understand all possible paradigms of theism as to reasonably assume them all disproven?

And humanity’s purposor(s) certainly need not be omnipotent, omniscient and eternal, as in the sacred texts of old. It/they need only be more sapient than we currently are. Given how often new understanding has recently surprised us, who would today consider that such a high standard? And they need not have literally created our higher purpose; they may instead simply add enough to our knowledge that we can understand, and supportively act upon, its true nature.

If we don’t know how many billions of solar systems there are, how do we have any certainty about the probability of more technologically advanced interstellar travelers? If we can’t explain replicated physics observations without ‘stringing’ multiple universes, how do we have any certainty that we accurately perceive a unitary WYSIWYG world?

Just one example of a currently plausible, ‘Godless’ scenario involving more knowledgeable beings comes from Nick Bostrom’s Philosophical Quarterly essay ‘Are You Living in a Computer Simulation?’, which I summarize as follows:

IF human (or ANY other sapient beings') technology is capable of eventually creating computer simulations (OR other complex perceptions) which appear as real as our current experiences,

and IF there is no PERSISTENT (essentially 100%) obstacle to human (OR other self-aware, technological) beings surviving until that technology is achieved,

THEN EITHER those controlling that technology would almost never (essentially nil %) choose to create such artificial perceptions (WITH the parameters we now perceive), OR the odds that we are not living in such an artificial perception are vanishingly small.

For example, the continuing world we (I?) experience is, in terms of probability, a single observation. If sufficiently advanced technological societies chose to create ‘only’ one thousand such artificial perceptions (i.e., computer simulations), then the probability that we (I?) are NOT now experiencing one of those simulations as our reality is roughly one tenth of one percent (1 in 1,001).
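A minimal numeric sketch of that arithmetic (mine, not Bostrom's; the simulation counts are purely illustrative):

    # Probability of NOT being in a simulation, assuming one base reality
    # plus n indistinguishable simulated histories, each with an observer.
    def p_base_reality(n_sims: int) -> float:
        return 1.0 / (n_sims + 1)

    for n in (10, 1_000, 1_000_000):
        print(f"{n:>9,} simulations -> P(base reality) = {p_base_reality(n):.4%}")
    # 1,000 simulations -> P(base reality) = 0.0999%, about one tenth of one percent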

Furthermore, even if we are the only beings capable of self-awareness anywhere in the Universe, or if Deism is correct and all other such beings will forever choose never to interfere in our experience, so much is now unknown that we cannot yet know whether there is an actionable higher purpose which we could someday determine on our own. It is even possible that such truth is already known to some people, and the rest of us are simply unaware of, or unconvinced by, what they have already discovered.

Third, if a knowable, actionable higher human purpose is truly plausible, this alone provides the only proper basis for all our choices. For the true (opportunity) cost (to the universe) of our having such a purpose and not finding it, or of unnecessary delay in finding and following it, would be, quite literally, beyond our current comprehension. In the event that such a higher purpose doesn’t exist, life is fundamentally meaningless and the incremental cost of having looked for it in vain would be trivial (most likely negative, for reasons beyond the scope of this posting; if unsure, simply ask your mother to explain).

Regardless of the probabilities involved, which I argue are now fundamentally unknowable, the probability-adjusted present value of acquiring, and acting according to, that possible absolute human purpose must exceed the value of any other outcome. This derives only from the presumption that a more purposeful life might be qualitatively, even infinitely, better (not necessarily preferable, as we must assume we don’t currently know the basis for better living, but rather ethically better for humanity OR the Universe). Simple arithmetic then concludes this syllogism, which can be viewed as an expansion of Pascal’s Gambit.
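To spell out that 'simple arithmetic' explicitly (my notation, not the post's): let $p > 0$ be any subjective probability that an actionable higher purpose exists, $V$ the value of finding and following it, and $c$ the finite cost of searching. Then

\[ E[\text{search}] = p\,V - (1 - p)\,c \;\to\; \infty \quad \text{as } V \to \infty, \]

so for every $p > 0$ the expected value of searching exceeds that of any alternative whose payoffs are finite. This is the structure of Pascal's Wager, with 'search' substituted for 'belief'; and if, as argued above, $c$ is actually negative, the conclusion no longer even depends on $V$ being unbounded.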

In summary, Anihilism means basing ethical choices on the belief that moral nihilism might be inaccurate. This one assumption calls us to specific actions: to aggressively search for, and then seek consensus to act from, the best knowable and actionable basis for human ethics.

The ethical implications of Anihilism, for a Humanist, a traditional Utilitarian, or even a Libertarian, and especially for an Agnostic or a 'Liberal Theist', are identical. Wouldn’t the only prudent choices be those actions which best promote the identification and broad evaluation of possible higher purposes (including the search for external purposors)? If correct, this would include education for all who could benefit the search, open inquiry, leisure time for inquiry (antimaterialism), etc. Central to such application would be defense of open society, to preserve the free and open exchange of observations, ideas and questions.

But such an ethical system would not be without controversy. One of the more troubling possible implications concerns eugenics. Another concerns personal liberties, such as a perceived liberty to engage in sloth or to waste limited resources on hedonistic pursuits. Like any absolutism, proactive anti-nihilism is not a risk-free, cost-free basis for ethical decisions.

But IF these three core assumptions are valid, what other basis for action could be ethically valid?


Note: This post is a work in progress and your suggestions, questions and concerns are sincerely appreciated. I am intensely interested in feedback regarding both my core argument and ways to improve the clarity of my explanation.

Sunday, January 17, 2010

Dystopianist Ethics

Existential risks are not well resolved by trial and error, to paraphrase Nick Bostrom. What then are we to do if we believe that ecological crisis, tyranny, or some other impending catastrophe is either likely or inevitable? Three core principles:

1. Consider that our fears are probably mistaken. From biblical millennialists to yesterday's Unabomber Ted Kaczynski, Tim McVeigh and Aum Shinrikyo, history recounts much misery caused by dystopians who foresaw imminent disaster.

2. But we must also consider how much is at stake if one's fears are correct. Truly existential risks compel us to weigh the true costs of a 'false negative' error. How could anyone today reasonably value the permanent diminution of humanity's potential? Consequentialist ethics confronts a unique obstacle when evaluating distant future impacts. Accepting that we don't know how much we don't yet know, we cannot now assume that future generations might not attain knowledge which makes their lives qualitatively better than our own. They might conceivably 'find God', perhaps even literally. Or they may learn and accept an actionable higher purpose for humanity. Any 21st-century consideration of acceptable existential risks must accept this possibility, today unquantifiable, and with it the unknowable likelihood that the value of future lives should be 'premiumed' rather than 'discounted'. Assign any probability to an infinite future value and the cost/benefit arithmetic is fundamentally simplified (as in Pascal's Gambit).

3. Open inquiry best identifies fallacy and clarifies reality. Those who want to act based on reality must first “mine the network” to collect paradigms, questions, facts and validations. This isn’t exactly rocket science, but history repeatedly shows that most harmful radical action is based on erroneous assumptions, uncritically accepted. Open inquiry is the best preventive of such error, but it needs to be open-minded as well as open-sourced. Here we must be our brothers' keepers, challenging our facts and paradigms, and also our receptiveness.

This is a work in progress and your thoughts, corrections and clarifications are sincerely appreciated.

Saturday, December 5, 2009

Emerging Technologies and Tyranny Roulette

(Draft in Progress- Please see Below)

Our children could live happy, fulfilling lives even if we become a repressed society much like Orwell’s 1984. There would be sacrifices, but they might still be truly content. But that is not what any of us would want. Why accept that risk?

My argument is not that an Orwellian dystopia is inevitable, only that its likelihood during our children’s lifetime is at least 20%. Not 50/50; I propose merely one in five. Like Russian roulette, perhaps with only one bullet.

All it takes is fear. Tyrants offer security, albeit conditional and therefore false security. How many of us will accept their offer? Neither Hobbes nor Maslow can show us how we, and our neighbors, would respond to intense, unrelenting fear. And fear is possible.

Also, tyranny never arrives properly labeled. It almost always builds gradually, starting with "limited, temporary" powers.

Be forewarned: there is not a single original idea in this post. Not one. My concern about an oppressive dystopia rests on only four assertions which, like all my supporting examples, have been clearly presented by others. This is all old hat:

1. The technology needed to develop an untreatable contagious disease, or other catastrophic bioviolence, is expanding quickly and is becoming available to “tens of thousands, perhaps millions” of people. The knowledge, equipment and other resources needed are so accessible that a small group, or even a lone individual, could accomplish all the necessary tasks.

2. Given this technology, widely available today and growing in both capability and accessibility, and countless examples of people's behavior in recent history, there is a significant (>50%) likelihood that bioviolence will kill over 10,000 people in the next thirty years. There is a comparable risk that at least 1,000 of those victims will die in a single attack whose perpetrator(s) will remain anonymous or at least unapprehended.

3. If this happens, how would the frightened public respond? Such a level of dread and helplessness would be beyond any of our experiences. History has shown how quickly public opinion pluralities can swing, and how many pundits and politicians arise to exploit heightened fears. Wouldn't there be at least a 40% probability that these fears would lead to laws relinquishing privacy, open inquiry, free expression and other liberties in pursuit of perceived greater safety? Historical examples come from so many countries, and such recent decades.

4. Unlimited centralized power has proven most dangerous. Certainly not always, but often; and the more 'absolute' the power attained, the more likely the corruption of the empowered. Fear-driven relinquishment of privacy, open inquiry and free expression could easily evolve into sustained tyranny. Historical examples abound. Furthermore, the surveillance technology that would allow a government to sustain such oppression has already expanded dramatically, and these capabilities, along with impending psychopharmacology and neuroscience capabilities, will grow much greater.

Sustained tyranny is itself an existential risk. In our children's lifetime it is the most likely existential risk. What steps should we, their parents, undertake?

Is the Technology Both Real and Accessible?

Several world-class scientists and other knowledge leaders have shared their concerns about emerging “super-empowering” technologies. Focusing on biotechnology, the first of the often-cited genetic, robotic, nanotech and infotech technologies of concern, examples include:

1. Sir Martin Rees has been President of the British Royal Society since 2005. Cosmologist Rees predicted in 2002 that “By 2020, bioterror or bioerror will lead to one million casualties in a single event,” asserting that: “Biotechnology is plainly advancing rapidly, and by 2020 there will be thousands-even millions-of people with the capability to cause a catastrophic biological disaster. My concern is not only organized terrorist groups, but individual weirdoes with the mindset of the people who now design computer viruses. Even if all nations impose effective regulations on potentially dangerous technologies, the chance of an active enforcement seems to me as small as in the case of the drug laws.”

2. The U.S. National Academies of Science commissioned a 2004 “Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology”, which unanimously concluded that “these (biotech capability) categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future.” The seven capabilities cited were to:
1. “Render a vaccine ineffective.”
2. “Confer resistance to therapeutically useful antibiotics or antiviral agents.”
3. “Enhance the virulence of a pathogen or render a pathogen virulent.”
4. “Increase transmissibility of a pathogen.”
5. “Alter the host range of a pathogen.”
6. “Enable the evasion of diagnostic/detection modalities.”
7. “Enable the weaponization of a biological agent or toxin.”

3. Oxford University’s Future of Humanity Institute surveyed all participants who had just attended their 2008 Global Catastrophic Risks conference and found that over half of responding participants believed that there was at least a 30% probability that a single “engineered pandemic” would kill “at least one million...before 2100”. Over half also projected at least a 10% probability that such a single attack would kill “at least one billion” people.

4. The (MIT) Technology Review's March 2006 article “The Knowledge” stated, "There is growing scientific consensus that biotechnology — especially, the technology to synthesize ever larger DNA sequences — has advanced to the point that terrorists and rogue states could engineer dangerous novel pathogens."

These concerns are also shared outside the scientific community. As called for by the 9/11 Commission, Congress established a commission on the “Prevention of WMD Proliferation and Terrorism”. The very first sentences of the commission’s December 2008 report were: “The Commission believes that unless the world community acts decisively and with great urgency, it is more likely than not that a weapon of mass destruction will be used in a terrorist attack somewhere in the world by the end of 2013. The Commission further believes that terrorists are more likely to be able to obtain and use a biological weapon than a nuclear weapon.” The report later added that “as the anthrax letter attacks of autumn 2001 clearly demonstrated, even small-scale attacks of limited lethality can elicit a disproportionate amount of terror and social disruption”.

Would Someone Actually Use Such a Capability?

How will these new weapons be used? The key is who will decide.

As political theorist John Gray wrote in 2002, "The development and spread of new weapons of mass destruction is a side effect of the growth of knowledge interacting with primordial human needs... It will occur haphazardly, as part of competition and conflict among states, business corporations and criminal networks."

People don’t decide collectively. Wittingly or otherwise, they cooperate in support of what they perceive as their individual self-interest. These perceptions are consistently driven by fear or greed, or by a self-deception founded in fear or greed. Where will fear and greed drive the newly super-empowered? More importantly, where will they lead the most dangerous of the many newly empowered?

Historical examples abound. How many years did Unabomber Ted Kaczynski devote to his bombs? What could he have developed in that time with tomorrow's (today's?) biotechnology? What could Oklahoma City bomber Tim McVeigh have developed had he spent his bomb research time applying open-source genetic engineering? How many (dozens of?) militant religious fundamentalist groups are preparing for their 'Christian', 'Islamic', 'Hindu' or 'Judaic' armageddons by enhancing one of God's plagues? What if today's Charlie Manson collected his longing followers at UC Berkeley?

(Incomplete- text to follow later)

How would the Frightened Public, and Political Elites, React?

Many commentators have noted that technology once discovered cannot be relinquished. They are almost certainly correct in our 2009. Would they also be correct in Orwell's 1984?

How much would majorities of frightened voters sacrifice for greater security? How might the most opportunistic of politicians exploit voters' fears? Historically, how far have voters swung under much less scary circumstances?

Let's begin with the historical record...

Now consider how much worse the impending traumas might (>20% likelihood) be.


(Incomplete- ibid)

What Technology Really Exists for a Fear-Empowered Despot?




How might a tyranny establish and maintain itself if ever given the opportunity?


(Incomplete- ibid.)

Can't We Wait?

Time constraints are especially challenging because they are, in this case, essentially unknowable. It may be literally impossible to know when it is too late to act until it is already too late. Accelerating biotechnology research will soon yield unanticipated new capabilities. Without knowing what those capabilities will be, or even how often their discovery will be disclosed outside the sponsoring organization, it is impossible to know how long an effective response would take to implement. What is known is that any policy capable of controlling bioweapon acquisition must now cover essentially every country and every sub-state participant, including isolated individuals. Absent the most extreme solutions, it is difficult to see how such policies could be effectively implemented in decades, let alone years.

It's Time to Act

Our children deserve specific precautionary steps and the time to act has come. As nations but also as universities, firms and individuals, we must think before we further spread WMD knowledge.

We must also prepare for a possible challenge to our cherished liberties. The best single defense of American liberties is neither an anti-ballistic missile system nor the Second Amendment. The best single defense (albeit no single defense is sufficient by itself) is quality education, both in our schools and beyond them. It is the 'middle third' in political involvement, and often in political knowledge, who will control our children's fate. Their level of misinformation and gullibility is well documented.

Defending liberty will never safely rely on any one solution. Our children's liberty certainly deserves defense in depth. This requires thoughtful analysis of how an open society might guard against tyranny even if privacy were substantially curtailed. One example, although certainly not recommended as a complete, practical solution, is David Brin's The Transparent Society.

These issues also raise important questions about 'dystopian ethics'. What are we, as individuals or groups, morally obligated to do if we believe that tyranny, or some other impending catastrophe, is possible (or even inevitable)? The most obvious answer is that we should carefully consider that we are probably mistaken. From biblical millennialists to Unabomber Ted Kaczynski, Bruce Ivins and Aum Shinrikyo, others have done great harm because they foresaw world-changing disaster. Open inquiry is the best preventive of such error, but it needs to be open-minded as well as open-sourced. Here we must be our brothers' keepers, challenging our facts and paradigms, and also our receptiveness.

But truly existential risks compel us also to carefully consider the costs of a 'false negative' error. Accepting that we don't know how much we don't yet know, we cannot now assume that future generations might not attain knowledge which makes their lives qualitatively better than our own. They might conceivably 'find God', either figuratively or quite literally. Any 21st-century consideration of acceptable existential risks must accept this possibility, and with it the unknowable likelihood that future lives and costs should be 'premiumed' rather than 'discounted'. Assign any probability to an infinite future value and the cost/benefit arithmetic is fundamentally simplified.

Highly recommended for further inquiry:

Global Catastrophic Risks, ed. Nick Bostrom & Milan Cirkovic
Our Final Century (published as Our Final Hour in the US), by Sir Martin Rees
‘Why the Future Doesn’t Need Us’, by Bill Joy
Heresies: Against Progress and Other Illusions, by John Gray
Brave New War, by John Robb
‘Biowar for Dummies’, by Paul Boutin
‘The Knowledge’, by Mark Williams, The (MIT) Technology Review, March 2006

Draft (Plea)

This is a work in progress. I will be revising this draft as I complete the sections noted as incomplete above. Your feedback would be sincerely appreciated. For example, am I incorrect that this danger is real? If this is a plausible (10%) risk to our children's future, how could we better promote thoughtful evaluation of prudent preventive steps?

Friday, July 10, 2009

WMD Typologies

Draft for Feedback

The words we use to classify weapons of mass destruction have done much harm to public consideration of emerging catastrophic risks.

What distinguishes good WMD mitigation policy is not whether the weapon is nuclear, biological or chemical. The current classification is worse than meaningless; it is dangerously misleading. For example, how often have policy analyses commingled a non-contagious biological weapon such as anthrax with a contagious one such as smallpox, obscuring the unique strategic threat of each?

Bill Joy's seminal 2000 Wired essay "Why the Future Doesn't Need Us" contained the key to a policy-focused typology. Expanding on his "self-replication" distinction, WMD threats are more clearly understood when distinguished along two axes, contained versus self-replicating and selective versus non-selective, plus two special cases of self-replicating weapons:

Contained Non-Selective (CNS)
Contained Selective (CS)
Self-Replicating Non-Selective (SRNS)
Self-Replicating Selective (SRS)
Self-Replicating, Lengthy Prodrome (SR-LP)
Self-Replicating, Gradually Incapacitating (SR-GI)

A contained WMD is one which affects a specific (knowable) area for a specific (knowable) time period. Examples include nuclear weapons, all existing chemical weapons, and anthrax and other non-contagious biological weapons. The opposite of a contained WMD is a highly transmissible, contagious disease.

A contained non-selective WMD is one with which the potential deployer cannot effectively select who is killed or incapacitated. The best example is a nuclear weapon, which kills or sickens everyone in a given area. But selectivity is often not a simple, binary concept. For example, a specific nuclear weapon has a reasonably quantifiable blast/fire zone, and a potential deployer might know that prepared people in a defined area outside that zone can prevent radiation illness with certain precautions (special equipment and/or iodine pills, etc.).

A contained selective WMD allows the potential deployer to effectively select who is killed or incapacitated. One example is a chemical weapon which disperses over a short time and against which selected persons could wear preventive respirators or filtering gas masks. Another would be a genetically engineered anthrax strain which the deployer can effectively vaccinate against or treat but the intended victims cannot. Literary examples include Frank Herbert's novel The White Plague and the plagues of the book of Exodus.

A self-replicating WMD is one which is likely to expand beyond its original impact area through person-to-person transmission or some equivalent process. One example is the smallpox virus, which is sufficiently contagious to cause widespread transmission beyond the original victims. Two documented historical examples are the catapulting of plague-infected corpses into the besieged Crimean city of Caffa in 1346, often credited with helping spread the Black Death, and the British use of smallpox-infected blankets in 1763 to decimate Native Americans formerly allied with the French.

As above, a self-replicating selective WMD would allow the potential deployer to select which persons are killed or incapacitated by the self-replicating weapon. An example would be a contagious disease with a vaccine or treatment available to the perpetrator(s) but not to the target victims.

Two special cases are noteworthy because their tactical advantages might invite consideration of a 'preventive first strike' strategy. A long prodrome, the period between when a carrier first becomes contagious and when visible symptoms appear, could be highly destabilizing, especially if greater than eighteen months: it could lead a potential deployer to believe (correctly or otherwise!) that they could infect virtually everyone before preventive biocontainment or therapy development had even begun. Similarly, a gradually incapacitating WMD could create the perception that a deployer could render victims (universally?) passive or incapable before the attack was even noticed, possibly precluding any effective response.
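As a reader's aid only (this sketch and its names are mine, not the author's), the typology's two axes and the two special cases can be captured in a few lines of Python:

    # Illustrative data model for the proposed WMD typology (my naming).
    from dataclasses import dataclass

    @dataclass
    class WMDProfile:
        name: str
        self_replicating: bool          # expands beyond its original impact area?
        selective: bool                 # can the deployer choose who is harmed?
        prodrome_months: int = 0        # contagious-but-asymptomatic period
        gradually_incapacitating: bool = False

        def category(self) -> str:
            axis1 = "Self-Replicating" if self.self_replicating else "Contained"
            axis2 = "Selective" if self.selective else "Non-Selective"
            tags = []
            if self.self_replicating and self.prodrome_months >= 18:
                tags.append("SR-LP")
            if self.self_replicating and self.gradually_incapacitating:
                tags.append("SR-GI")
            return f"{axis1} {axis2}" + (f" ({', '.join(tags)})" if tags else "")

    print(WMDProfile("nuclear weapon", False, False).category())
    # -> Contained Non-Selective
    print(WMDProfile("smallpox", True, False).category())
    # -> Self-Replicating Non-Selective
    print(WMDProfile("vaccine-protected engineered strain", True, True,
                     prodrome_months=24).category())
    # -> Self-Replicating Selective (SR-LP)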

How realistic are these dangers? The U.S. National Academies of Science commissioned a 2004 “Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology”, which unanimously concluded that “these (biotech capability) categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future.” The seven capabilities cited were to:
1. “Render a vaccine ineffective.”
2. “Confer resistance to therapeutically useful antibiotics or antiviral agents.”
3. “Enhance the virulence of a pathogen or render a pathogen virulent.”
4. “Increase transmissibility of a pathogen.”
5. “Alter the host range of a pathogen.”
6. “Enable the evasion of diagnostic/detection modalities.”
7. “Enable the weaponization of a biological agent or toxin.”

Looking forward to the near future when tens of thousands (millions?) of people possess the above capabilities, what might traditional game theory suggest to a potential WMD developer about such risks? If you yourself had a contagious biological weapon with a long prodrome, or a self-replicating gradual incapacitator, might you not worry that one or more others could shortly develop and deploy such a weapon? Might someone (incorrectly?) conclude that a preventive first-strike deployment was the less risky strategy in such an unending prisoners' dilemma among ten thousand (a million?) players?
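To make that last worry concrete (all numbers invented for illustration): even if each capable actor independently chooses a first strike with only tiny probability, the chance that someone, somewhere strikes grows quickly with the number of actors.

    # Illustrative arithmetic only: probability that at least one of n
    # independent actors launches a first strike, each with probability q.
    def p_any_strike(n: int, q: float) -> float:
        return 1.0 - (1.0 - q) ** n

    for n in (100, 10_000, 1_000_000):
        print(f"n={n:>9,}, q=0.0001 -> P(any strike) = {p_any_strike(n, 1e-4):.2%}")
    # n=10,000 already gives roughly 63%; n=1,000,000 is effectively certain.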

This is a working draft and your feedback is sincerely appreciated. Especially appreciated are typology improvements and suggestions of where my ideas need clarification.

As noted in the posts below, my primary question is whether emerging catastrophic threats and the resulting public panic might lead to greatly diminished human rights. Policing biotechnology may not be possible in our 2009, but could someone not then use BW fear to justify Orwell's 1984? What is the range of likelihoods that this issue will singularly define our children's future? Is a 0.5% risk one we are entitled to ignore at their peril? Who has honestly considered this scenario and still believes the risk is below 10%?