Monday, May 18, 2009

Super-Empowering Technologies and Totalitarian Risks

What are our ethical responsibilities to minimize global catastrophic risks that could plausibly end personal privacy, open inquiry, and other cherished human rights forever?

Many world-class scientists and technologists have voiced concerns about global catastrophic risks from biotechnology and other emerging technologies (genetic, robotic, nanotech, neuroscience, and information).

For example, Sir Martin Rees, President of the Royal Society since 2005, predicted in 2002 that “By 2020, bioterror or bioerror will lead to one million casualties in a single event,” asserting: “Biotechnology is plainly advancing rapidly, and by 2020 there will be thousands, even millions, of people with the capability to cause a catastrophic biological disaster. My concern is not only organized terrorist groups, but individual weirdoes with the mindset of the people who now design computer viruses. Even if all nations impose effective regulations on potentially dangerous technologies, the chance of an active enforcement seems to me as small as in the case of the drug laws.”

The U.S. National Academy of Sciences convened a 2004 “Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology,” which unanimously concluded that “these (biotech capability) categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future.” The seven capabilities were to:
1. “Render a vaccine ineffective.”
2. “Confer resistance to therapeutically useful antibiotics or antiviral agents.”
3. “Enhance the virulence of a pathogen or render a nonpathogen virulent.”
4. “Increase transmissibility of a pathogen.”
5. “Alter the host range of a pathogen.”
6. “Enable the evasion of diagnostic/detection modalities.”
7. “Enable the weaponization of a biological agent or toxin.”

Oxford University’s Future of Humanity Institute surveyed participants at its 2008 Global Catastrophic Risks conference and found that over half of the respondents believed there was at least a 30% probability that a single “engineered pandemic” would kill “at least one million...before 2100”. Over half also projected at least a 10% probability that such a single attack would kill “at least one billion”.

The 2007 Technology Review (MIT) article “The Knowledge” stated: “There is growing scientific consensus that biotechnology, especially the technology to synthesize ever larger DNA sequences, has advanced to the point that terrorists and rogue states could engineer dangerous novel pathogens.”

How might such a tragedy occur? We need only consider the cases of Unabomber Ted Kaczynski, Atlanta Olympics bomber Eric Rudolph, Oklahoma City bomber Timothy McVeigh, or France's AZF extortionists. Consider the warped, multi-year persistence of each. And each 'succeeded', often repeatedly, while remaining entirely unidentified until later.

As awful as such a man-made pandemic would be by itself, how might it raise the danger of (self-sustaining/permanent?) totalitarianism?

How would survivors respond to such a man-made pandemic? If the source of the disease were unknown, or identified but not locatable, the panic of uncertain infection and likely interim lawlessness would be compounded by fear that the unknown or unlocated bioweaponeer(s) would strike again with an even more virulent pandemic. Rumors would almost certainly circulate that a second pathogen with a longer prodrome had already spread, adding to fears of an unknown, unavoidable death. Even if the pandemic’s source was killed or otherwise neutralized, the realization would remain that, as Sir Martin Rees has said, “millions” of others were also able to cause equal or greater carnage and mayhem at any time.

How much might this change the priorities of political elites and electoral majorities? To what extent would privacy and political liberty be surrendered in search of safety?

Consider the many historical examples, such as American support for the 1942 internment of Japanese-Americans or the Bush administration's interrogation and wiretapping practices after 9/11. The issue here is not the actions taken but rather the breadth of prolonged popular support they received. This is not a uniquely American experience; history provides examples from most nations. Most important, fears of recurring man-made pandemics would vastly exceed any of the anxieties from those earlier cases.

Clearly the above scenario addresses only one of many dangers from emerging technologies. Nick Bostrom, John Gray, Bill Joy, Sir Martin Rees, John Robb and others have offered many other plausible scenarios of extinction, environmental catastrophe, lost human values and other disasters. My question isn’t whether biotechnology presents the greatest risk, but rather whether any super-empowering technology risk warrants preventive actions and/or mitigation preparations now.

What are our responsibilities to take preventive actions or to prepare to mitigate the consequences? Nick Bostrom and others have written much about valuing the trade-offs in utilitarian terms. But one unknowable warrants special emphasis. If there is an absolute ethical truth, whether an ultimate purpose for humankind or even a deity, and if it is, or could become, knowable to a future generation, then future life applying that knowledge could be qualitatively, ethically 'better' than life as we know it.

Highly recommended for further inquiry:

  • Global Catastrophic Risks (ed. by Nick Bostrom & Milan Ćirković).
  • Our Final Hour (by Sir Martin Rees).
  • Why the Future Doesn’t Need Us (by Bill Joy).
  • Brave New War (by John Robb).
  • Heresies: Against Progress and Other Illusions (by John Gray).

Contrary arguments and suggestions for clarifying my question are both sincerely appreciated.

Thursday, May 14, 2009

2009 H1N1 Prospects

See John Robb's new post at http://globalguerrillas.typepad.com/johnrobb/2009/05/chance-to-mutate.html. He raises an important question about swine flu mutation risks, one also being raised by eminent epidemiologists and virologists.

John is the author of the outstanding NYT bestseller, Brave New War, describing how super-empowering technologies will change geopolitics and global security. Buy this book.

As an example of what John's new post questions, the 1918 flu was often called "the three-day flu" because it was so consistently mild during its first wave.

Regarding John's question, how might I find the frequency with which past new flu strains became more lethal as they mutated? Both the WHO & CDC have emphasized this danger but neither has quantified the recorded historical frequency despite having data on (dozens?) of past new strains.
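Once such historical counts were assembled, quantifying the frequency would be straightforward. Here is a minimal sketch, assuming purely hypothetical counts (the n and k below are placeholders, not real WHO or CDC data), of how the base rate and its uncertainty could be estimated:

```python
# A minimal sketch of estimating the historical base rate in question.
# The counts are placeholders, NOT real data: suppose k of n documented
# novel flu strains became markedly more lethal in a later wave.
from scipy import stats

n = 12  # hypothetical number of well-documented novel strains
k = 3   # hypothetical number that grew more lethal as they mutated

# Beta posterior on the true frequency, under a uniform Beta(1, 1) prior.
posterior = stats.beta(k + 1, n - k + 1)
lo, hi = posterior.interval(0.90)  # central 90% credible interval

print(f"point estimate: {k / n:.0%}")
print(f"90% credible interval: {lo:.0%} to {hi:.0%}")
```

With only a few dozen historical strains at most, the interval would be wide, but even a wide interval would be more useful to policymakers than an unquantified warning.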

Another interesting historical question, one which might yet have immense policy significance: if, as the WSJ recently reported, the best science then suggested that the 1976 swine flu virus had "only" (sic) a 2-20% chance of becoming a global pandemic, wasn't President Ford's much-maligned aggressive response the best choice given the information then available?
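The expected-value arithmetic behind that question can be sketched in a few lines. Only the 2-20% probability range comes from the report cited above; the assumed death toll is purely illustrative (roughly the order of the U.S. toll in 1918), not a historical figure:

```python
# A back-of-the-envelope expected-value sketch of the 1976 decision.
# Only the 2-20% probability range is from the cited report; the
# assumed pandemic death toll is an illustrative assumption.
ASSUMED_US_DEATHS_IF_PANDEMIC = 500_000  # illustrative, not historical

for p in (0.02, 0.20):  # low and high ends of the cited range
    expected_deaths = p * ASSUMED_US_DEATHS_IF_PANDEMIC
    print(f"pandemic probability {p:.0%} -> "
          f"expected deaths ~ {expected_deaths:,.0f}")
```

Even at the "only" 2% end, the expected toll under that assumption is on the order of ten thousand lives, which makes an aggressive vaccination program look quite defensible in simple expected-value terms.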