Saturday, December 5, 2009

Emerging Technologies and Tyranny Roulette

(Draft in progress; please see below)

Our children could live happy, fulfilling lives even if we become a repressive society much like that of Orwell's 1984. There would be sacrifices, but they might still be truly content. But that is not what any of us would want. Why accept that risk?

My argument is not that an Orwellian dystopia is inevitable, only that its likelihood during our children's lifetime is at least 20%. Not 50/50; I propose merely one in five. Like Russian roulette, perhaps with only one bullet.

All it takes is fear. Tyrants offer security, albeit conditional and therefore false security. How many of us will accept their offer? Neither Hobbes nor Maslow can show us how we, and our neighbors, would respond to intense, unrelenting fear. And fear is possible.

Also, tyranny never arrives properly labeled. It almost always builds gradually, starting with "limited, temporary" powers.

Be forewarned: there is not a single original idea in this post. Not one. My concern about an oppressive dystopia rests on only four assertions which, like all my supporting examples, have been clearly presented by others. This is all old hat:

1. The technology needed to develop an untreatable contagious disease, or other catastrophic bioviolence, is expanding quickly and becoming available to “tens of thousands, perhaps millions” of people. The knowledge, equipment and other resources needed are so accessible that a small group, or even a lone individual, could accomplish all the necessary tasks.

2. Given this technology, widely available today and growing in both capability and accessibility, and countless examples of people's behavior in recent history, there is a significant (>50%) likelihood that bioviolence will kill over 10,000 people in the next thirty years (a back-of-envelope conversion from an annual probability appears after this list). There is also a comparable risk that at least 1,000 of those victims will die in a single attack whose perpetrator(s) will remain anonymous or at least unapprehended.

3. If this happens, how would the frightened public respond? Such a level of dread and helplessness would be beyond any of our experiences. History has shown how quickly public opinion can swing, and how many pundits and politicians would arise to exploit those heightened fears. Wouldn't there be at least a 40% probability that these public fears would lead to laws relinquishing privacy, open inquiry, free expression and other liberties in pursuit of perceived greater safety? Historical examples come from many countries, and from recent decades.

4. Unlimited centralized power has proven most dangerous. Certainly not always, but often; and the more 'absolute' the power attained, the more likely the corruption of the empowered. Fear-driven relinquishment of privacy, open inquiry and free expression could easily evolve into a sustained tyranny. Historical examples abound. Furthermore, the surveillance technology that would allow a government to sustain such oppression has expanded dramatically, and these capabilities, along with impending psychopharmacology and neuroscience capabilities, will grow much greater.
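
Assertion 2's thirty-year figure can be sanity-checked with a one-line conversion (a back-of-envelope sketch of my own; the annual probability is an illustrative assumption, not an estimate from any cited source):

```python
# Convert an assumed annual attack probability into a 30-year cumulative
# probability, assuming independent years (a strong simplification; the
# real risk presumably grows as the technology spreads).

def cumulative_probability(annual_p: float, years: int) -> float:
    """P(at least one event in `years` years) given a constant annual probability."""
    return 1.0 - (1.0 - annual_p) ** years

# An assumed 2.5% chance per year of a >10,000-fatality bioattack:
print(cumulative_probability(0.025, 30))  # ~0.53, i.e. just over 50%
```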

Sustained tyranny is itself an existential risk. In our children's lifetime it is the most likely existential risk. What steps should we, their parents, undertake?

Is the Technology Both Real and Accessible?

Several world-class scientists and other knowledge leaders have shared their concerns about emerging “super-empowering” technologies. Focusing on biotechnology, the first of the often-cited genetic, robotic, nanotech and infotech technologies of concern, examples include:

1. Sir Martin Rees, cosmologist and President of the Royal Society since 2005, predicted in 2002 that “By 2020, bioterror or bioerror will lead to one million casualties in a single event,” asserting that: “Biotechnology is plainly advancing rapidly, and by 2020 there will be thousands-even millions-of people with the capability to cause a catastrophic biological disaster. My concern is not only organized terrorist groups, but individual weirdoes with the mindset of the people who now design computer viruses. Even if all nations impose effective regulations on potentially dangerous technologies, the chance of an active enforcement seems to me as small as in the case of the drug laws.”

2. The U.S. National Academy of Sciences convened a “Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology”, which in 2004 unanimously concluded that “these (biotech capability) categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future.” The seven capabilities cited were to:
1. “Render a vaccine ineffective”.
2. “Confer resistance to therapeutically useful antibiotics or antiviral agents”.
3. “Enhance the virulence of a pathogen or render a pathogen virulent”.
4. “Increase transmissibility of a pathogen.”
5. “Alter the host range of a pathogen.”
6. “Enable the evasion of diagnostic/detection modalities.”
7. “Enable the weaponization of a biological agent or toxin”.

3. Oxford University’s Future of Humanity Institute surveyed participants at its 2008 Global Catastrophic Risks conference and found that over half of respondents believed there was at least a 30% probability that a single “engineered pandemic” would kill “at least one million...before 2100”. Over half also projected at least a 10% probability that such a single attack would kill “at least one billion” people.

4. The (MIT) Technology Review 2006 article “The Knowledge” stated that "there is growing scientific consensus that biotechnology - especially the technology to synthesize ever larger DNA sequences - has advanced to the point that terrorists and rogue states could engineer dangerous novel pathogens."

These concerns are also shared outside the scientific community. As called for by the 9/11 Commission, Congress established a Commission on the Prevention of WMD Proliferation and Terrorism. The very first sentences of the commission’s December 2008 report were: “The Commission believes that unless the world community acts decisively and with great urgency, it is more likely than not that a weapon of mass destruction will be used in a terrorist attack somewhere in the world by the end of 2013. The Commission further believes that terrorists are more likely to be able to obtain and use a biological weapon than a nuclear weapon.” The report later added that “as the anthrax letter attacks of autumn 2001 clearly demonstrated, even small-scale attacks of limited lethality can elicit a disproportionate amount of terror and social disruption”.

Would Someone Actually Use Such a Capability?

How will these new weapons be used? The key is who will decide.

As political theorist John Gray wrote in 2002, "The development and spread of new weapons of mass destruction is a side effect of the growth of knowledge interacting with primordial human needs... It will occur haphazardly, as part of competition and conflict among states, business corporations and criminal networks."

People don’t decide collectively. Wittingly or otherwise, they cooperate to advance what they perceive as their individual self-interest. These perceptions are consistently driven by fear or greed, or by a self-deception founded in fear or greed. Where will fear and greed drive the newly super-empowered? More importantly, where will they lead the most dangerous of the many newly empowered?

Historical examples abound. How many years did Unabomber Ted Kaczynski devote to his bombs? What could he have developed in that time with tomorrow's (today's?) biotechnology? What could Oklahoma City bomber Tim McVeigh have developed had he spent his bomb research time applying open-source genetic engineering? How many (dozens of) militant religious fundamentalist groups are preparing for their 'Christian', 'Islamic', 'Hindu' or 'Judaic' armageddons by enhancing one of God's plagues? What if today's Charlie Manson gathered his longing followers at UC Berkeley?

(Incomplete; text to follow later)

How would the Frightened Public, and Political Elites, React?

Many commentators have noted that technology once discovered cannot be relinquished. They are almost certainly correct in our 2009. Would they also be correct in Orwell's 1984?

How much would majorities of frightened voters sacrifice for greater security? How might the most opportunistic of politicians exploit voters' fears? Historically, how far have voters swung under much less scary circumstances?

Let's begin with the historical record...

Now consider how much worse (>20% likelihood) the impending traumas might be.


(Incomplete; ibid.)

What Technology Really Exists for a Fear-Empowered Despot?

How might a tyranny establish and maintain itself if ever given the opportunity?


(Incomplete; ibid.)

Can't We Wait?

Time constraints are especially challenging because they are, in this case, essentially unknowable. It may be literally impossible to know when it would be too late to act until it is already too late to act. Accelerating biotechnology research will soon lead to unanticipated new capabilities. Without knowing what these new capabilities will be, or even how often their discoveries will be disclosed outside the sponsoring organization, it is impossible to know how long an effective response would take to implement. What is known is that any policy which can control bioweapon acquisition must now cover essentially every country and sub-state participant, including isolated individuals. Absent the most extreme solutions, it is difficult to see how such policies could be effectively implemented in decades, let alone years.

It's Time to Act

Our children deserve specific precautionary steps, and the time to act has come. As nations, but also as universities, firms and individuals, we must think before we further spread WMD knowledge.

We must also prepare for a possible challenge to our cherished liberties. The best single defense of American liberties is neither an anti-ballistic missile system nor the Second Amendment. The best single defense (although no single defense is solely sufficient) is quality education, both in our schools and beyond them. It is the 'middle third' in political involvement, and often in political knowledge, who will control our children's fate. Their level of misinformation and gullibility is well documented.

Defending liberty will never safely rely on any one solution. Our children's liberty certainly deserves defense in depth. This requires thoughtful analysis of how an open society might guard against tyranny even if privacy were substantially curtailed. An example, although certainly not offered as a complete, practical solution, is David Brin's The Transparent Society.

These issues also raise important questions about 'dystopian ethics'. What are we, as individuals or groups, morally obligated to do if we believe that tyranny, or some other impending catastrophe, is possible (or even inevitable)? The most obvious answer is that we should carefully consider that we are probably mistaken. From biblical millennialists to Unabomber Ted Kaczynski, Bruce Ivins and Aum Shinrikyo, others have done great harm because they foresaw world-changing disaster. Open inquiry is the best preventive of such error, but it needs to be open-minded as well as open-sourced. Here we must be our brother's keepers, challenging our facts, our paradigms and our receptiveness.

But truly existential risks compel us to also carefully consider the costs of a 'false negative' error. Accepting that we don't know how much we don't yet know, we cannot rule out that future generations might attain knowledge which makes their lives qualitatively better than our own. They might conceivably 'find God', either figuratively or quite literally. Any 21st-century consideration of acceptable existential risks must accept this possibility, and with it the unknowable likelihood that future lives and costs should be 'premiumed' rather than 'discounted'. Assign any nonzero probability to an infinite future value and the cost/benefit arithmetic is fundamentally simplified.
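
A toy calculation makes the point (my own sketch; every number is a placeholder):

```python
# Toy expected-value comparison (illustrative placeholders throughout).
# If future generations might attain a qualitatively 'infinite' good, then
# any nonzero probability of that outcome dominates the arithmetic.

p_infinite_good = 1e-6         # assumed tiny probability of an 'infinite' future value
finite_benefit_today = 1e12    # assumed finite benefit of accepting the risk now

expected_future_value = p_infinite_good * float('inf')  # inf: any p > 0 yields infinity
print(expected_future_value > finite_benefit_today)     # True, regardless of discounting
```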

Highly recommended for further inquiry:

Global Catastrophic Risks (ed. by Nick Bostrom & M Cirkovic)
Our Final Century (published as Our Final Hour in the US) by Sir Martin Rees
‘Why the Future Doesn’t Need Us’ by Bill Joy
Heresies: Against Progress and Other Illusions by John Gray
Brave New War by John Robb
‘Biowar for Dummies’ by Paul Boutin
‘The Knowledge’ by Mark Williams, in The (MIT) Technology Review, March 2006

Draft (Plea)

This is a work in progress. I will be revising this draft as I complete the sections noted as incomplete above. Your feedback would be sincerely appreciated. For example, am I incorrect that this danger is real? If this is a plausible (10%) risk to our children's future, how could we better promote thoughtful evaluation of prudent preventive steps?

Friday, July 10, 2009

WMD Typologies

Draft for Feedback

The words we use to classify weapons of mass destruction have done much harm to public consideration of emerging catastrophic risks.

What distinguishes good WMD mitigation policy is not whether the weapon is nuclear, biological or chemical. Furthermore, the current classification is worse than meaningless; it is dangerously misleading. For example, how often have policy analyses commingled a non-contagious biological weapon such as anthrax with a contagious one such as smallpox, obscuring the unique strategic threats of each?

Bill Joy's seminal 2000 Wired essay "Why the Future Doesn't Need Us" contained the key to a policy-focused typology. Expanding on his "self-replication" distinction, WMD threats are more clearly understood when distinguished both between contained and self-replicating weapons and between selective and non-selective weapons, yielding categories such as:

Contained Non-Selective (CNS)
Contained Selective (CS)
Self-Replicating Non-Selective (SRNS)
Self-Replicating Selective (SRS)

...plus two special cases of self-replicating weapons:

Self-Replicating with a Lengthy Prodrome (SR-LP)
Self-Replicating and Gradually Incapacitating (SR-GI)

A contained WMD is one which affects a specific (knowable) area for a specific (knowable) time period. Examples include nuclear weapons, all existing chemical weapons, and anthrax and other non-contagious biological weapons. The opposite of a contained WMD is a highly transmissible, contagious disease.

A contained non-selective WMD is one with which the potential deployer cannot effectively select who is killed or incapacitated. The best example is a nuclear weapon, which kills or sickens everyone in a given area. But selectivity is often not a simple, binary concept. For example, a specific nuclear weapon has a reasonably quantifiable blast/fire zone, and a potential deployer might know that prepared people in a defined area outside that zone can prevent radiation illness with certain precautions (special equipment and/or iodine pills, etc.).

A contained selective WMD allows the potential deployer to effectively select who is killed or incapacitated. One example is a chemical weapon which disperses over a short time and for which selected persons could wear preventive respirators or filtering gas masks. Another example would be a genetically engineered anthrax strain which the deployer can effectively vaccinate against or treat but the intended victims cannot. Literary examples include Frank Herbert's novel The White Plague and the plagues of the Book of Exodus.

A self-replicating WMD is one which is likely to expand beyond its original impact area through person-to-person transmission or some equivalent process. One example is the smallpox virus, which is sufficiently contagious to cause widespread transmission beyond the original victims. Two documented historical examples are the Black Death's entry into Europe after plague corpses were catapulted into the besieged Crimean city of Caffa in 1346, and the British distribution of smallpox-infected blankets to Native Americans at Fort Pitt in 1763.

As above, a selective self-replicating WMD would allow the potential deployer to select which persons are killed or incapacitated by the self-replicating weapon. An example would be a contagious disease with a vaccine or treatment available to the perpetrator(s) but not to the target victims.
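
For readers who prefer code, the two axes reduce to a small data model (a minimal sketch of the typology above; the class and field names are my own):

```python
# Minimal data model of the WMD typology above (illustrative; names are mine).
from dataclasses import dataclass

@dataclass
class WMDProfile:
    name: str
    self_replicating: bool  # spreads beyond its original impact area?
    selective: bool         # can the deployer choose who is harmed?

    def category(self) -> str:
        spread = "Self-Replicating" if self.self_replicating else "Contained"
        select = "Selective" if self.selective else "Non-Selective"
        return f"{spread} {select}"

# Examples drawn from the cases discussed above:
for weapon in (WMDProfile("nuclear weapon", False, False),
               WMDProfile("engineered anthrax, deployer-only vaccine", False, True),
               WMDProfile("smallpox", True, False),
               WMDProfile("contagious agent, deployer-only vaccine", True, True)):
    print(f"{weapon.name}: {weapon.category()}")
```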

Two special cases are noteworthy because their tactical advantages might invite consideration of a 'preventive first strike' strategy. A long prodrome, here meaning the time between when a carrier first becomes contagious and when visible symptoms appear, could be highly destabilizing, especially if greater than eighteen months: it could lead a potential deployer to believe (correctly or otherwise!) that they could infect virtually everyone before preventive biocontainment or therapy development had even begun. Similarly, a gradually incapacitating WMD could create the perception that a deployer could render (all?) victims passive or incapable before the attack was even noticed, possibly precluding any effective response.
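
A crude growth calculation shows why (my own arithmetic; the doubling time is an invented assumption, not an epidemiological estimate):

```python
# Crude illustration of silent spread during a long prodrome (assumed numbers).
# With exponential growth, infections after t days at doubling time d is 2**(t/d).

doubling_time_days = 14   # assumed doubling time for a slow, stealthy pathogen
prodrome_days = 18 * 30   # an eighteen-month prodrome, in days

doublings = prodrome_days / doubling_time_days  # ~38.6 doublings
infected = 2 ** doublings                       # from a single index case
print(f"{infected:.2e}")  # ~4e+11, capped in practice by the world's population
```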

How realistic are these dangers? The U.S. National Academy of Sciences convened a “Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology”, which in 2004 unanimously concluded that “these (biotech capability) categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future.” The seven capabilities cited were to:
1. “Render a vaccine ineffective”.
2. “Confer resistance to therapeutically useful antibiotics or antiviral agents”.
3. “Enhance the virulence of a pathogen or render a pathogen virulent”.
4. “Increase transmissibility of a pathogen.”
5. “Alter the host range of a pathogen.”
6. “Enable the evasion of diagnostic/detection modalities.”
7. “Enable the weaponization of a biological agent or toxin”.

Looking forward to the near future, when tens of thousands (millions?) possess the above capabilities, what might traditional game theory suggest to a potential WMD developer about the risks? If you yourself had a contagious biological weapon with a long prodrome, or a self-replicating gradual incapacitator, might you not worry that one or more others could shortly develop and deploy such a weapon? Might someone (incorrectly?) conclude that a preventive first strike was the less risky strategy in such an unending prisoners' dilemma among ten thousand (a million?) players?
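
A small sketch quantifies the incentive (a toy model of my own, assuming independent actors and an invented per-actor probability):

```python
# Toy model of the many-player first-strike incentive (assumed numbers).
# With n capable actors each striking independently with annual probability p,
# any one actor's chance that *someone else* strikes first grows rapidly with n.

def p_someone_else_strikes(p: float, n: int) -> float:
    """P(at least one of the other n-1 actors strikes in a given year)."""
    return 1.0 - (1.0 - p) ** (n - 1)

for n in (10, 10_000, 1_000_000):
    print(n, round(p_someone_else_strikes(1e-6, n), 4))
# 10 -> 0.0; 10,000 -> ~0.01; 1,000,000 -> ~0.63
```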

This is a working draft and your feedback is sincerely appreciated. Especially appreciated are typology improvements and suggestions of where my ideas need clarification.

As noted in the posts below, my primary question is whether emerging catastrophic threats and the resulting public panic might lead to greatly diminished human rights. Policing biotechnology may not be possible in our 2009, but could someone not then use BW fear to justify Orwell's 1984? What is the range of likelihoods that this issue will singularly define our children's future? Is a 0.5% risk one which we are entitled to ignore, at their peril? Who has honestly considered this scenario and still believes that the risk is below 10%?

Monday, June 22, 2009

Biosecurity and Bruce Ivins

Even if Bruce Ivins was not guilty of the 2001 anthrax mailings, which is itself a considerable leap of faith, much that is undisputed fact about the Bruce Ivins story still measures our current biosecurity failings.

It is undisputed that Bruce Ivins had USAMRIID authority to work alone with select agents such as anthrax. It is also undisputed that Dr. Ivins had sought psychiatric care and that at least one respected professional colleague had filed a criminal complaint about his behavior. It is also undisputed that the FBI had all of this information and yet no action was taken. Didn't USAMRIID also have all of the above information?

There are reportedly now 14,000 scientists approved to work with select agents in this country alone. The research sponsor with the greatest resources and incentive for maintaining strong biosecurity was USAMRIID, the same sponsor who controlled Dr. Ivins's access! How bad, then, is the biosecurity at the least biosecure sponsor?

We know that a properly enforced two-man rule would likely have prevented Dr. Ivins from misusing pathogens in his lab (whether he did or not). But the two-man rule is opposed as costly. Is it unnecessary or just inconvenient?

As biotechnology continues to develop, even more dangerous pathogens will emerge, either through discovery in nature or from genetic engineering. Must we wait until it's too late to propose appropriate biosecurity standards for these inevitable circumstances? As Oxford's Dr. Nick Bostrom observed, trial and error is not an effective approach to existential risks. As a very concerned parent, I sometimes allow myself sarcastic license.

What might a BL-5 biosecurity standard require? I suggest a "five-eye rule" whereby all work with certain pathogens, such as smallpox or a future highly contagious, fatal disease (but not all select agents), be performed by two unaffiliated researchers within a lab under 100% video monitoring by a trained remote observer not affiliated with either researcher. Decontamination egress would be controlled by the remote observer. To reduce costs, the lab observer and the remote observer could be trained in biosecurity but without any post-graduate education.

Ideally, the remote observer would be from a different nation than the research, but that level of oversight might need to be part of a 'second round' of reform.
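
Reduced to logic, the proposed rule is a simple authorization check (a sketch of my own proposal above, not any existing biosafety standard; all names are invented):

```python
# Sketch of the proposed 'five-eye rule' as an authorization check
# (models my proposal above; not an existing standard, names invented).
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    affiliation: str
    biosecurity_trained: bool

def five_eye_authorized(researcher_a: Person, researcher_b: Person,
                        remote_observer: Person) -> bool:
    """All three trained, and no two share an affiliation."""
    people = (researcher_a, researcher_b, remote_observer)
    affiliations = {p.affiliation for p in people}
    return all(p.biosecurity_trained for p in people) and len(affiliations) == 3

# Two unaffiliated researchers plus an independent remote observer:
print(five_eye_authorized(Person("A", "Lab-1", True),
                          Person("B", "Lab-2", True),
                          Person("C", "Watch-Center", True)))  # True
```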

What am I missing? Is the risk truly remote? If there were a 0.1%-per-decade risk of a global pandemic, what would be the dollar value of lowering that risk? Given the global social, economic and political implications of a bio-error pandemic, and the resulting fear of another, it must be in the tens of billions. Is it in the trillions?
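
The dollar question reduces to a one-line expected-loss product (a back-of-envelope sketch; the damage figure is my assumption):

```python
# Back-of-envelope expected loss per decade (all inputs are assumptions).
risk_per_decade = 0.001        # the 0.1%-per-decade pandemic risk posited above
assumed_global_damage = 50e12  # assumed total cost of a global pandemic, in dollars

expected_loss = risk_per_decade * assumed_global_damage
print(f"${expected_loss / 1e9:.0f}B per decade")  # $50B: tens of billions
```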

Your thoughts are welcome. Opposing views are especially appreciated.

Monday, May 18, 2009

Super-Empowering Technologies and Totalitarian Risks

What are our ethical responsibilities for minimizing global catastrophic risks if they could plausibly forever end personal privacy, open inquiry and other cherished human rights?

Many world-class scientists and technologists are sharing their concerns regarding global catastrophic risks from biotechnology and other emerging (genetic, robotic, nanotech, neuroscience and information) technologies.

For example, Sir Martin Rees, President of the Royal Society since 2005, predicted in 2002 that “By 2020, bioterror or bioerror will lead to one million casualties in a single event,” asserting that: “Biotechnology is plainly advancing rapidly, and by 2020 there will be thousands-even millions-of people with the capability to cause a catastrophic biological disaster. My concern is not only organized terrorist groups, but individual weirdoes with the mindset of the people who now design computer viruses. Even if all nations impose effective regulations on potentially dangerous technologies, the chance of an active enforcement seems to me as small as in the case of the drug laws.”

The U.S. National Academy of Sciences convened a “Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology”, which in 2004 unanimously concluded that “these (biotech capability) categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future.” The seven capabilities were to:
1. “Render a vaccine ineffective”.
2. “Confer resistance to therapeutically useful antibiotics or antiviral agents”.
3. “Enhance the virulence of a pathogen or render a pathogen virulent”.
4. “Increase transmissibility of a pathogen.”
5. “Alter the host range of a pathogen.”
6. "Enable the evasion of diagnostic/detection modalities.”
7. “Enable the weaponization of a biological agent or toxin”.

Oxford University’s Future of Humanity Institute surveyed participants at its 2008 Global Catastrophic Risks conference and discovered that over half of respondents believed there was at least a 30% probability that a single “engineered pandemic” would kill “at least one million...before 2100”. Over half also projected at least a 10% probability that such a single attack would kill “at least one billion”.

The (MIT) Technology Review 2006 article “The Knowledge” stated that "there is growing scientific consensus that biotechnology - especially the technology to synthesize ever larger DNA sequences - has advanced to the point that terrorists and rogue states could engineer dangerous novel pathogens."

How might such a tragedy occur? We need only consider the cases of Unabomber Ted Kaczynski, Atlanta Olympics bomber Eric Rudolph, Oklahoma City bomber Tim McVeigh or France's AZF extortionists. Consider the warped, multi-year persistence of each. And each 'succeeded', often repeatedly, while remaining entirely unidentified until much later.

As awful as such a man-made pandemic would be by itself, how might it raise the danger of (self-sustaining/permanent?) totalitarianism?

How would survivors respond to such a man-made pandemic? If the source of the disease was unknown, or identified but not locatable, the panic of uncertain infection and likely interim lawlessness would be supplemented by fear that the unknown or unlocated bioweaponeer(s) would strike again with an even more virulent pandemic. Rumors would almost certainly circulate that a second pathogen with a longer prodrome had already spread, adding to fears of an unknown, unavoidable death. Even if the pandemic’s source was killed or otherwise neutralized, the realization would be that, as Sir Martin Rees has said, “millions” of others were also able to cause equal or greater carnage and mayhem at any time.

How much might this change the priorities of political elites and electoral majorities? To what extent would privacy and political liberty be surrendered in search of safety?

Consider the many historical examples, such as American support for the 1942 internment of Japanese-Americans or the Bush administration's interrogation and wiretapping practices after 9/11. The issue here is not the actions taken but rather the breadth of prolonged popular support they received. This is not a uniquely American experience, as history provides examples from most nations. Most important, fears of recurring man-made pandemics would vastly exceed any of the anxieties from those earlier cases.

Clearly the above scenario addresses only one of many dangers from emerging technologies. Nick Bostrom, John Gray, Bill Joy, Sir Martin Rees, John Robb and others have offered many other plausible scenarios of extinction, environmental catastrophe, lost human values and other disasters. My question isn’t whether biotechnology presents the greatest risk, but rather whether any super-empowering technology risk warrants preventive actions and/or mitigation preparations now.

What are our responsibilities to take preventive actions or to prepare to mitigate the consequences? Nick Bostrom and others have offered much about valuing the trade-offs in utilitarian terms. But one unknowable warrants special emphasis. If there is an absolute ethical truth, whether an ultimate purpose for humankind or even a deity, and if it is, or could become, knowable to a future generation, then future life applying that knowledge could be qualitatively, ethically 'better' than life as we know it.

Highly recommended for further inquiry:

  • Global Catastrophic Risks (ed. by Nick Bostrom & M Cirkovic).
  • Our Final Hour (by Sir Martin Rees).
  • Why the Future Doesn’t Need Us (by Bill Joy).
  • Brave New War (by John Robb).
  • Heresies: Against Progress and Other Illusions (by John Gray).

Both contrary arguments and suggestions for clarifying my question are sincerely appreciated.

Thursday, May 14, 2009

2009 H1N1 Prospects

See John Robb's new post at http://globalguerrillas.typepad.com/johnrobb/2009/05/chance-to-mutate.html. He raises an important question about swine flu mutation risks. This question is also being raised by eminent epidemiologists and virologists.

John is the author of the outstanding NYT bestseller, Brave New War, describing how super-empowering technologies will change geopolitics and global security. Buy this book.

As an example of what John's new post questions, the 1918 flu was often called "the three-day flu" because it was so consistently mild during its first pass.

Regarding John's question, how might I find the frequency with which past new flu strains became more lethal as they mutated? Both the WHO and CDC have emphasized this danger, but neither has quantified the recorded historical frequency despite having data on (dozens of?) past new strains.
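
If such counts were ever assembled, a first-pass estimate would be simple (a sketch using invented placeholder counts, not real WHO or CDC data):

```python
# First-pass frequency estimate from historical strain counts
# (the counts below are invented placeholders, not real surveillance data).

def laplace_estimate(became_more_lethal: int, total_strains: int) -> float:
    """Laplace-smoothed frequency, appropriate for a small historical sample."""
    return (became_more_lethal + 1) / (total_strains + 2)

# e.g., if 3 of 24 documented new strains grew more lethal as they mutated:
print(round(laplace_estimate(3, 24), 3))  # ~0.154
```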

Another interesting historical question, one which might yet have immense policy significance: if, as the WSJ recently reported, the best science then available suggested that the 1976 swine flu virus had "only" (sic) a 2-20% chance of becoming a global pandemic, wasn't President Ford's much-maligned aggressive response the best choice given the information then available?

Thursday, April 2, 2009

Inalienable Reality

Sustainable rights will reflect inalienable reality. Regardless of whether they should, they inevitably will.

Schoolchildren studying basic anthropology learn that sustainable social norms and institutions must be consistent with the culture’s values, environment and technology. Either they are consistent, they adapt or 'they' perish.

As technological change accelerates in often unexpected new directions, empowering nations, non-state players and even individuals with unprecedented genetic, robotic, infotech and nanotech capabilities, at some point we imperil those children by ignoring that basic, truly inalienable, reality.

This fact must eventually change the sustainability of personal privacy and liberty as we now know and cherish them. The question is not whether this argument sounds like a Luddite view which we dislike; our children's futures depend on whether this constraint is ever an objective reality and whether we are now approaching that point in time.

As political theorist John Gray wrote in 2002, "The development and spread of new weapons of mass destruction is a side effect of the growth of knowledge interacting with primordial human needs... It will occur haphazardly, as part of competition and conflict among states, business corporations and criminal networks."

I would add that in our new world of globally disseminated technologies, it will not be sufficient that most people abstain from super-empowered violence; we must globally, universally abstain. In a world with super-empowered individuals, the terms would be 'unanimity or catastrophe'. As understatement: homo rapacious has not yet done unanimity well.

The remaining question becomes one of urgency. Is this a 21st-century danger when it has never yet been a problem before? As to just the biotechnology risks, consider this unanimous 2004 assessment sponsored by the U.S. National Academy of Sciences: “These categories represent experiments that are feasible with existing knowledge and technologies or with advances that the Committee could anticipate occurring in the near future.” The report's seven capabilities are to:
  1. “Render a vaccine ineffective”.
  2. “Confer resistance to therapeutically useful antibiotics or antiviral agents”.
  3. “Enhance the virulence of a pathogen or render a pathogen virulent”.
  4. “Increase transmissibility of a pathogen.”
  5. “Alter the host range of a pathogen.”
  6. “Enable the evasion of diagnostic/detection modalities.”
  7. “Enable the weaponization of a biological agent or toxin”.

What would this mean? Would thousands of individuals and small groups soon become capable of anonymously releasing a universally drug resistant tuberculosis, a highly contagious 'bird' flu or a genetically selective plague? Is the National Academies of Science correct that this can be accomplished "with existing knowledge and technologies or with advances...in the near future”?

Who would do such a thing? Surely this world of six to eight billion people might eventually include one or more microbiologically capable Ted Kaczynskis, Tim McVeighs or Eric Rudolphs? Is AZF building a microbiology lab where they may unleash what Royal Society President Sir Martin Rees aptly calls a "bioerror"?

For that matter, might Big Pharma, aspiring little pharma or a rogue state bioweapons lab itself make a contagious, lethal bioerror? Is it possible that a US biodefense lab, whether private, academic or USAMRIID, might (again?) employ a scientist who would release a deadly pathogen? Might it be a contagious disease next time?

If so, what does history, especially this last decade, indicate about how endangered political elites and frightened voters might react? Regarding our political rights, how low would they go?

What am I overlooking or oversimplifying? Are there technical limitations which the NAS Special Committee has underestimated? Are existing regulatory controls sufficient? Can we assume that the people with this new knowledge are universally committed to preventing misuse?

Alternatively, what can be done to build political will for addressing this issue?

Sunday, March 29, 2009

The Technium Discussion

Kevin Kelly has a good new discussion about technology risks at http://www.kk.org/thetechnium/archives/2009/03/reasons_to_dimi.php.

His post lists four arguments against technology, which he summarizes as: "contrary to nature...(contrary to) humanity..., (contrary to) technology itself" and "because it is a type of evil and thus is contrary to God".

He summarizes the 'contrary to technology itself' argument as:

"Technology proceeds so fast it is going to self-destruct. It is no longer regulated by nature, or humans, and cannot control itself. Self-replicating technologies such as robotics, nanotech, genetic engineering are self-accelerating at such a rate that they can veer off in unexpected, unmanageable directions at any moment. The Fermi Paradox suggests that none, or very few civilizations, escape the self-destroying capacity of technology."

That statement immediately reminds me of Bill Joy's Why the Future Doesn't Need Us, John Robb's Brave New War and John Gray's Straw Dogs. As John Gray wrote in 2002: "The development and spread of new weapons of mass destruction is a side effect of the growth of knowledge interacting with primordial human needs. That is why, finally, it is unstoppable. The same is true of genetic engineering...It will occur haphazardly, as part of competition and conflict among states, business corporations and criminal networks". And so it may well "veer off in unexpected, unmanageable directions", as Kevin paraphrased. But as the AIG risk manager undoubtedly once added, 'what's the worst that could realistically happen?'

Much of the discussion is strongly supportive of technology, so his post offers viewpoints contrary to my own. Unfortunately, much of it focuses on the inconveniences of emerging technologies when the urgent issues are existential risks and the sustainability of our core liberties.