Wednesday, February 17, 2010

Anihilism and Open Society

The following argument is a syllogism, proposing that a comprehensive basis for ethical choices can be derived from how much we do not yet know:

First, there is so much we don't yet know. There are fundamental open questions and incomplete answers in physics, cosmology, neuroscience, psychology, etc. Consider how many of the scientific discoveries of the last decades were unexpected, and how many former scientific certainties are now verifiably incorrect. We must reasonably assume that we cannot now know how much we don't know.

Second, given this evidence of ignorance, it is simply prudent to assume for now that there might be a knowable and actionable higher purpose for humanity; not because we know there is one, but because we cannot possibly now have enough understanding to definitively presume otherwise. For example, if my dog, or an ant, or we ourselves for that matter, cannot fully understand human consciousness, how can we today presume to understand all possible paradigms of theism so fully as to reasonably assume them all disproven?

And humanity's purposor(s) certainly need not be omnipotent, omniscient, and eternal, as in the sacred texts of old. It/they need only be more sapient than we currently are. Given how much new understanding has surprised us recently, who would today consider that such a high standard? And they need not have literally created our higher purpose; they may instead simply add enough to our knowledge that we can understand, and supportively act upon, its true nature.

If we don't know how many billions of solar systems there are, how can we have any certainty about the probability of more technologically advanced interstellar travelers? If we can't explain replicated physics observations without 'stringing' together multiple universes, how can we have any certainty that we accurately perceive a unitary WYSIWYG world?

Just one example of a currently plausible, 'Godless', more knowledgeable life form comes from Nick Bostrom's Philosophical Quarterly essay 'Are You Living in a Computer Simulation?', which I summarize as:

IF human (or ANY other sapient being) technology is capable of eventually creating computer simulations (OR other complex perceptions) which appear to be as real as our current experiences,

And IF there is no PERSISTENT (essentially 100%) obstacle to human (OR other self-aware, technological) beings surviving until that technology is achieved,

Then EITHER those controlling use of that technology would almost never (essentially nil %) choose to create such artificial perceptions (WITH the parameters we now perceive),

OR the odds that we are not living in such an artificial perception are vanishingly low.

For example, the continuing world we (I?) experience is, in terms of probability, a single observation. If sufficiently advanced technological societies chose to create 'only' one thousand such artificial perceptions (i.e. computer simulations), then the probability that we (I?) are NOT now experiencing one of those simulations as our reality would be roughly one tenth of one percent (one chance in 1,001).
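A minimal sketch of that arithmetic, in Python; the figure of one thousand simulations is just the illustrative assumption from the example above, not a claim about actual numbers:

    # Illustrative arithmetic only: one unsimulated ("base") history plus N
    # indistinguishable simulated histories, each equally likely to be the one
    # we are now experiencing.
    def probability_not_simulated(num_simulations: int) -> float:
        total_histories = num_simulations + 1  # the simulations plus the one base history
        return 1 / total_histories

    # With the one thousand simulations assumed above:
    print(probability_not_simulated(1000))  # ~0.000999, roughly one tenth of one percent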

Furthermore, even if we are the only beings capable of self-awareness anywhere in the Universe, or alternatively if Deism is correct and all other such beings will forever choose not to interfere in our experience under any circumstances, so much is now unknown that we cannot yet know whether there is an actionable higher purpose which we could someday determine on our own. It is even possible that such truth is already known to some people, and that the rest of us are simply unaware of, or unconvinced by, what they have already discovered.

Third, if a knowable, actionable higher human purpose is truly plausible, this alone provides the only proper basis for all our choices. For the true (opportunity) cost (to the universe) of our having such a purpose and not finding it, or of unnecessary delay in finding and following it, would be, quite literally, beyond our current comprehension. In the event that such a higher purpose doesn't exist, life is fundamentally meaningless and the incremental cost of having looked for it in vain would be trivial (most likely negative, for reasons beyond the scope of this posting; if unsure, simply ask your mother to explain).

Regardless of the probabilities involved, which I argue are now fundamentally unknowable, the probability-adjusted present value of acquiring, and acting according to, that possible absolute human purpose(s) must exceed the value of any other outcome. This derives only from the presumption that the qualitative value of a more purposeful life might be infinitely better (not necessarily preferable, as we must assume we do not currently know the basis for better living, but rather ethically better for humanity OR the Universe). Simple arithmetic then concludes this syllogism, which can be viewed as an expansion of Pascal's Wager.
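To make that 'simple arithmetic' explicit, here is a minimal expected-value sketch in Python. The probability and payoff figures are placeholders for quantities the argument holds to be unknowable; the point is only that any nonzero chance of an effectively unbounded payoff outweighs the trivial, bounded cost of searching in vain.

    # Placeholder figures for an argument whose actual probabilities are held to be
    # unknowable: any nonzero p, paired with an unbounded payoff, makes searching
    # the higher expected-value choice.
    p_purpose_exists = 1e-6              # assumed nonzero; the true value is unknowable
    value_if_found = float("inf")        # "beyond our current comprehension"
    value_if_search_is_in_vain = -1.0    # trivial cost (the post argues it may even be a gain)

    ev_search = (p_purpose_exists * value_if_found
                 + (1 - p_purpose_exists) * value_if_search_is_in_vain)
    ev_no_search = 0.0                   # baseline: no search undertaken

    print(ev_search > ev_no_search)  # True: the wager favors the search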

In summary, Anihilism means basing ethical choices on the belief that moral nihilism might be inaccurate. This one assumption calls us to specific actions: to aggressively search for, and then seek consensus to act upon, the best knowable and actionable basis for human ethics.

The ethical implications of Anihilism, for a Humanist, a traditional Utilitarian, or even a Libertarian, and especially for an Agnostic or a 'Liberal Theist', are identical. Wouldn't the only prudent choice be those actions which best promote the identification and broad evaluation of possible higher purposes (including the search for external purposors)? If correct, this would include education for all who could contribute to the search, open inquiry, leisure time for inquiry (anti-materialism), and so on. Central to any such application would be the defense of open society, to preserve the free and open exchange of observations, ideas, and questions.

But such an ethical system would not be without controversy. One of the more troubling possible ethical implications concerns the issue of eugenics. Another concerns personal liberties, such as a perceived liberty to engage in sloth or to waste limited resources on hedonistic pursuits. Like any absolutism, proactive anti-nihilism is not a risk-free, cost-free basis for ethical decisions.

But IF these three core assumptions (First, Second, and Third above) are valid, what other basis for action could be ethically valid?


Note: This post is a work in progress, and your suggestions, questions, and concerns are sincerely appreciated. I am intensely interested in feedback regarding both my core argument and ways to improve the clarity of my explanation.