January 25, 2013
In recent weeks I have been perusing the seminars and written works of The Long Now Foundation, whose stated mission is “to provide a counterpoint to today’s accelerating culture and help make long-term thinking more common.”
This certainly seems an admirable goal, and the foundation’s projects do a superb job of melding science and engineering together with artistic and cultural sensibilities – a prime example of which is the 10,000 Year Clock, a 200-foot-tall multi-million-dollar monument being built inside a cave in a remote western Texas mountain that, as the name implies, is designed to mark the passage of time for the next ten millennia.
The Long Now Foundation emphasizes the importance of our perception of the passage of time, and indeed our cultural conceptions of the passage of time. (J. Stephen Lansing, for example, shares insights into the role that language plays in shaping our perception and conception of time by discussing the case of Polynesian and other Austronesian languages that do not have tenses but instead construe time in “multiple concurrent cycles.”) More specifically, The Long Now Foundation asserts that long-term thinking is in short supply, and that in the face of accelerating technological change our culture needs more rather than less of it if we are to avoid both imperiling and impoverishing future generations.
During my master’s program at the University of Michigan’s School of Natural Resources and Environment I received two years of formal scientific training in how to understand the sustainability and resilience of complex systems, in both theoretical and empirical terms. This training placed much the same importance on long-term thinking as The Long Now Foundation does, and for similar reasons: both are predicated on an underlying set of assumptions about the finitude and fragility of our world.
In this essay I am going to revisit three of these assumptions, and ask which, if any, are likely to continue to hold true for the indefinite future – say, for the next 10,000 years. I hope to show that they actually reduce to a single assumption that will inevitably – and rather quickly – prove to be false: that biology, whether human or nonhuman, is immutable. I hasten to add, however, that this emphasizes rather than diminishes the importance of long-term thinking and environmental conservation.
July 1, 2012
Can we have a science of morality?
What is right and what is wrong? What are good and evil? These questions about the origins of morality, ethics, and justice have been the subject of philosophy for millennia, but never of science. Unlike philosophy, science demands that any claims made about the universe be not only logically consistent but also supported by testable evidence. A science of morality would therefore require empirical data across the full range of relevant scales, from the micro-level of the individual person to the macro-level of our entire species. An insurmountable obstacle until now has been that data at the micro-level are inaccessible, locked within the minds of individuals. For more than a century the prevailing view among philosophers and scientists alike has been that these data will remain forever out of reach – that the inner workings of the mind are inherently subjective, with no prospect of ever being observable. So while a great deal of work can be done by making micro-level inferences about individual minds from macro-level observations of human behavior, scholars have so far been critical of any notion that a science of morality might emerge alongside psychology, sociology, anthropology, and the other social and behavioral sciences. But a handful of thinkers believe that this may soon change as a result of the exponential progression of technology.
One of these thinkers is Sam Harris. In his 2010 book, The Moral Landscape, Harris makes a strong case for a future science of morality. He argues that morality is a function of wellbeing and suffering, and that because wellbeing and suffering are products of our neurological machinery, morality must therefore be measurable at the level of the brain. On this view, a science of morality is both a logical and an inevitable extension of the neurological and mental health sciences.
In this essay I am going to argue that although Harris’s Moral Landscape is based upon a futuristic vision of the sciences and technologies related to the human brain, this vision is not nearly futuristic enough. Harris’s arguments are not wrong per se, but rather incomplete, because like other cognitive scientists he still implicitly bases his analysis on the assumption that human biology is immutable. Harris is right to assume that the science of morality will be a brain science, but he is wrong to assume that future human brains will be no different from those of today. By the end of this century we will have the technology to dramatically modify how our brains work, and the moral implications of re-engineering our minds are nothing short of staggering.
The impending availability of empirical data at the level of the brain means that age-old questions of right and wrong, and of good and evil, will become scientific questions in the near future. A science of morality is indeed in the offing. But when we abandon the assumption of biological immutability we open the door to a more fundamental debate than simply what is moral: we can begin to ask what should be moral, and why.
Let me begin by providing some conceptual context for Harris’s Moral Landscape.