Aug 06, 2016

The world continues to be abuzz with talk about the mixed promise and peril of artificial intelligence, both narrow and general. One of the specific concerns is that the robots are coming for our jobs. Fears of technological unemployment are not new, but this time it’s different. This time the machines are coming for every job.

A key question folks are asking is: which jobs will the robots take first?

The conventional wisdom seems to be that robots will take so-called “low skill” jobs first, and that the “high skill” professions will be safe for somewhat longer (although not much – a decade or two perhaps). But this thinking might actually be backwards. The reason is that professional work is often high-stakes work. And who do you want doing the job when your property or future or even your life is on the line? I think the answer is clearly machines.

Brain surgeons and anesthesiologists get paid big bucks not just because their skills are so rare and therefore in short supply, but also because people’s lives are on the line. Lawyers are paid handsomely for similar reasons: people’s futures and livelihoods hang in the balance. Corporate executives make a good deal of money because of the high-stakes decisions they make that affect their company’s shareholders and employees. And so on.

The following matrix maps occupations into four quadrants based on a rough assessment of the skill (as measured by educational barriers to entry) and stakes (as measured by the risks to life and property from incompetence) involved:

TU matrix

Cashiers, for example, have already been partly replaced at supermarkets. But the stakes are low, so how much do consumers actually care whether it is a human or a machine doing the job? The answer, I suspect, is not much. The capacity already exists to replace more jobs like this with machines, but the incentive to do so is largely on the supply side.
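The quadrant idea can be sketched as a toy classifier. To be clear, this is purely illustrative: the numeric scores, the 0.5 cutoff, and the occupation placements below are invented examples echoing the post, not measured data.

```python
# Toy sketch of the skill/stakes quadrant matrix. All scores and
# occupation placements are hypothetical examples, not real data.

def quadrant(skill: float, stakes: float, cutoff: float = 0.5) -> str:
    """Map rough skill and stakes scores (0 to 1) onto one of four quadrants.

    skill  ~ educational barriers to entry
    stakes ~ risk to life and property from incompetence
    """
    skill_label = "high-skill" if skill >= cutoff else "low-skill"
    stakes_label = "high-stakes" if stakes >= cutoff else "low-stakes"
    return f"{skill_label}/{stakes_label}"

# Hypothetical placements, echoing examples from the post:
occupations = {
    "cashier": (0.2, 0.1),      # low barriers to entry, low risk
    "taxi driver": (0.2, 0.8),  # low barriers, lives on the line
    "writer": (0.8, 0.1),       # high barriers, low risk to life/property
    "surgeon": (0.9, 0.9),      # high barriers, lives on the line
}

for job, (skill, stakes) in occupations.items():
    print(f"{job}: {quadrant(skill, stakes)}")
```

The point of the sketch is just that the two axes are independent: a job can score low on one and high on the other, which is exactly where the conventional wisdom and this post part ways.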

But for high-stakes work the answer might be very different.

To start, take the example of self-driving cars. Public opinion here already seems fairly clear: these are going to put a lot of people who drive for a living out of work. And not just because they will make taxi fares and other services cheaper, but – very importantly – because once we can trust the machine with the job, we will overwhelmingly prefer that a machine do the job. Consider: in 2025, once self-driving cars are a mature commercial technology, which would you prefer drive your spouse or child home at 2am – one of their drunk friends, a human taxi driver at the end of a 12-hour shift, or a self-driving Tesla? This is already a major line of Tesla’s PR and marketing: Autopilot saves lives.

Driving is not a “high skill” job, at least in the sense that the educational barriers to entry for the job are low. I suspect consumers would choose the machine alternative in other similarly “low skill” work where lives and property are at stake as well. Would you rather a person or a robot cook your food? (I trust I don’t need to conjure stomach-turning imagery here). Would you rather be ticketed by a completely unbiased and impersonal robotic police officer, or a human one? (If you don’t happen to be an American with brown or black skin, take a moment to imagine how your answer might be different if you were). From a safety point of view, would you rather human construction workers or robotic ones had built your home? And so it goes down the list.

OK, now what about jobs in the high-high quadrant? These professional jobs are the ones that are ostensibly safer from AI takeover, at least for a while longer than their “low skill” counterparts. But is that really the case? An IBM Watson AI system in Japan just successfully diagnosed a rare genetic disease that had a patient’s human doctors stumped. And beyond just diagnosis, I know that if I were given the choice between a robot’s unerring precision and a human brain surgeon’s shaky hands, I would take the robot every time. The same, I’m afraid, would go for my lawyer or tax accountant (if I had one).

So, contrary to conventional wisdom, once AI systems are viable alternatives to humans in high-stakes occupations, I suspect there will be swift and overwhelming demand for the machine option – even if those options remain expensive. And if artists, musicians, writers, and scholars like myself remain employable somewhat longer, it may actually have little to do with the inability of machines to match our skills, and much more to do with the fact that the stakes to life and property are so much lower.

Bottom line: if you’re worried about automation putting you out of work, the stakes of your job may matter more than the skill required to do it. And – unintuitively – low-stakes jobs may well be less vulnerable than high-stakes ones. At least for a little while.

May 18, 2016

Matrix_1

Photo Credit: The Matrix © 1999 Warner Bros.

The Fidelity Gap

Today the distinction between authentic sensory experiences and the synthesized experiences produced by memory, visualization (i.e. our mind’s eye), and computer simulations is clear. Real-world sensory stimuli are incomparably vivid.

But this will not be true for much longer. Sometime later this century computing technologies will almost certainly close the fidelity gap, meaning that they will be capable of rendering totally compelling simulated realities like The Matrix, as well as of allowing us to capture and recall perfect memories of any experiences we have – whether real, imagined, or simulated.

Science and science fiction alike have examined some of the problems this might create. But there are a number of potential benefits as well, and so far they have received much less attention than they are due.

Mo VR Mo Problems?

One potential downside of closing the fidelity gap stems from the fact that we human beings are, at least in part, motivated by the differences between “authentic” experiences based on sensory stimuli and “synthetic” experiences that are either remembered, visualized, or artificially simulated. In other words, one reason we get out of bed in the morning is because nothing in our heads or on a screen compares to the real world. Yet.

But what happens when technology makes remembered, visualized, and simulated experiences just as vivid as – or more vivid than – the real thing? In the relatively near future, computing technology is almost certainly going to eliminate the fidelity gap for memory (via artificial total recall) as well as for both visualization and simulation (via fully immersive virtual reality).

In some scenarios, individuals become lost, trapped, or simply addicted to virtual reality. In other scenarios, entire civilizations retreat into virtual reality. The underlying cautionary theme is to beware the consequences of allowing virtual reality to become so captivating and rich that it offers more utility (variety, pleasure, control, etc.) than the real world.

Another potential downside is that the same technologies that enable us to close the fidelity gap will necessarily enable mind-reading as well, and so threats of abuse and oppression stemming from invasions of privacy are cause for serious concern.

This is well-trodden ground, so I won’t rehash the details here.

Virtual Reality, Real Opportunities

The potential benefits of closing the fidelity gap have received less attention, so let me highlight them in several broad categories.

CONTINUE READING…

Feb 26, 2016

Ostrich

Photo Credit: National Geographic “Animal Myths Busted”.

I am an environmental social scientist by training, and over the last several years I have developed a rather unconventional set of views about the future of nature. The more I have examined and considered the environmental implications of technological change myself, the more I have come to realize how poorly these implications seem to be understood or even recognized by others across the environmental disciplines.

In short, I have learned that we are likely to see the arrival of technologies within just a few decades that to uninformed observers might seem to still lie centuries or millennia away. Science fiction, in other words, will become science fact far sooner than most of my colleagues would dare imagine. And on the whole the implications for the environment are not just extraordinary, but extraordinarily positive: problems that seem utterly intractable today may become solvable in the relatively near future.

Unfortunately, the blindness of the environmental disciplines to the tsunami of radically disruptive technological change barreling toward us is a pristine example of how otherwise highly-educated and intelligent people can arrive with gross overconfidence at spectacularly false conclusions when their reasoning is based on bad information or invalid assumptions.

I am very deeply concerned about this state of affairs because imminent technological change raises a wide range of environmental policy, planning, and ethics questions that I think we must begin to examine very carefully.

So to be clear, let me summarize my line of reasoning here at the outset:

  • Technological change is accelerating, and is being compounded most especially by advances in computing.
  • The implications of technological change over the course of this century are staggering.
  • Technologies that seem thousands of years away to uninformed observers actually lie only a few decades ahead.
  • Intelligent machine labor in particular is going to be a fundamental game-changer, but miniaturization and biotech will be a big deal too.
  • The implications have the potential to be hugely positive for the environment because they may render previously intractable problems solvable.
  • The environmental disciplines are either shamefully oblivious to, or are in near-total denial of, the technological prospects of the next several decades.
  • As a result, the environmental scenarios on decadal scales or longer that are presented as plausible forecasts by the scientific community are, to the contrary, profoundly unrealistic – and unduly pessimistic besides.
  • Some of this ignorance is genuinely innocent, although that is an increasingly unacceptable excuse.
  • Some of this ignorance may be willful, and that is a serious concern with grave consequences for policy and planning.
  • There are a number of good reasons to be wary of new technologies based on our historical experiences.
  • There also seem to be a number of other more cynical reasons to dismiss the potential of technology to redress environmental problems.
  • Regardless, there appears to be an increasingly cult-like antipathy toward technology across the environmental disciplines – as well as within the environmental movement that they inform – that is based not on reason but on a reflexive demonization and dismissal of “techno-fixes”.
  • As the potential of technology to solve major environmental problems becomes steadily clearer to other disciplines such as computer science and engineering, and eventually to the public, the willful ignorance and reflexive opposition toward technology within the environmental disciplines risks becoming a form of outright denialism.

 

CONTINUE READING…

Jan 26, 2013

The Long Term

In recent weeks I have been perusing the seminars and written works of The Long Now Foundation, whose stated mission is “to provide a counterpoint to today’s accelerating culture and help make long-term thinking more common.”

This certainly seems an admirable goal, and the foundation’s projects do a superb job of melding science and engineering together with artistic and cultural sensibilities – a prime example of which is the 10,000 Year Clock, a 200-foot-tall multi-million-dollar monument being built inside a cave in a remote western Texas mountain that, as the name implies, is designed to mark the passage of time for the next ten millennia.

The Long Now Foundation emphasizes the importance of our perception of the passage of time, and indeed our cultural conceptions of the passage of time. (J. Stephen Lansing, for example, shares insights into the role that language plays in shaping our perception and conception of time by discussing the case of Polynesian and Austronesian languages that do not have tenses but instead construe time in “multiple concurrent cycles”). More specifically, The Long Now Foundation asserts that long-term thinking is in short supply, and that in the face of accelerating technological change our culture needs more rather than less of it if we are to avoid both imperiling and impoverishing future generations.

CONTINUE READING…

Oct 17, 2012

I did an AMA (Ask Me Anything) session on Reddit when I first launched Letter to a Conservative Nation in February, and I promised that I would do another as we neared the election.

I had a range of interesting questions last time, especially from right-Libertarians who challenged my thesis that conservatives tend to have a narrower sphere of compassion than liberals, and that the common term we use to describe those who are not sufficiently inclusive of others in their interest-maximizing calculus is selfish. I’m looking forward to fielding these and other challenges again!

I also had a number of interesting questions about on-demand publishing, electronic publishing, and the future of the book publishing industry. I strongly encourage anyone who has ever been interested in writing a book to consider on-demand publishing – which is, in all honesty, a 21st-century form of self-publishing. Today the process is simple, cheap, and delivers incredibly high-quality finished products.

I’ll be giving away free electronic copies of the book in ePub format (should work on most electronic book readers) to the first 20 people who get in touch with me.

Thanks, and ask away!

– Adam

Jul 09, 2012

I’ve recently been debating a handful of people about the validity of mind-body dualism, most notably the Cartesian Dualism proposed by René Descartes (of cogito ergo sum fame: I think, therefore I am).

The modern brain sciences leave little room for doubt that conscious minds are entirely a product of physical brains, but the Cartesian notion that consciousness is somehow a phenomenon that exists independently of its physical substrate has proven remarkably persistent.  It is odd how insistently – even desperately – some people (including some extraordinary philosophers) cling to the idea that consciousness is somehow a magical phenomenon that transcends the physical world.  The experiences of consciousness – the deep azure blue of a summer sky, the crisp taste of an orange, the quiet contentment of sitting beside a fire – are stunning in their richness and variety, but must they also be something more than just jostling atoms, sparks of energy and patterns of information in order to be beautiful?  Must they transcend the world of flesh and dust in order to be miraculous?

Those who cling to dualism seem to need to believe that consciousness must be something more than just the regular, old, dirty, grubby, undignified stuff of matter, energy, and information. This need strikes me as very much like the need to believe in the supernatural in general: in gods, in fate and karma, in higher powers and purpose, in magic.

If science were to show that consciousness is just another emergent property of complex systems, would our egos be bruised in the same way that they were by the scientific discovery that the Earth is not the center of the universe? Human beings seem to have an innate need to feel special, and among philosophers the attachment to dualism – despite a lack of supporting evidence, and much evidence to the contrary from the modern brain sciences – strikes me as egotistical in precisely the way that other forms of faith and superstition so often can be.

Jul 01, 2012

THE FUTURE OF MORALITY

(jump to full essay)

Can we have a science of morality?

What is right and what is wrong? What are good and evil? These questions about the origins of morality, ethics and justice have been the subject of philosophy for millennia, but never science. Unlike philosophy, science demands that any claims made about the universe be not only logically consistent, but supported by testable evidence as well. A science of morality would therefore require empirical data across the full range of relevant spatial scales, from the micro-level of the individual person to the macro-level of our entire species.

An insurmountable obstacle up until now has been that data at the micro-level are inaccessible, locked within the minds of individuals. For more than a century the prevailing view among philosophers and scientists alike has been that these data will remain forever out of reach – that the inner workings of the mind are inherently subjective, with no prospects of ever being observable. So while a great deal of work can be done by making micro-level inferences about individual minds from macro-level observations of human behavior, scholars have so far been critical of any notion that a science of morality might emerge alongside psychology, sociology, anthropology, and the other social and behavioral sciences. But a handful of thinkers believe that this may soon change as a result of the exponential progression of technology.

One of these thinkers is Sam Harris. In his 2010 book, The Moral Landscape, Harris makes a strong case for a future science of morality. He argues that morality is a function of wellbeing and suffering, and that because wellbeing and suffering are a product of our neurological machinery, morality must therefore be measurable at the level of the brain. On this view, a science of morality is both a logical and an inevitable extension of the neurological and mental health sciences.

In this essay I am going to argue that although Harris’s Moral Landscape is based upon a futuristic vision of the sciences and technologies related to the human brain, this vision is not nearly futuristic enough. Harris’s arguments are not wrong per se, but rather are incomplete because, like other cognitive scientists, he is still implicitly basing his analysis on the assumption that human biology is immutable. Harris is right to assume that the science of morality will be a brain science, but he is wrong to assume that in the future human brains will be no different than they are today. By the end of this century we will have the technology to dramatically modify how our brains work, and the moral implications of re-engineering our minds are nothing short of staggering.

The impending availability of empirical data at the level of the brain means that age-old questions of right and wrong, and of good and evil, will become scientific questions in the near future. A science of morality is indeed in the offing. But when we abandon the assumption of biological immutability we open the door to a more fundamental debate than simply what is moral: we can begin to ask what should be moral, and why.

Let me begin by providing some conceptual context for Harris’s Moral Landscape.

CONTINUE READING

Jul 01, 2012

Welcome to my new site! I’ve changed over to a blog format so that I can more easily add new content and invite participation and comments. The focus of the site will continue to be on things I find intellectually interesting that lie outside the scope of my formal research work, such as moral philosophy and futurism. Thanks for stopping by!

– Adam