geert lovink on 15 Feb 2001 04:59:18 -0000



<nettime> Technology, the double-edged sword


(Below is a dialogue between Ray Kurzweil, author of The Age of Spiritual
Machines, and Michael Dertouzos, director of the MIT Laboratory for
Computer Science. The debate is part of an ongoing saga about Bill Joy's
warnings, published in Wired last April, about nanotechnology research
getting out of control. The question for me would be why passions in the
US run so high about a potentially dangerous future technology, whereas
the corporate-governmental takeover of the Internet right now draws so few
crowds. Can only the future and past be full of horror, not the present?
/geert)

From: MIT Technology Review

   http://www.technologyreview.com/magazine/jan01/dertouzoskurzweil.asp

 January/February 2001

KURZWEIL vs. DERTOUZOS

By Ray Kurzweil and Michael Dertouzos

 In our September issue, Michael Dertouzos wrote a column, "Not by Reason
 Alone," that took Bill Joy of Sun Microsystems to task for a piece Joy
 had written in Wired. In his Wired article, Joy argued that humanity
 should renounce certain lines of research, including nanotechnology,
 because of the dangers they pose. Dertouzos argued that Joy's view was
 flawed because his predictions were based on reason which, taken alone,
 is an inadequate guide to the future. Dertouzos' column drew an
 impassioned response from Ray Kurzweil, author of The Age of Spiritual
 Machines. We print Kurzweil's letter and Dertouzos' rejoinder.

 RAY KURZWEIL

 Although I agree with Michael Dertouzos' conclusion in rejecting Bill
 Joy's prescription to relinquish "our pursuit of certain kinds of
 knowledge," I come to this view by a very different route. While I am
 often paired with Bill Joy as the technology optimist versus Bill's
 pessimism, I do share his concerns about the dangers of self-replicating
 technologies. Michael, however, is being shortsighted in his skepticism.

 Michael writes that "just because chips... are getting faster doesn't mean
 they'll get smarter, let alone lead to self-replication." First of all,
 machines are already "getting smarter." As just one of many contemporary
 examples, I've recently held conversations with a person who speaks
 only German by having software translate my English speech in real time
 into human-sounding German speech (combining speech recognition,
 language translation and speech synthesis) and similarly convert their
 spoken German replies into English speech. Although not perfect, this
 capability was not
 feasible at all just a few years ago. The intelligence of our technology
 does not need to be at human levels to be dangerous. Second, the
 implication that self-replication is harder than intelligence is not
 accurate. Software viruses, although not very intelligent, are
 self-replicating as well as being potentially destructive. Bioengineered
 biological viruses are not far behind. As for nanotechnology-based
 self-replication, that's further out, but the consensus in that community
 is that this will be feasible in the 2020s, if not sooner.
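
 [A minimal sketch, in Python, of the three-stage pipeline Kurzweil
 describes above. Every function name here is an invented placeholder
 standing in for a real component; none refers to an actual library or
 API.]

     # Hypothetical pipeline: recognize English speech, translate the
     # transcript, then synthesize German speech. All three stages are
     # illustrative stubs, not real systems.

     def recognize_speech(audio, language):
         """Speech recognition: audio in, text transcript out."""
         ...

     def translate_text(text, source, target):
         """Language translation: text in one language to another."""
         ...

     def synthesize_speech(text, language):
         """Speech synthesis: text in, spoken audio out."""
         ...

     def translate_utterance(audio, source="en", target="de"):
         # The stages compose in sequence; handling the spoken German
         # replies simply swaps the source and target languages.
         transcript = recognize_speech(audio, language=source)
         translated = translate_text(transcript, source=source, target=target)
         return synthesize_speech(translated, language=target)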

 Many long-range forecasts of technical feasibility in future time periods
 dramatically underestimate the power of future technology because they
 are based on what I call the "intuitive linear" view of technological
 progress rather than the "historical exponential" view. When people think
 of a future period, they intuitively assume that the current rate of
 progress will continue for the period being considered. However, careful
 consideration of the pace of technology shows that the rate of progress
 is not constant; but it is human nature to adapt to a changing pace, so
 the pace intuitively appears to be holding steady. It is typical,
 therefore, that even sophisticated commentators, when considering the
 future, extrapolate the current pace of change over the next 10 or 100
 years to form their expectations. This is why I call this way of looking
 at the future the "intuitive linear" view.

 But any serious consideration of the history of technology shows that
 technological change is at least exponential, not linear. There are a
 great many examples of this, including exponential trends in computation,
 communication, brain scanning, miniaturization and multiple aspects of
 biotechnology. One can examine these data in many different ways, on many
 different time scales and for a wide variety of phenomena, and in each
 case one finds (at least) double exponential growth, a phenomenon I call the
 "law of accelerating returns." The law of accelerating returns does not
 rely on an assumption of the continuation of Moore's law, but is based on
 a rich model of diverse technological processes. What it clearly shows is
 that technology, particularly the pace of technological change, advances
 (at least) exponentially, not linearly, and has been doing so since the
 advent of technology. That is why people tend to overestimate what can be
 achieved in the short term (because we tend to leave out necessary
 details) but underestimate what can be achieved in the long term (because
 exponential growth is ignored).
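
 [A minimal numerical sketch, in Python, of the gap between the two
 views. The starting level and growth rates below are arbitrary
 illustrative assumptions, not Kurzweil's figures.]

     import math

     level = 1.0   # capability index today (arbitrary units)
     rate = 0.5    # current observed gain per year

     for years in (10, 25, 50):
         # "Intuitive linear" view: today's rate of progress, held constant.
         linear = level + rate * years
         # "Historical exponential" view: progress compounds continuously.
         exponential = level * math.exp(rate * years)
         print(f"{years:2d} years: linear {linear:6.1f}, "
               f"exponential {exponential:18.1f}")

 [Under these assumptions the two projections differ by a factor of about
 25 after 10 years and by more than a billion after 50, which is the sense
 in which the linear view "dramatically underestimates."]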

 Michael's argument that we cannot always anticipate the effects of a
 particular technology is irrelevant here. These exponential trends in
 computation and communication technologies are greatly empowering the
 individual. Of course, that's good news in many ways. These trends
 underlie the pervasive movement towards democratization and are
 reshaping power relations at all levels of society. But these
 technologies are also empowering and amplifying our destructive impulses.
 It's not necessary to anticipate all of the ultimate uses of a technology
 to see that there is danger in, for example, every college biotechnology
 lab having the ability to create self-replicating biological pathogens.

 However, I do reject Joy's call for relinquishment of broad areas of
 technology (such as nanotechnology), even though I do not share Michael's
 skepticism about the feasibility of these technologies. Technology has
 always been a double-edged sword. We don't need to look any further than
 today's technology to see this. If we imagine describing the dangers that
 exist today (enough nuclear explosive power to destroy all mammalian
 life, just for starters) to people who lived a couple of hundred years
 ago, they would think it mad to take such risks. On the other hand, how
 many people in the year 2001 would really want to go back to the short,
 brutish, disease-filled, poverty-stricken, disaster-prone lives that 99
 percent of the human race struggled through a couple of centuries ago?

 People often go through three stages in examining the impact of future
 technology: awe and wonderment at its potential to overcome age-old
 problems, then a sense of dread at a new set of grave dangers that
 accompany these new technologies, followed, finally and hopefully, by the
 realization that the only viable and responsible path is to set a careful
 course that can realize the promise while managing the peril.

 The continued opportunity to alleviate human distress is one important
 motivation for continuing technological advancement. Also compelling are
 the already apparent economic gains, which will continue to accelerate in the
 decades ahead. There is an insistent economic imperative to continue
 technological progress: relinquishing technological advancement would be
 economic suicide for individuals, companies and nations.

 Which brings us to the issue of relinquishment, which is Bill Joy's most
 controversial recommendation and personal commitment. Forgoing fields
 such as nanotechnology is untenable. Nanotechnology is simply the
 inevitable end result of a persistent trend toward miniaturization that
 pervades all of technology. It is far from a single centralized effort
 but is being pursued by a myriad of projects with many diverse goals.

 Furthermore, abandonment of broad areas of technology will only push them
 underground, where development would continue unimpeded by ethics and
 regulation. In such a situation, it would be the less stable, less
 responsible practitioners (for example, the terrorists) who would have
 all the expertise.

 Technology will remain a double-edged sword, and the story of the 21st
 century has not yet been written. It represents vast power to be used for
 all humankind's purposes. We have no choice but to work hard to apply
 these quickening technologies to advance our human values, despite what
 often appears to be a lack of consensus on what those values should be.

 MICHAEL DERTOUZOS

 In my column, I observed that we have been incapable of judging where
 technologies are headed, and hence that we should not relinquish a new
 technology based strictly on reason. Ray agrees with my conclusion, but for a
 different reason: He sees technology growing exponentially, thereby
 offering us the opportunity to alleviate human distress and hasten future
 economic gains. From his perspective, my point is "irrelevant," and my
 views on the future of technology are "skeptical." Let's punch through to
 the underlying issues, which are vital, for they point at a fundamental
 and all-too-often ignored relationship between technology and humanity.

 Technologies have undergone dramatic change in the last few centuries.
 But people's basic needs for food, shelter, nurturing, procreation and
 survival have not changed in thousands of years. Nor has the rapid growth
 of technology altered love, hate, spirituality or the building and
 destruction of human relationships. Granted, when we are in the frying
 pan, surrounded by the sizzling oil of rapidly changing technologies, we
 feel that everything around us is accelerating. But, from the longer
 range perspective of human history and evolution, change is far more
 gradual. The novelty of our modern tools is counterbalanced by the
 constancy of our ancient needs.

 Our humanity meets technology in other ways, too: In forecasting the
 future of technology, Ray laments that most people use "linear thinking"
 that builds on existing patterns, thereby missing the big "nonlinear"
 ideas that are the true drivers of change. Once again, this is only half
 the story: In the last three decades, as I witnessed the new ideas and
 the 50-some startups that arose from the MIT Laboratory for Computer
 Science, I observed a pattern: Every successful technological innovation
 is the result of two simultaneous forces -- a controlled insanity
 needed to break away from the stranglehold of current reason and ideas,
 and a disciplined assessment of potential human utility, to filter out
 the truly absurd. Focusing only on the wild part is not enough: Without a
 check, it often leads to exhibitionistic thinking, calculated to shock.
 Wild ideas can be great. But I draw a hard line when such ideas are
 paraded in front of a lay population as inevitable, or even likely.

 That is the case with much of the futurology in today's media, because of
 the high value we all place on entertainment. With all the talk about
 intelligent agents, most people think they can go buy them in the corner
 drugstore. Ray, too, brings up his experience with speech translation to
 demonstrate computer intelligence. The Lab for Computer Science is
 delightfully full of Victor Zue's celebrated systems that can understand
 spoken English, Spanish and Mandarin, as long as the context is
 restricted, for example to asking about the weather or booking an
 airline flight. Does that make them intelligent? No. Conventionally,
 "intelligence" is centered on our ability to reason, even imperfectly,
 using common sense. If we dub as intelligent, often for marketing or
 wishful-thinking purposes, every technological advance that mimics a tiny
 corner of human behavior, we will be distorting our language and
 exaggerating the virtues of our technology. We have no basis today to
 assert that machine intelligence will or will not be achieved. Stating
 that it will go one way or the other is to assert a belief, which is
 fine, as long as we say so. Does this mean that machine intelligence will
 never be achieved? Certainly not. Does it mean that it will be achieved?
 Certainly not. All it means is that we don't know -- an exciting
 proposition that motivates us to go find out.

 Attention-seizing, outlandish ideas are easy and fun to concoct. Far more
 difficult is to pick future directions that are likely. My preferred way
 of doing this, which has served me well, though not flawlessly, for the
 last 30 years, is this: Put in a salad bowl the wildest, most
 forward-thinking technological ideas that you can imagine. (This is the
 craziness part.) Then add your best sense of what will be useful to
 people. (That's the rational part.) Start mixing the salad. If you are
 lucky, something will pop up that begins to qualify on both counts. Grab
 it and run with it, since the best way to forecast the future is to build
 it. This forecasting approach combines "nonlinear" ideas with the
 "linear" notion of human utility, and with a hopeful dab of serendipity.

 Ray observes that technology is a double-edged sword. I agree, but I
 prefer to think of it as an axe that can be used to build a house or chop
 the head off an adversary, depending on intentions. The good news is that
 since the angels and the devils are inside us, rather than within the
 axe, the ratio of good to evil uses of a technology is the same as the
 ratio of good to evil people who use that technology... which stays
 pretty constant through the ages. Technological progress will not
 automatically cause us to be engulfed by evil, as some people fear.

 But for the same reason, potentially harmful uses of technology will
 always be near us, and we will need to deal with them. I agree with Ray's
 suggestions that we do so via ethical guidelines, regulatory oversight,
 immune responses and computer-assisted surveillance. These, however, are
 partial remedies, rooted in reason, which has repeatedly let us down in
 assessing future technological directions. We need to go further.

 As human beings, we have a rational, logical dimension, but also a
 physical, an emotional and a spiritual one. We are not fully human unless
 we exercise all of these capabilities in concert, as we have done
 throughout the millennia. To rely entirely on reason is to ascribe
 omniscience to a few ounces of meat, tucked inside the skull bones of
 antlike creatures roaming a small corner of an infinite
 universe -- hardly a rational proposition! To live in this increasingly
 complex, awesome and marvelous world that surrounds us, which we barely
 understand, we need to marshal everything we've got that makes us human.

 This brings us back to the point of my column, which is also the main
 theme of this discussion: When we marvel at the exponential growth of an
 emerging technology, we must keep in mind the constancy of the human
 beings who will use it. When we forecast a likely future direction, we
 need to balance the excitement of imaginative "nonlinear" ideas with
 their potential human utility. And when we are trying to cope with the
 potential harm of a new technology, we should use all our human
 capabilities to form our judgment.

 To render technology useful, we must blend it with humanity. This process
 will serve us best if, alongside our most promising technologies, we
 bring our full humanity, augmenting our rational powers with our
 feelings, our actions and our faith. We cannot do this by reason alone!

#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: majordomo@bbs.thing.net and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net