www.nettime.org
Nettime mailing list archives

<nettime> Jaron Lanier's acceptance speech of the German Booksellers' Peace Prize
Patrice Riemens on Mon, 13 Oct 2014 19:15:28 +0200 (CEST)



<nettime> Jaron Lanier's acceptance speech of the German Booksellers' Peace Prize


Original to: http://www.friedenspreis-des-deutschen-buchhandels.de/819335/
bwo Barbara Strebel


Jaron Lanier

"High tech peace will need a new kind of humanism"

This storied award cannot be given just to me. I can only accept it on
behalf of the global community of digital activists and idealists, even
though many of us disagree profoundly with each other. I also accept this
award in honor of the life of the late Frank Schirrmacher, who was a
fountain of light in our times. He will be terribly missed.


Even though I'd like to give a talk that is mostly positive and inspiring,
in order to be a realist I must sometimes be a little dark. When one
trusts in realism enough, one can burn through the indulgences of
darkness. It often turns out that there is light waiting on the other
side.

Ours is a confusing time. In the developed world we have enjoyed affluence
for long enough to have a hard time appreciating it. We especially love
our gadgets, where we can still find novelty - but we also have strong
evidence that we would be peering over the edge of a precipice if we
opened our eyes more often.

It pains me to intone the familiar list of contemporary perils: Climate
change first of all; population and depopulation spirals utterly out of
sync with our societies; our inability to plan for the decline of cheap
fossil fuels; seemingly inescapable waves of austerity; untenable trends
of wealth concentration; the rise of violent extremisms in so many ways in
so many places… Of course all of these processes are intertwined with one
another.

Given this big picture, it certainly came as a surprise to many of us (to
me most of all) that this year's Peace Prize of the German Book Trade was
given to a figure such as myself who is associated with the rise of
digital technologies. Aren't digital toys just a flimsy froth that
decorates big dark waves?

Digital designs have certainly brought about noisy changes to our culture
and politics.

Let?s start with some good news. We have gotten a first peek at what a
digitally efficient society might be like, and despite the ridiculousness
of the surveillance economy we seem to have chosen so far, we must not
forget that there?s a lot to like about what we have seen.

Waste can be systemically reduced, it turns out, just when we must become
more efficient to combat climate change. For instance, we have learned
that solar power performs better than many suspected it would, though it
must be combined with a smart grid to be enjoyed with reliability. This is
just the sort of positive option that my colleagues and I had hoped might
come about through digital networking.

But the practical hopes for digital networks have also been accompanied by
a symbolic, almost metaphysical project. Digital technology has come to
bear the burden of being the primary channel for optimism in our times.
This, after so many Gods have failed. What an odd fate for what started
out as a rather sterile corner of mathematics!

Digital cultural optimism is not insane. We have seen new patterns of
creativity and perhaps have even found a few new tendrils of empathy
transcending what used to be barriers of distance and cultural difference.
This sort of pleasure has perhaps been over-celebrated by now, but it is
real. For a trivial but personal example, how lovely that I am now in
touch with oud players around the world, and that I can rehearse a concert
over the 'net. It really is great fun.

I just mentioned some of the good stuff, but we have also famously used
digital toys to acquiesce to cheap and casual mass spying and
manipulation; we have created a new kind of ultra-elite, supremely wealthy
and untouchable class of technologists; and all too often we now settle
into a frenzy of digitally efficient hyper-narcissism.

I still enjoy technology so much that I can hardly express it. Virtual
Reality can be fun and beautiful. And yet here I am, so critical. To avoid
contradictions and ambiguities is to avoid reality.

It is a question pondered by online commentators many thousands of times a
day. To render opinions on Internet culture can seem as useless as
dripping water from an eyedropper onto a sidewalk in a rainstorm. Anyone
who speaks online knows what it's like these days. You either huddle with
those who agree, or else your opinion is instantly blended into grey mush
by violent blades.

Thesis and antithesis, one hand and the other, no longer lead to a higher
synthesis in the online world. Hegel has been beheaded. Instead there are
only statistical waves of data, endlessly swirled into astonishing
fortunes by those who use it to calculate economic advantages for
themselves.

The Peace Prize of the German Book Trade is associated with books, so in
this era of digital takeover we must ask, "What is a book?"

The Internet is used to comment on the Internet as much as it is used for
pornography or cat pictures, but it is really only media external to the
Internet, books in particular, that can provide perspective or
synthesis. That is one reason the Internet must not become the sole
platform of communication. It serves us best when it isn't both subject
and object.

Thus a creature of digital culture such as myself writes books when it is
time to look at the big picture. There is a chance that a reader will read
a whole book. There is at least an extended moment that I and a reader
might share.

If a book is only a type of manufactured object made of paper, then it can
only be celebrated in the way we might celebrate clarinets or beer. We
love these things, but they are only particular designs, evolved products
with their own trade fairs and sub-cultures.

A book is something far more profound. It is a statement of a particular
balance between individual personhood and human continuity. Each book has
an author, someone who took a risk and made a commitment, saying, "I have
spent a substantial slice of my short life to convey a definite story and
a point of view, and I am asking you to do the same to read my book: can I
earn such a huge commitment from you?" A book is a station, not the
tracks.

Books are a high stakes game, perhaps not in terms of money (compared with
other industries), but in terms of effort, commitment, attention, the
allocation of our short human lives, and our potential to influence the
future in a positive way. Being an author forces one into a humanizing
form of vulnerability. The book is an architecture of human dignity.

A book in its very essence asserts that individual experience is central
to meaning, for each book is distinct. Paper books are by their nature not
mushed together into one collective, universal book. We have come to think
it is normal for there to be a single Wikipedia article about a humanities
topic for which there really can't be only one optimized telling; most
topics are not like math theorems.

In the print era there were multiple encyclopedias, each announcing a
point of view, and yet in the digital era there is effectively only one.
Why should that be so? It is not a technical inevitability, despite
"network effects." It is a decision based on unquestioned but shoddy dogma
that ideas in themselves ought to be coupled to network effects. (It is
sometimes said that the Wikipedia will become the memory for a global
artificial intelligence, for instance.)

Books are changing. Some of the metamorphosis is creative and fascinating.
I am charmed by the thought of books that will someday synchronize to
virtual worlds, and by other weird ideas.

But too much of the metamorphosis is creepy. You must now, suddenly,
subject yourself to surveillance in order to read an eBook. What a
peculiar deal we have made! In the past we struggled to save books from
the flames, but now books have been encumbered with duties to report your
reading conduct to an opaque network of high tech offices that analyze and
manipulate you. Is it better for a book to be a spying device or ashes?

Books have always helped us undo problems we bring upon ourselves. Now we
must save ourselves by noticing the problems we are forcing upon books.

Beyond books, a "peace prize" is obviously associated with peace, but what
do we mean by peace?

Certainly peace must mean that violence and terror are not used to gain
power or influence, but beyond that, peace must also have a creative
character.

Most of us do not want to accept some sort of static or dull existence,
even if it is free of violence. We do not want to accept the peaceful
order that authoritarian or imposed solutions claim to offer, whether
digital or old fashioned. Nor should we expect that future generations
will accept our particular vision of a sustainable society forever, no
matter how smart we are or how good our intentions might be.

So peace is a puzzle. How can we be free and yet not veer into the freedom
to be nasty? How can peace be both capricious and sustainable?

The resolutions between freedom and stability that we have come to know
have tended to rely on bribery, on ever-increasing consumption, but that
doesn't appear to be a long-term option.

Maybe we could stabilize society with virtual rewards, or at least that's
an idea one hears around Silicon Valley quite often. Get people to reduce
their carbon footprints by wooing them with virtual trinkets within video
games. It might work at first, but there?s a phony and patronizing quality
to that approach.

I don?t believe we know everything we need to know yet about solutions to
the long term puzzle of peace. That might sound like a negative comment on
first hearing, but it is actually an overtly optimistic statement; I
believe we are learning more and more about peace as we go.

My darkest digital fear concerns what I call the "pack switch." This is a
thesis about a persistent aspect of human character that is opposed to
peace.

People are like wolves, according to this theory; we are members of a
species that can function either as individuals or in packs. There is a
switch inside us. We are prone to suddenly fall into pack thinking without
even realizing it.

If there is one thing that terrifies me about the Internet, this is it.
Here we have a medium which can elicit "flash mobs" and routinely creates
sudden "viral" popularities. So far, these effects have not been evil on
an epochal level, but what is there to prevent that? When generations grow
up largely organized and mediated by global corporate cyber-structures
like proprietary social networks, how can we know who will inherit control
of those designs?

Traditional definitions of "peace" are often only of peace within the pack
or clan, so clannishness might be the most pernicious of our sins. It
undermines us at our core.

Hive identity is almost universally perceived as a virtue. The Book of
Proverbs in the Old Testament lists a set of sins, including lying,
murder, pride, and so on, but also "sowing discord among brethren."
Similar injunctions exist in every culture, political system, or religion
I have studied. I do not bring this up to suggest an equivalency between
all cultures or creeds, but rather a common danger within us, in our
nature, that we all face and must learn to deflect. Becoming a loyal part
of a pack is confused with goodness again and again, even (especially!)
when the people fancy themselves to be rebels. It is always pack against
pack.

It is as true for those who identify with pop styles or a particular
approach to digital politics, as it can be for traditional ethnicity,
nationality, or religion. Within digital culture, one can be vilified for
not adhering strictly enough to the dogma of the "open" movement, for
instance.

Again and again, our crude "sins" like greed or pack-identity obsession
emerge rudely but stealthily from our carefully cultivated patterns of
perfect thinking, in fact just when we think we're close to technical
perfection.

The lovely idea of human rights is being confounded by gamesmanship during
our present algorithmic era. After generations of thinkers and activists
focused on human rights, what happened? Corporations became people, or so
said the Supreme Court in the United States! A human right is an absolute
benefit, so sneaky players will connive to calculate multiples of that
benefit for themselves and their pack-mates. What are we to do with our
idea of human rights in America? It's been inverted.

For another example, it is just when digital companies believe they are
doing the most good, optimizing the world, that they suddenly find
themselves operating massive spying and behavior modification empires.
Consider Facebook, which is the first large public company controlled by a
single individual, who is mortal. It governs much of the pattern of social
connection in the world today. Who might inherit this power? Is there not
a new kind of peril implicit in that quandary?

Of course this topic has special resonance in Germany. I would like to say
something profound about that angle, but honestly I don't fully understand
what happened. My mother was from Vienna, and many of her relatives were
lost to the evil and the shiny mega-violence of the Nazi regime. She
suffered horribly as a young girl, and almost perished as well. Were I not
so close to those events, were the impact more muted for me, I might be
more ready to pretend that I understand them more fully, as so many
scholars pretend to do.

In all honesty I still find it terribly hard to understand the Nazi era,
despite much reading. At the very least, the Nazis certainly proved that a
highly technical and modern sensibility is not an antidote to evil. In
that sense, the Nazi period heightens my concerns about whether the
Internet could serve as a superior platform for sudden mass pack/clan
violence.

I don't think outright repudiation of pack/clan identity is the best way
to avoid falling into the associated violence. People seem to need it.
Countries more often than not resist losing identity in larger
confederations. Very few people are ready to live as global citizens, free
of national association. There's something abstract and unreal about that
sort of attempt to perfect human character.

The best strategy might be for each individual to belong to enough varied
clans that it becomes too confusing to form coherent groups in opposition
to one another. Back in the digital beginning, decades ago, I held out
exactly this hope for digital networks. If each person could feel a sense
of clan membership in a confusing variety of "teams" in a more connected
world, maybe the situation would become a little too tangled for
traditional rivalries to escalate.

This is also why I worry about the way social networks have evolved to
corral people into groups to be well-targeted for what is called
advertising these days, but is really more like the micromanagement of the
most easily available options, through link placement.

I always feel the world becomes a slightly better place when I meet
someone who has ties to multiple sports teams and can't decide which one
to cheer at a game. Such a person is still enthused, but also confused:
suddenly an individual and not part of a pack. The switch is reset.

That kind of reset is interesting because it is a change in outlook
brought about by circumstances instead of the expression of ideas, and
that type of influence is exactly what happens with technology all the
time.

In the past an idea in a book might have been persuasive or seductive, or
might in some cases have been forced into belief and practice by the means
of a gun or a sword held near. Today, however, ideas are often implicit in
the computer code we use to run our lives.

Privacy is an example. Whatever one thinks about privacy, it?s the code
running in faraway cloud computers that determines what ideas about
privacy are actually in effect.

The concept of privacy is multifaceted, widely varying, and always hard to
define, and yet the code which creates or destroys privacy is tediously,
banally, concrete and pervasive. Privacy is hardly a personal decision
anymore, which means it's no longer even something that can be thought
about in the old sense. Only fanatical scholastics waste time on moot
questions.

The only useful thinking about privacy is that thinking which leads to
changes in the code. And yet we've mostly "outsourced" our politics to
remote corporations, so there is often no clear channel between thinking
and coding, meaning between thinking and social reality. Programmers have
created a culture in which they expect to outrun regulators.

We ask governments to tiptoe into the bizarre process of attempting to
regulate how cloud-based corporations channel our communications and
coordinated activities with one another. But then programmers will
sometimes contravene whatever the company has been forced to do, rendering
government action into an absurdity. We have seen this pattern with
copyright, for instance, but also in different ways with issues like the
right to be forgotten or in certain arenas of privacy, particularly for
women online. (Current architectures and practices favor anonymous
harassers over the women they harass.)

In each case, many of the most creative and sympathetic activists don't
want people to be able to contravene the "openness" of the network. But at
the same time many digital activists have a seemingly infinite tolerance
for gargantuan inequities in how people benefit from that all-seeing eye.

For instance, big data fuels the algorithmic concentration of wealth. It
happened first in music and finance, but is spreading to every other
theater of human activity. The algorithms don?t create sure bets, but they
do gradually force the larger society to take on the risks associated with
profits that benefit only the few. This in turn induces austerity. Since
austerity is coupled with a sharing economy (because certain kinds of
sharing provide the data that run the scheme), everyone but the tiny
minority on top of the computing clouds experiences a gradual loss of
security.

This, in my view, is the primary negative consequence that has occurred
thus far through network technology. To observe that is not to dismiss
another problem which has gained much more attention, because it is
sensational. A side effect of the rise of the algorithmic surveillance
economy is the compelled leakage of all that data into the computers of
national intelligence services. We know much more about this than we would
have because of Edward Snowden's revelations.

Curbing government surveillance is essential to the future of democracy,
but activists need to keep in mind that in the big picture what is going
on at the moment is a gradual weakening of governments in favor of the
businesses that gather the data in the first place, through the mechanisms
wealth disparity and austerity. That is only true for democracies, of
course; non-democratic regimes take control of their own clouds, as we
see, for instance, in China.

I do sometimes wonder if we've outsourced our democracies to the tech
companies simply in order to not have to face it all. We deflect our own
power and responsibility.

Here I feel compelled to foresee a potential misunderstanding. I am not
"anti-corporate." I like big corporations, and big tech corporations in
particular. My friends and I sold a startup to Google and I currently have
a research post in Microsoft's labs. We must not put each other through
purity tests, as if we were cloud algorithms classifying one another for
targeted ads.

The various institutions that people invent need not annihilate each
other, but can balance each other. We can learn to be "loyal opposition"
within all the institutions we might support or at least tolerate, whether
government, business, religion, or anything else. We don't always need to
destroy in order to create. We can and ought to live with a tangle of
allegiances. That is how to avoid the clan/hive switch.

Learning to think beyond opposition can yield clarity. For instance, I
disagree equally with those who favor a flat distribution of economic
benefits and those who prefer the winner-take-all outcomes that the high
tech economy has been yielding lately. The economy need not look like
either a tower overlooking a sea of foolish pretenders, or a salt flat
where everyone is forced to be the same by some controlling authority.

One can instead prefer a dominant middle block in an economy. An honest
measurement of anything in reality ought to yield a bell curve. If an
economy yields a bell curve of outcomes, not only is it honest, but it is
also stable and democratic, for then power is broadly distributed. The
focus of economic justice should not be to condemn rich people in
principle, but to condemn a basin in the middle of the distribution.
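The difference between a bell-curve economy and a winner-take-all one can be made concrete with a small numerical sketch. This is my own illustration, not anything from the speech, and every parameter (a Gaussian income distribution versus a Pareto one, the specific means and exponents) is an arbitrary assumption chosen only to show the contrast:

```python
import random

def top_share(outcomes, fraction=0.01):
    """Share of the total held by the top `fraction` of earners."""
    ranked = sorted(outcomes, reverse=True)
    top = ranked[:max(1, int(len(ranked) * fraction))]
    return sum(top) / sum(ranked)

random.seed(0)
n = 100_000

# Bell curve: outcomes clustered around a dominant middle.
bell = [max(0.0, random.gauss(50_000, 12_000)) for _ in range(n)]

# Winner-take-all: heavy-tailed (Pareto) outcomes; a few enormous wins.
winner = [random.paretovariate(1.2) * 10_000 for _ in range(n)]

print(f"top 1% share, bell curve:      {top_share(bell):.1%}")
print(f"top 1% share, winner-take-all: {top_share(winner):.1%}")
```

Under the bell curve the top one percent holds only a sliver of the total; under the heavy-tailed distribution the same one percent holds a large fraction of everything, which is the sense in which a bell-curve economy leaves power broadly distributed.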

The conflict between the Left and Right has been so acute for so long that
we don't even have an honest vocabulary to describe the honest mathematics
of the bell curve. We can't speak of a "middle class" because the term has
become so fraught. And yet that impossible-to-articulate middle is the
heart of moderation where we must seek peace.

As boring as it might seem to be at first, moderation is actually both the
most fascinating and promising path forward. We are constantly presented
with contrasts between old and new, and we are asked to choose. Should we
support old-fashioned taxis and their old-fashioned benefits for drivers
or new types of services like Uber that offer digital efficiencies?

These choices are false choices! The only ethical option is to demand a
synthesis of the best of pre-digital and digital designs.

One of the problems is that technologists are often trapped in old
supernatural fantasies that prevent us from being honest about our own
work. Once upon a time, scientists imagined coming up with the magic
formulas to make machines come alive and become self-sufficient. After
that, artificial intelligence algorithms would write the books, mine the
fuels, manufacture the gadgets, care for the sick and drive the trucks.
That would lead to a crisis of unemployment, perhaps, but society would
adjust, perhaps with a turn towards socialism or a basic income model.

But the plan never worked out. Instead, what looks like automation is
actually driven by big data. The biggest computers in the world gather
data from what real people, like authors, do, acting as the most
comprehensive spying services in history, and that data is rehashed to run
the machines.

It turns out that ?automation? still needs huge numbers of people! And yet
the fantasy of a machine-centric future requires that those real people be
rendered anonymous and forgotten. It is a trend that reduces the meaning
of authorship, but as a matter of course will also shrink the economy as a
whole, while enriching those who own the biggest spying computers.

In order to create the appearance of automatic language translations, for
instance, the works of real translators must be scanned by the millions
every single day (because of references to current events and the like.)
This is a typical arrangement.

It's usually the case that an appearance of automation is actually hiding
the disenfranchisement of the people behind the curtain who do the work,
which in turn contributes to austerity, which in turn rules out the
possibility of socialism or basic income as a way to compensate for all
the theatrically simulated unemployment. The whole cycle is a cosmic scale
example of smart people behaving stupidly.

"Disrupt" might be the most common word in digital business and culture.
We pretend it's hard to differentiate "creative destruction," a most
popular trope in modern business literature, from mere destruction.

It really isn?t that hard. Just look to see if people are losing security
and benefits even though what they do is still needed. Buggy whips are
obsolete, but the kinds of services being made more efficient by digital
services lately are usually just being reformatted, not rejected.

Whenever someone introduces a cloud service to make some aspect of life
easier, like access to music, rides, dates, loans, or anything else, it is
also now expected that innocent people will suffer, even if that is not
strictly, technically necessary. People will be cut off from social
protections.

If artists enjoyed copyright, that will be lost in the new system. If
workers were in a union, they will no longer be. If drivers had special
licenses and contracts, they no longer will. If citizens enjoyed privacy,
then they must adjust to the new order.

The familiar expectation that one must incinerate old rights, like
privacy, or security through the labor movement, in order to introduce new
technological efficiencies, is bizarre. Techie idealists often focus on
how the old protections were imperfect, unfair, and corrupt (all of which
was often so), but we rarely admit to ourselves how the new situation
offers spectacularly inferior protections and astoundingly greater levels
of unfairness.

If you are a technology creator, please consider this: If you need to rely
on dignity destruction as a crutch in order to demonstrate a new
efficiency through digital networking, it only means you're not good at
the technology. You are cheating. Really efficient technological designs
should improve both service and dignity for people at the same time.

We humans are geniuses at confusing ourselves by using computers. The most
important example is the way computation can make statistics seem to be an
adequate description of reality. This might sound like an obscure
technical problem, but it is actually at the core of our era's economic
and social challenges.

There is an exponentially increasing number of observations about how
gigantic "big data" is these days; about the multitudes of sensors hiding
in our environment, or how vast the cloud computing facilities have
become, in their obscure locations, desperate to throw off their excess
heat into wild rivers.

What is done with all that data? Statistical algorithms analyze it!

If you would, please raise the tip of your finger and move it slowly
through the air. Given how many cameras there are in our present-day
world, some camera is probably looking at it, and some algorithm somewhere
is probably automatically predicting where it will be in another moment.
The algorithm might have been set in place by a government intelligence
operation, a bank, a criminal gang, a Silicon Valley company, who knows?
It is ever-cheaper to do it and everyone who can, does.

That algorithm will probably be correct for at least a little while. This
is true simply because statistics is a valid branch of mathematics.

But beyond that, the particular reality we find ourselves in is friendly
to statistics. This is a subtle aspect of our reality. Our world, at least
at the level in which humans function, has an airy, spacious quality. The
nature of our environment is that most things have enough room to continue
on in what they were just doing. For contrast, Newton?s laws (i.e. a thing
in motion will continue) do not apply in a common tile puzzle, because
every move is so constrained and tricky in such a puzzle.

But despite the apparent airiness of everyday events, our world is still
fundamentally like a tile puzzle. It is a world of structure, governed by
conservation and exclusion principles. What that means is simple: My
finger will probably keep on moving as it was, but not forever, because it
will reach the limit of how far my arm can extend, or it will run into a
wall or some other obstacle.

This is the peculiar, flavorful nature of our world: commonplace
statistical predictability, but only for limited stretches of time, and we
can't predict those limits universally. So cloud-based statistics often
work at first, but then fail.
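The moving-finger argument above can be sketched in a few lines of code. This is my own toy model, not anything from the speech: a naive statistical predictor assumes the finger keeps doing what it was just doing (constant velocity), and it is exactly right until a structural limit (a wall, placed arbitrarily at x = 10) intervenes:

```python
def true_position(t, wall=10.0, speed=1.0):
    """The finger moves at constant speed but cannot pass the wall."""
    return min(t * speed, wall)

def predicted_position(t_now, t_future, speed=1.0):
    """Naive extrapolation: assume things continue as they were."""
    return true_position(t_now) + (t_future - t_now) * speed

# Before the wall, the statistical prediction is exact.
assert predicted_position(3, 4) == true_position(4)

# Once the wall intervenes, the same rule is badly wrong.
err = predicted_position(9, 15) - true_position(15)
print(f"prediction error once the wall intervenes: {err}")  # 5.0
```

The predictor never modeled the wall, so nothing in its past success warns it of the coming failure, which is the shape of the "shocked, shocked, shocked" pattern described below.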

We think we can use computers to see into the future, but then suddenly
our schemes fail. (Good scientists who work with theory, beyond
statistics, understand this problem and also model the wall that
interrupts the progress of your finger. That level of effort is rarely
expended in cloud business, however, since billions are still made without
it.)

This is the universal and seductive pattern of intellectual failure in our
times. Why are we so easily seduced? It is hard to describe how intense
the seductive quality is to someone who hasn't experienced it.

If you?re a financier running cloud statistics algorithms, it feels at
first like you have the magic touch of King Midas. You just sit back and
your fortune accumulates. But then something happens. You might run out of
people to offer stupid loans to, or your competitors start using similar
algorithms, or something.

Some structural limit interrupts your amazing run of perfect luck, and you
are always shocked, shocked, shocked, even if it has happened before,
because the seductive power of those early phases is irresistible. (A
baseball team where I live in California was celebrated in the book and
movie "Moneyball" for using statistics to become winners, and yet now they
are losing. This is utterly typical.)

There is also an intense power-trip involved. You can not only predict,
but you can force patterns into the ways users express themselves, and how
they act.

It is common these days for a digital company to woo some users into a
service that provides a new efficiency through algorithms and cloud
connectivity. This might be a way of distributing books to tablets, a way
of ordering rides in cars or finding places to sleep while travelling, a
way of keeping track of family members and friends, of finding partners
for sex and romance, or a way of finding loans.

Whatever it is, a phenomenon called "network effect" soon takes hold, and
after that, instead of a world of choices, people are for the most part
compelled to use whichever service has outrun the others. A new kind of
monopoly comes into being, often in the form of a California-based
company.
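The "network effect" mechanism can be illustrated with a toy simulation of my own devising (nothing here is from the speech; the four rival services and the arrival process are arbitrary assumptions). Each new user joins a service with probability proportional to how many users it already has, so an early random lead tends to compound:

```python
import random

random.seed(42)
services = [1, 1, 1, 1]          # four rivals, one seed user each

for _ in range(100_000):         # new users arrive one at a time
    total = sum(services)
    r = random.uniform(0, total)
    # Pick a service with probability proportional to its user count.
    for i, users in enumerate(services):
        r -= users
        if r <= 0:
            services[i] += 1     # the rich get richer
            break

leader_share = max(services) / sum(services)
print(f"user counts: {services}")
print(f"leading service's share: {leader_share:.0%}")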

The users will typically feel like they are getting tremendous bargains.
Free music! They seem to be unable to draw a connection to their own
lessening prospects. Instead they are grateful. If you tell them, through
the design of algorithms, how to date, or how to present themselves to
their families, they will comply.

Whoever runs one of these operations, which I call Siren Servers, can set
the norms for society, such as privacy. It is like being king.

That is the raw economic snapshot that characterizes so many aspects of
our society in recent times. It was the story of music early on. Soon it
will be the story of manufacturing (because of 3D printers and factory
automation), health care (because of robotic nurses), and every other
segment of the economy.

And of course it has overtaken the very idea of elections in the United
States, where computational gerrymandering and targeted advertising have
made elections into contests between big computers instead of contests
between candidates. (Please don?t let that happen in Europe.)

It works over and over and yet it also fails over and over in another
sense. Automated trading crashes spectacularly, and then starts up again.
Recorded music crashes, but then the same rulebook is applied to books.
Billions are accumulated around the biggest computers with each cycle. The
selfish illusion of infallibility appears over and over again (the serial
trickster of our era) and makes our smartest and kindest technical minds
become part of the problem instead of part of the solution. We make
billions just before we slam into the wall.

If this pattern is inevitable, then politics don't matter much. Politics,
in that case, could at most delay a predetermined unraveling.

But what if politics can actually matter? In that case, it is sad that
current digital politics is so often self-defeating. The mainstream of
digital politics, which is still perceived as young and "radical,"
continues to plow forward with a set of ideas about openness from over
three decades ago, even though the particular formulation has clearly
backfired.

As my friends and I watched the so-called Twitter or Facebook revolution
unfold in Tahrir Square from the comfort of Silicon Valley, I remember
saying, "Twitter will not provide jobs for those brave, bright young
Egyptians, so this movement can't succeed." Freedom isolated from
economics (in the broad sense of the word) is meaningless.

It is hard to speak of this, because one must immediately anticipate so
many objections. One can be convinced, for instance, that traditional
social constructions like "jobs" or "money" can and should be made
obsolete through digital networks, but: Any replacement inventions would
need to offer some of the same benefits, which young people often prefer
to not think about. But one cannot enter into only part of the circle of
life.

This is a tricky topic and deserves a careful explanation. The "sharing
economy" offers only the real-time benefits of informal economies that
were previously only found in the developing world, particularly in slums.
Now we've imported them into the developed world, and young people love
them, because the emotion of sharing is so lovely.

But people can't stay young forever. Sometimes people get sick, or need to
care for children, partners, or parents. We can't "sing for our supper"
for every meal. Because of this reality, the sharing economy has to be
understood ultimately as a deceptive ritual of death denial. Biological
realism is the core reason formal economies came into being in the first
place. If we both undermine union protections, through the sharing
economy, and trap governments in long-term patterns of austerity and debt
crisis, through that same economy, who will take care of the needy?

Sometimes I wonder if younger people in the developed world, facing the
inevitable onslaught of aging demographics, are subconsciously using the
shift to digital technology as a way to avoid being crushed by obligations
to an excess of elders. Most parts of the developed world are facing this
type of inverted demographic cataclysm in the coming decades. Maybe it's
proper for young people to seek shelter, but if so, the problem is that
they too will become old and needy someday, for that is the human
condition.

Within the tiny elite of billionaires who run the cloud computers, there
is a loud, confident belief that technology will make them immortal.
Google has funded a large organization to "solve death," for instance.
There are many other examples.

I know many of the principal figures in the anti-death, or post-human
movement, which sits at the core of Silicon Valley culture, and I view
most of them as living in a dream world divorced from rational science.
(There are also some fine scientists who simply accept the funding;
funding for science these days often comes from oddly motivated sources,
so I cannot fault them.)

The arithmetic is clear. If immortality technology, or at least dramatic
life extension technology, starts to work, it would either have to be
restricted to the tiniest elite, or else we would have to stop adding
children to the world and enter into an infinitely stale gerontocracy. I
point this out only to reinforce that when it comes to digital technology,
what seems radical, what at first looks like creative destruction, is
often actually hyper-conservative, stale, and boring once it has a chance
to play out.
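The arithmetic really is simple enough to do on the back of an envelope. The figures below are rough, circa-2014 approximations assumed only for illustration: if deaths stopped while births continued at the current rate, population would grow by the full birth count every year.

```python
# Back-of-envelope for the gerontocracy point. Both numbers are rough,
# circa-2014 assumptions used only to illustrate the order of magnitude.

population = 7.2e9        # approximate world population, 2014
births_per_year = 140e6   # approximate annual births worldwide

# With death "solved" and births unchanged, population grows by the
# full birth count each year; time for it to double:
years_to_double = population / births_per_year
print(round(years_to_double, 1))  # roughly half a century
```

Doubling the planet's population every fifty or so years is not sustainable, so either births stop (the stale gerontocracy) or access to the technology is restricted to a tiny elite, exactly the two branches named above.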

Another popular formulation would have our brains "uploaded" into virtual
reality so that we could live forever in software form. This despite the
fact that we don't know how brains work. We don't yet know how ideas are
represented in neurons. We spend billions of dollars on simulating brains
even though we don't really know the basic principles yet. We are treating
hopes and beliefs as if they were established science. We are treating
computers as religious objects.

We need to consider whether fantasies of machine grace are worth
maintaining. In resisting the fantasies of artificial intelligence, we can
see a new formulation of an old idea that has taken many forms in the
past: "Humanism."

The new humanism is a belief in people, as before, but specifically in the
form of a rejection of artificial intelligence. This doesn't mean
rejecting any particular algorithm or robotic mechanism. Every single
purported artificially intelligent algorithm can be equally
well-understood as a non-autonomous function that people can use as a
tool.

The rejection is not based on the irrelevant argument usually put forward
about what computers can or cannot do, but instead on how people are
always needed to perceive the computer in order for it to be real. Yes, an
algorithm fed cloud big data gathered from millions of people can perform
a task. But you can see the shallowness of computers on a practical level,
because of the dependency on a hidden crowd of anonymous people, or on a
deeper epistemological one: Without people, computers are just space
heaters making patterns.

One need not specify whether a divine element is present in a person or
not, nor precisely whether certain "edge cases" like bonobos should be
considered human beings. Nor must one make absolute judgments about the
ultimate nature of people or computers. One must, however, treat computers
as less-than-human.

To talk about specific ways out of our stupid digital economics pattern is
to enter into a difficult argument. I have mostly explored and advocated
one approach, which is to revive the original concept for digital media
architecture, dating back to Ted Nelson's work in the 1960s.

Ted suggested a universal micropayment scheme for digital contributions
from people. Once again, this was not a radical reaction, but the
historical starting point for all digital media investigations.

I have looked into extending Ted's idea in order to support the way
people's lives are presently read into big data schemes. For instance, as
I pointed out earlier, free language translation services actually depend
on scanning the work of millions of real human translators every day. Why
not pay those real people? It would be fair and truthful.
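The payout mechanism this implies can be sketched in a few lines. To be clear about assumptions: this is not Ted Nelson's actual design (his Xanadu work involved transclusion and linked provenance), and the class name, contributor names, and three-cent price are all hypothetical; only the proportional-payout principle comes from the argument above.

```python
# Minimal hypothetical sketch of the micropayment idea: a service logs
# which people's contributions fed each machine-produced result, then
# splits a small payment among them in proportion to use. Names and the
# price are invented for illustration.

from collections import Counter

class ProvenanceLedger:
    def __init__(self):
        self.use_counts = Counter()  # contributor -> times their work was used

    def record_use(self, contributors):
        """Log the human contributors behind one result, e.g. the
        translators whose past work a translation engine drew on."""
        self.use_counts.update(contributors)

    def payout(self, total_payment):
        """Split a payment among contributors in proportion to use."""
        total_uses = sum(self.use_counts.values())
        return {person: total_payment * n / total_uses
                for person, n in self.use_counts.items()}

ledger = ProvenanceLedger()
ledger.record_use(["translator_a", "translator_b"])
ledger.record_use(["translator_a"])
payments = ledger.payout(0.03)  # a 3-cent micropayment for one translation
print(payments)  # translator_a earns twice translator_b's share
```

The point of the sketch is only that the accounting is not mysterious: once the hidden crowd behind a "free" service is recorded rather than discarded, paying them proportionally is straightforward bookkeeping.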

If we just admitted that people are still needed in order for big data to
exist, and if we were willing to lessen our fantasies of artificial
intelligence, then we might enjoy a new economic pattern in which the bell
curve would begin to appear in digital economic outcomes, instead of
winner-take-all results. That might result in sustainable societies that
don't fall prey to austerity, no matter how good or seemingly "automated"
technology gets.

This idea is controversial, to say the least, and I can't argue it fully
in this short statement. It is only an idea to be tested, at any rate, and
might very well turn out to be untenable.

But the key point, the essential position from which we must not
compromise, is to recognize that there is a space of alternatives. The
pattern we see today is not the only possible pattern, and is not
inevitable.

Inevitability is an illusion that leeches freedom away.

The more advanced technology gets, the harder it will be to distinguish
between algorithms and corporations. Which is Google today, or Facebook?
The distinction is already esoteric in those cases and soon will be for
many more corporations. If algorithms can be people, then so will be
corporations, as they already are in the USA. What I declare here today is
that neither an algorithm nor a corporation should be a person!

The new humanism asserts that it is ok to believe that people are special,
in the sense that people are something more than machines or algorithms.
This proposition can lead to crude mocking arguments in tech circles, and
really there's no absolute way to prove it's correct.

We believe in ourselves and each other only on faith. It is a more
pragmatic faith than the traditional belief in God. It leads to a fairer
and more sustainable economy, and better, more accountable technology
designs, for instance. (Believing in people is compatible with any belief
or lack of belief in God.)

To some techies, a belief in the specialness of people can sound
sentimental or religious, and they hate that. But without believing in
human specialness, how can a humanistic society be sought?

May I suggest that technologists at least try to pretend to believe in
human specialness to see how it feels?

*

To conclude, I must dedicate this talk to my father, who passed away as I
was writing it.

I was overcome with grief. I am an only child, and now no parent is left.
All the suffering my parents endured. My father's family suffered so many
deaths in pogroms. One of his aunts was mute her whole life, having
survived as a girl by staying absolutely silent, hiding under a bed behind
her older sister, who was killed by a sword. My mother's family, from
Vienna, lost so many to the concentration camps. After all that, just
little me.

And yet I was soon overcome with an even stronger feeling of gratitude. My
father lived into his late nineties, and got to know my daughter. They
knew and loved each other. They made each other happy.

Death and loss are inevitable, whatever my digital supremacist friends
with their immortality laboratories think, even as they proclaim their
love for creative destruction. However much we are pierced with suffering
over it, in the end death and loss are boring because they are inevitable.

It is the miracles we build, the friendships, the families, the meaning,
that are astonishing, interesting, blazingly amazing.

Love creation.



See also the Laudatory Speech by Martin Schulz:
http://www.friedenspreis-des-deutschen-buchhandels.de/819476/





#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime {AT} kein.org