www.nettime.org
Nettime mailing list archives

nettime-nl: Governance in Cyberspace
Felipe Rodriquez on Wed, 5 Nov 1997 03:43:57 +0100 (MET)


Hello,

Governance in Cyberspace (or what the EU calls the Information Society)
does not fit traditional power structures. These structures, which we
usually refer to as authorities, are in essence almost always regionally
bound; their authority and influence stop at the region's or country's
border. One of the unique, and unchangeable, properties of Cyberspace is
that it crosses those borders, and thus in many ways rejects the concept
of local authority.

Information in Cyberspace is distributed, and thus power is distributed.
There is no single entity that can change the way things work on the
Internet. But if there's a consensus about something, change happens. 

An example: the technical foundation of the Internet, the Request for
Comments (RFC) series, which forms the underlying structure of protocol
standards, has been created mostly through a consensus process. The people
who created these standards were mostly volunteers and scientists, although
initially some were commissioned by the US Advanced Research Projects
Agency (ARPA). Today these RFCs are basically created by Internet
Engineering Task Force working groups, which are made up of people
interested in the topic; anyone can join an IETF working group and
participate in the creation of a new Internet standard. There is some loose
coordination of these developments by the Internet Architecture Board,
connected to the Internet Society, but no one can pretend to control the
course of events; in other words, no one can pretend to govern the Internet.
It is governed by broad consensus.

Restriction of information in Cyberspace has proved to be impossible, and
assuming that power and information are interconnected, we could say that
restricting power in Cyberspace to a government is equally impossible.

There are numerous examples that show how information flows in unrestricted
ways. The German government wanted to prevent the circulation of a certain
document, called Radikal. The document was published on a web site in the
Netherlands, where it was not considered illegal. The German Prosecutor
General ordered German providers to block this page. Immediately
afterwards the banned document was copied to 50 places on the Internet, all
over the world. A year after the German attempt to censor the document,
Germans can still easily access it on the Internet, unrestricted by
censorship. This example demonstrates the limits imposed on the power of
the German government: it could not prevent its citizens from accessing a
document it did not want them to read. The power in this case was
distributed, at the expense of the power of government; it was given to the
German citizens, who can now decide for themselves whether they want to
access and read this document or not.
 
Another example: the government in Singapore is very anxious to control the
information its citizens access on the Internet. To obtain a level of
control it has installed several gateways that are under the influence of
the government. There is also a number of mandatory proxies that filter
certain addresses. People inside Singapore have demonstrated that, despite
the attempt to control access to certain information, this information is
easy to obtain; it proves impossible for the Singapore government to
control all the content on the Internet. There are now an estimated 1
billion pages on the World Wide Web. It is impossible to know the content
of all these pages and selectively filter them at the national proxy. Even
if this level of control were possible for Singapore, there is a famous
phrase used by many on the Net: the Internet routes around censorship. If
Singapore accomplished a 'total-control' system, it would still be possible
to route around the control mechanisms by using second-level proxy servers
outside Singapore, or so-called anonymizers, not to mention remailers and
other technology. The Singapore example shows a government that tries to
maintain power in the traditional way, by restricting information. And
reality shows that it does not work; only the illusion of control is
maintained.
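The weakness described above can be sketched in a few lines. A national filter that blocks by destination address only ever sees the address the request is sent to; once the request is tunneled through a second-level proxy outside the country, the filter sees the proxy's address instead of the banned site's. This is a minimal illustrative sketch, not a description of Singapore's actual system; the hostnames and blocklist are hypothetical.

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical national blocklist of forbidden destination hosts.
BLOCKED_HOSTS = {"radikal.example.org"}

def host_seen_by_filter(url: str, proxy_host: Optional[str] = None) -> str:
    """Return the hostname the national filter observes for a request.

    Without a proxy the filter sees the true destination; routed through
    an outside proxy, it only ever sees the proxy's own address.
    """
    return proxy_host if proxy_host else urlparse(url).hostname

def is_blocked(url: str, proxy_host: Optional[str] = None) -> bool:
    # The filter can only act on the address it actually sees.
    return host_seen_by_filter(url, proxy_host) in BLOCKED_HOSTS

# Direct access is filtered; the same request via an external proxy is not.
print(is_blocked("http://radikal.example.org/doc.html"))                       # True
print(is_blocked("http://radikal.example.org/doc.html", "proxy.example.net"))  # False
```

The point of the sketch is that the blocklist check never fires once the destination is hidden behind an intermediary the filter has no knowledge of.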

The last example of the free flow of information concerns PGP. PGP stands
for Pretty Good Privacy; it is encryption software one can use to protect
email and other documents. The encryption used in this program is so
strong that it cannot be exported from the USA to other countries (export
of strong cryptography is illegal in the US). But since the first version
of PGP, version 1.0, it has been available all over the world. People
simply downloaded the software from the US and distributed it around the
world. Version 2.0 was created in Europe to avoid US export restrictions;
this version has also been distributed around the world. Version 5.0 was
created in the US by the company PGP Inc. Within days of the publication
of this software it was available in Europe, despite US export
restrictions. Later on people scanned in the source code to create a
'legal' copy of the program (export of printed source code is not illegal
in the US; it is considered free speech). Just a few days ago version 5.5
of PGP was published in the US, and it is already available outside the US.
PGP has demonstrated that export restrictions on data and software are
impossible to maintain.

Governments are easily tempted to create systems of control on the
Internet, or to try to enforce policy there. These attempts take various
shapes and forms, such as self-regulation by the market and
content-labelling initiatives.

In the EU a lot of effort is invested in self-regulation by the market. The
market, and the companies in the marketplace, are easier to control than
the chaotic mess of individuals, because economic instruments can be used
to push the players in the market in a certain direction. Through
self-regulation the authorities shift part of their power to the
marketplace in an attempt to maintain order and stability in Cyberspace.

When self-regulation is watched closely, one sees that it comes down to
companies governing their customers. Self-regulation in practice means that
an Internet provider has to prevent the questionable expressions of its
customers. Self-regulation could also be called the privatization of
authority. The concept of self-regulation of the Internet by the
marketplace shows the decline of state authority, and may, in an extreme
situation, lead to its downfall.

Self-regulation is dangerous; the market will always try to avoid obvious
risks. A customer who makes dubious statements is a risk; a company
prefers to 'self-regulate' and suppress the expression of this customer
rather than risk legal proceedings. The company may be held liable in some
way, and liability costs precious money.

Self-regulation can easily lead to the restriction of established rights,
such as the right to freedom of expression. This may sound extreme, but it
is not; reality proves the argument. When the German providers were asked
to prevent the publication of one document originating in the Netherlands,
they did not have the technical means to single out this document. So they
blocked an entire web server, with more than 10,000 other resources
online. They did this because they were afraid of prosecution, which would
harm their image and cost a lot of money in legal fees. Thus they preferred
to 'self-regulate' and prevent the document, and 10,000 other resources,
from being accessed by German citizens. Not only did this act of
'self-regulation' infringe the right of free expression, it also obstructed
the free flow of goods and information within the European Union. Similar
examples exist all over the world; providers prefer to disconnect their
customers rather than allow them their right of free expression. Freedom of
expression is in many cases regarded as a risk instead of a universal
right.

Another attempt to establish more control on the Internet is the technology
of labelling and filtering. Content-labelling and filtering techniques are
promoted as a way to protect children and to achieve 'downstream' filtering
of content (filtering by the user). Many question whether such technology
would achieve a safer Internet for children; the Internet is basically an
adult zone, and very difficult to turn into a safe haven for children. But
if labelling and filtering technology is implemented, it could easily be
used for 'upstream' filtering: by the provider, the government or other
organizations. A country like Singapore would of course consider such
technology a gift from heaven, because it allows better control over what
information the citizen accesses. It re-establishes the traditional way
of government control, restriction of information, and is thus an attempt
to keep things the way they were in the industrial era.

Earlier in this message I stated that on the Internet things are achieved
by consensus, not by authority; thus content-labelling and filtering
technology will not be implemented. Only a fraction of people want this
technology; most people do not. Most likely labelling will not happen
because the consensus is absent.
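The dual-use point about labelling is mechanical: once content carries machine-readable ratings, the very same check a parent's browser runs ('downstream') can be run at a national proxy ('upstream'); only the enforcement point changes. A minimal sketch, with hypothetical URLs and rating categories loosely inspired by PICS-style labels:

```python
# Hypothetical rating labels attached to pages (category -> severity level).
PAGE_LABELS = {
    "http://news.example.com/":  {"violence": 0, "nudity": 0},
    "http://adult.example.com/": {"violence": 1, "nudity": 3},
}

def passes(url: str, policy: dict) -> bool:
    """Return True if a page's labels stay within the policy's limits.

    The same function serves a home browser filtering 'downstream'
    and a mandatory national proxy filtering 'upstream'; the labels
    do not care who is doing the checking.
    """
    labels = PAGE_LABELS.get(url, {})
    return all(labels.get(category, 0) <= limit
               for category, limit in policy.items())

# A parent's browser policy...
home_policy = {"nudity": 0}
# ...is technically identical to a policy enforced for a whole country.
allowed = [url for url in PAGE_LABELS if passes(url, home_policy)]
print(allowed)  # only the news page survives the filter
```

The design observation is that nothing in the label format binds it to voluntary, user-side use; that is why the technology is attractive to governments that want restriction of information.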


Traditional structures of authority increasingly lose their value in a
worldwide networked environment; no single regional authority can establish
total control in Cyberspace. Control is distributed to its citizens. In
many ways this is beneficial for the citizen; in some ways it may not be.
But at this moment it is not clear where problems may arise, and where
action should be taken to avoid them. We are in a transition phase, in
which only a few things are clear. Only time can tell where this
development will bring us, but it will profoundly change the way governance
works.


	Felipe Rodriquez


