geert lovink on Tue, 28 Aug 2001 14:51:01 +0200 (CEST)



[Nettime-bold] Roger Clarke: Paradise Gained, Paradise Re-lost


(posted with permission of the author on nettime. if you want the
reference list, please go to the URL of the article. /geert)

From: "Roger Clarke" <Roger.Clarke@xamax.com.au>
Sent: Tuesday, August 28, 2001 9:58 PM

Paradise Gained, Paradise Re-lost:  How the Internet is being
Changed from a Means of Liberation to a Tool of Authoritarianism

Roger Clarke

Published Version of 25 August 2001

Prepared for submission to a Mots Pluriels special issue on 'The Net:
New Apprentices and Old Masters'

©  Xamax Consultancy Pty Ltd, 2001

This document is at http://www.anu.edu.au/people/Roger.Clarke/II/PGPR01.html

The last 50 years have seen very substantial application of the
technologies of personal and mass surveillance.  The next two decades
may see technologies of identification, and of location and tracking,
destroy individual freedoms.
Running counter to those tendencies are new forms of networking that
have been enabled by the Internet.  These have created the
possibility of enhanced freedom and power for individuals and social
groups.
Government authorities and corporations, however, are implementing a
substantial counter-reformation.  This is designed to overcome the
potential for net-based freedoms and to ensure the maintenance and
enhancement of social control.  This paper investigates the
dimensions of the current struggle.

Contents

1. Introduction
2. The Liberating Potentials of New Network Forms
 2.1 Early Computing and Communications Technologies
 2.2 The Internet of the Mid-1990s
 2.3 The Internet's Architecture
 2.4 Concurrent Developments in Liberating Technologies
 2.5 The Shared Hallucination That Is Cyberspace
 2.6 The Digital Era's Promise of Freedoms
3. The Denial of Freedoms Through the Re-Imposition of Social Control
 3.1 Shortpayments on the Promises
 3.2 Downsides of the Digital Era
 3.3 Challenges to Governments
 3.4 Challenges to Powerful Corporations
 3.5 Concurrent Developments in the Technologies of Surveillance
 3.6 Measures to Defeat Internet-Induced Freedoms
4. The Scope for Anti-Authoritarian Countermeasures
5. Conclusions and Prognosis

1. Introduction

The advent of the open, public Internet during the 1990s was greeted
with unbridled enthusiasm by a great many people.  It was perceived
to embody enormous potential to enhance the social, the political and
the economic dimensions of people's lives.

But freedoms are utilised for many different purposes.  Some are
largely good;  some are largely bad;  and most can be argued to be
either.  The Internet quickly came to be used for old crimes and old
misdemeanours, for new crimes and new misdemeanours, and for
communications whose content was not to the taste of society's
official and unofficial moral guardians.  From an untrammelled 'idea
in good standing', the Internet came to be seen as 'a mixed blessing'.

Meanwhile, giants were awakening from their slumber.  Any major
change has losers as well as winners, and an array of powerful
organisations came to the realisation that the losers could well
include them.  Considerable efforts have been invested during the
last few years to rein in the Internet's liberating potentials.

The purpose of this paper is to document the rise and imminent fall
of the freedom-enhancing Internet, and provide access to resources
that expand upon this vital issue.

The first section of the paper identifies the ways in which the new
network forms create possibilities for enhancement of freedoms.  It
provides an outline of the arrangements by which computers are
linked.  Although this may seem a narrow and technical topic, it is
actually enormously important to an understanding of the Internet's
impacts on power relations within social and economic combines.
Other aspects of 'the digital revolution' are highlighted.  This
provides the foundations for a brief discussion of the experiences
that people have when they use the net.  The section concludes by
identifying the many different kinds of freedoms that the Internet
appeared in the mid-to-late 1990s to offer.

The second section of the paper explores the counter-reformation that
has been stimulated by the exercise of those freedoms.  Powerful
institutions perceive their interests to be harmed by a wide range of
practices and potentials.  There has been a vast increase in the
number and the power of surveillance technologies during the last
half-century.  The paper examines ways in which those institutions
are conducting multi-pronged counter-attacks against many aspects of
the Internet and its use.

Finally, the paper considers ways in which these authoritarian
tendencies might be resisted, and concludes with a
less-than-confident view of the survival prospects of the freedoms
that we have come to enjoy.

2. The Liberating Potentials of New Network Forms

The first part of the paper reviews the nature of the Internet, the
services it supports, and the virtual communities and societies that
use it, and identifies the freedoms it was perceived to be ushering
in.

2.1 Early Computing and Communications Technologies

Computers were invented around 1940, initially intended to perform
rapid computation.  Their numerical processing capabilities were
augmented with data storage, and they have been applied to the
processing of administrative data since 1952.  Initially, computers
were independent of one another.  Data was captured into
machine-readable form and fed into the machine.  Processed data was
output in both machine-readable form (for re-use in subsequent
operations), and in human-readable form.

The impact of standalone computing was to reinforce centralisation,
hierarchy, authority, and supplier power.  This was an excellent fit
to the Cold War era.  Not only communist but also 'free world'
governments were concerned about national security and control over
information.  Computing was intrinsically authoritarian and
freedom-threatening (Thompson 1970, Miller 1972, Rule 1974, Rodota
1976, Roszak 1986, Clarke 1988, Clarke 1994b).

From the mid-1960s onwards, electronic communications converged with
computing.  The initial applications sustained the centralist model,
because they were shaped like a star, with a powerful computer at the
hub and mere 'dumb terminals' at the extremities.

The progressive miniaturisation of processors, however, combined with
mass production efficiencies, resulted in small devices becoming
available which had increasingly significant processing capabilities.
It is even arguable that the advent of the personal computer (PC)
made the end of the Soviet Union inevitable.  (In order to continue
competing with the West, the Soviets had to harness their people's
creativity and effort, which meant that PCs had to proliferate;  but
PCs, much more so than photocopiers before them, were samizdat
presses.  The truth would inevitably out, and the lies on which the
regime was built would bring it down.  Other authoritarian States
such as Cuba and Vietnam have not felt the same need to compete, and
hence have not succumbed.  China has only recently eased its
opposition to capital-driven economic development, and is already
feeling the pressure).

Through the 1980s, connections were achieved among the proliferating
PCs.  These linkages were used primarily by technology-enthusiasts
and social activists, and depended on clumsy one-to-one connections
over voice-grade telephone lines.  During the course of a mere
decade, the links were upgraded to more sophisticated technologies
and topologies.  Relatively small numbers of computers in fairly close
proximity to one another were linked using local area networks
(LANs).  Connections among greater numbers of computers over longer
distances were supported by wide-area networks (WANs).
Centralisation gave way to dispersion, and hierarchy and authority
were challenged.  Supplier power migrated from the manufacturers of
large computers (in particular, IBM) to the providers of software for
small computers (primarily Microsoft).

2.2 The Internet of the Mid-1990s

Means were then established for interconnecting these proliferating
networks, giving rise to inter-networking, and hence the Internet.  A
network comprises nodes (computers) and arcs (connections between
them).  A network is fragile if individual nodes are dependent on
only a very few arcs or on a very few other nodes.  The most robust
networks involve a large amount of redundancy, that is to say that
they comprise many computers performing similar functions, connected
by many different paths.
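
The point about redundancy can be illustrated with a short sketch.
The following Python fragment (an illustrative toy topology, not any
real network) uses a breadth-first search to show that a network in
which every node can be reached by two independent paths remains
connected even when a node fails:

    from collections import deque

    def reachable(graph, start, failed=frozenset()):
        # Breadth-first search that skips failed nodes.
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for neighbour in graph[node]:
                if neighbour not in seen and neighbour not in failed:
                    seen.add(neighbour)
                    queue.append(neighbour)
        return seen

    # A simple ring: every node can be reached by two disjoint paths.
    net = {'A': ['B', 'D'], 'B': ['A', 'C'],
           'C': ['B', 'D'], 'D': ['C', 'A']}
    print(sorted(reachable(net, 'A')))                 # ['A', 'B', 'C', 'D']
    print(sorted(reachable(net, 'A', failed={'B'})))   # ['A', 'C', 'D']

With node B out of action, A still reaches C and D around the other
side of the ring; a star topology, by contrast, dies with its hub.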

The Internet features multiple connections among many nodes, and is
therefore resilient when (not if) individual elements fail.  It also
scales readily, that is to say that it is capable of expanding
rapidly without the serious growing pains that other topologies would
suffer.  The Internet emerged from research-related activities funded
by the U.S. (Defense) Advanced Research Projects Agency - (D)ARPA -
during the period c. 1969-1990.  During the Cold War era, military
strategists were concerned about the devastating impact of neutron
bomb explosions on electronic componentry.  As a result, robustness
and resilience (or, to use terms of that period, 'survivability' and
'fail-soft') were uppermost in the designers' minds.  For historical
information about the Internet, see the resources at ISOC (1999),
notably Zakon (2001), Leiner et al. (2000), and Brand (2001).

The Internet is fundamentally an infrastructure (roughly equivalent
to the electricity grid, water reticulation pipework, and the
networks of track, macadam and re-fuelling facilities that support
rail and road transport).  The purpose of the information
infrastructure is to carry messages.  A simplified explanation of the
process whereby messages are transmitted is provided in Clarke
(1998a).

Further levels of facilities use those messages in order to deliver
services.  Some services are by computers for other computers, some
are by computers but for people, and some are by people and for
people.  Key services that are available over the underlying
infrastructure include e-mail and the World Wide Web (which together
dominate Internet traffic volumes), file transfer and news (also
referred to as 'netnews' and 'Usenet news').  There are, however,
several score other services, some of which have great significance
to particular kinds of users, or as enablers of better-known services.

Until the early 1990s, the 'usage policies' that defined who could be
connected to the Internet, and what applications they could use it
for, precluded general use of the Internet by the public, or for
commercial purposes.  Around that time, telecommunications companies
took over from the U.S. government the provision of the 'backbone'
arcs that carry long-distance traffic, and the usage policies were
changed.  People perceived new opportunities to communicate with one
another, and business enterprises perceived new opportunities for
sales and profits.  Many more access-points were added, bandwidth was
increased, and new services emerged.  The explosion is generally
attributed to the release in early 1994 of the Mosaic web-browser,
the fore-runner of Netscape (Wolfe 1994b).

The explosion has continued to be exponential.  That has been a
result of a combination of more people joining in, and those who are
connected using it more.  The use of the Internet, like telephones,
fax-machines and mobile phones, is subject to what economists refer
to as 'the network effect':  as the number of users goes up, the
value to each user increases very rapidly because there are more
users with whom interactions can be conducted (Shapiro & Varian
1999).  Some elements of the growth are starting to flatten out (e.g.
more than half of the U.S. households that are likely to ever connect
are already connected), and all growth-rates within finite
populations have to eventually decline;  but new services in such
areas as the transmission of sound, image and video are likely to
result in the demand for bandwidth continuing to rise for quite some
time yet.
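
One common formalisation of the network effect is Metcalfe's law:
a network of n users offers n*(n-1)/2 possible pairwise connections,
so the value available to each user grows roughly in proportion to
the number of other users.  A trivial sketch makes the growth vivid:

    # n*(n-1)/2 possible pairwise connections among n users.
    def pairwise_connections(n):
        return n * (n - 1) // 2

    for n in (10, 100, 1000):
        print(n, pairwise_connections(n))
    # 10 -> 45;  100 -> 4,950;  1,000 -> 499,500.  A hundred-fold
    # increase in users yields a roughly ten-thousand-fold increase
    # in the interactions that can be conducted.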

2.3 The Internet's Architecture

The concept of 'architecture' encompasses the elements that make up
an infrastructure, the relationships between them, and the manner in
which they are created and maintained.  From an engineering
perspective, the Internet comprises the following:
* computers, some of which perform various message-passing
functions (including routers and gateways), and some of which provide
and use services of various kinds;
* communications links;
* a suite of 'protocols' which define the rules of engagement
between nodes;
* software that is hosted by the computers and that implements
the protocols, comprising:
- client software, that uses the protocols to issue requests;  and
- server software, that uses the protocols to respond to requests;
* processes to create and amend protocols;  and
* governance mechanisms, including constituencies and means
whereby constituents can participate and/or be represented.  Key
organisations include the Internet Architecture Board (IAB), the
Internet Engineering Task Force (IETF), and the Internet Corporation
for Assigned Names and Numbers (ICANN).

Descriptions of Internet architecture and of the process by which the
Internet is defined and governed are available on the IAB, IETF and
ICANN sites, and in Krol & Hoffman (1993) and Clarke, Dempsey et al.
(1998a).
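
To make the client/server distinction concrete, the sketch below is
a bare-bones HTTP exchange, in which client software uses the
protocol to issue a request and server software responds.  It
assumes nothing beyond a reachable public web server (example.com is
used purely for illustration):

    import socket

    # Client software issuing a protocol-conformant request: an
    # HTTP/1.0 GET sent over a TCP connection to port 80.
    with socket.create_connection(('example.com', 80), timeout=10) as s:
        s.sendall(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
        reply = b''
        while chunk := s.recv(4096):   # read until the server closes
            reply += chunk

    print(reply.split(b'\r\n', 1)[0].decode())   # e.g. 'HTTP/1.0 200 OK'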

Particular aspects of the Internet's architecture have been
particularly important in enabling it to deliver value to its users.
These include the following:
* distribution of functions.  Many different functions have to
be performed in order that messages can be delivered and services
provided.  Those functions are not inherently centralised, but rather
are able to be distributed across a large number of participating
nodes;
* egalitarianism.  Some computers perform special functions of
a technical nature;  but most functions are capable of being
performed by many different nodes in the network.  Robustness is
achieved through decentralisation of responsibilities;
* distributed directories.  In particular, the various
directories that need to exist in order to determine the possible
paths a message should take are not stored in a single location, but
are distributed across a large number of nodes (see the sketch after
this list);
* multiple connection paths.  Many nodes are not connected to
just a single other node, but to multiple other nodes.  This results
in a large number of possible paths that a message can use in order
to reach to its intended target, making the Internet resilient when
outages occur in nodes and arcs;
* collaborative control over the few centralised functions.
The functions that do need to have some degree of centralisation have
been subject to collaborative processes among peers, rather than
authoritarian control by one or more governments or corporations;  and
* collaborative design and standards-setting.  The technical
specifications that determine how the Internet is constructed and
operates have also been determined by collaboration within open fora,
and not by fiat of any governments or corporations.
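
The sketch promised under 'distributed directories' above: the
Domain Name System shows the idea in miniature, because resolving a
host name consults a directory held by many cooperating servers
rather than any single registry.  A few lines suffice (the host name
is illustrative):

    import socket

    host = 'www.example.com'   # illustrative; any public host name works
    # The resolver walks the distributed DNS hierarchy on our behalf.
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            host, 80, proto=socket.IPPROTO_TCP):
        print(family.name, sockaddr[0])   # one or more routable addresses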

2.4 Concurrent Developments in Liberating Technologies

The Internet, critical though it was, was only one of a number of
technological developments during the latter decades of the twentieth
century that tended to increase the availability of information.
These developments are usefully referred to by the generic term 'the
digital revolution', and included the following additional elements
(Clarke 1999g):
* the origination of digital objects.  New data objects can be
conveniently and inexpensively created in digital form (e.g. using
desktop publishing packages, PC-based graphic design tools, animation
techniques, and digital music generators);
* digitisation.  This enables the conversion of existing
materials into digital data objects (e.g. using scanners, optical
character recognition - OCR, digital cameras, and digital
audio-recording);
* digital replication.  This involves the near-costless copying
of data objects, demonstrated in the sketch at the end of this
section (e.g. using disk-to-disk copying, screen-scrapers, and
CD-burners so inexpensive that they sell as consumer appliances);
* digital transmission.  Networks enable the very rapid
transmission of data objects, at marginal costs that can be so low as
to be not worth measuring (e.g. using modem-to-modem transmission,
CD-ROMs in the mail, and Internet services such as e-mailed
attachments, FTP-download, and web-download);
* access tools.  Inexpensive and widespread access to data
objects is available, from a variety of devices (e.g. PCs, personal
digital assistants - PDAs, mobile phones / cell-phones, public
kiosks, and web-enabled TV) in a vast variety of locations (including
the workplace, the home, Internet cafes, games arcades and shopping
malls);
* digital manipulation.  Data-objects can be readily massaged
(e.g. using word processors, and sound and image editing tools);  and
* computer-based data analysis.  Data can be readily processed
for such purposes as the compilation of statistics and the detection
of interesting cases (e.g. data-matching, profiling, data-mining and
pattern-recognition software).

Underpinning these tools are common standards and protocols, some of
which are proprietary and closed, but many of which are published.
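
The 'digital replication' element deserves emphasis: a digital copy
is not a degraded imitation but a bit-for-bit duplicate.  The file
names and contents in the following sketch are invented:

    import hashlib, shutil

    def digest(path):
        with open(path, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Fabricate a 'data object', then replicate it at near-zero cost.
    with open('original.dat', 'wb') as f:
        f.write(b'any digital object whatsoever ' * 1000)
    shutil.copyfile('original.dat', 'copy.dat')

    # Matching digests: the copy is indistinguishable from the original.
    assert digest('original.dat') == digest('copy.dat')
    print('copy is bit-for-bit identical to the original')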

2.5 The Shared Hallucination That Is Cyberspace

The Internet and digital technologies are just infrastructure and
tools.  In order to understand their impacts and implications, it is
necessary to appreciate what people have done with them.  The
Internet has created new kinds of place or space, within which human
actors are disporting themselves.

A variety of metaphors have been used in attempts to explain the
Internet.  My brief summary of important examples is in Clarke
(1994d).  See also Stefik (1996).  Some of these metaphors focussed
on the physical infrastructure;  but the more interesting and useful
images were concerned with its use, and the state of mind people
adopted while they were using it.

One of the first people to document social behaviour in the new
contexts was Rheingold (1994).  In 'The Virtual Community', he
described the patterns that arose in several pre-Internet information
infrastructures (bulletin board systems - BBS, The Well and
newsgroups), and in Internet Relay Chat (IRC) and Multi-User Dungeons
and Dragons gaming (MUDs).  Netizens around the turn of the century
were more familiar with e-mail, e-lists (sometimes called listservs
or listprocs), web-chat and ICQ.  See also Valauskas (1996).

What these various experiences of using the Internet have in common
is that the participants indulge in a 'shared hallucination' that
there is a virtual place or space within which they are interacting.
The term most commonly used for this is 'cyberspace'.  Its use is
testament to the pre-cognition of the artist:  William Gibson, the
sci-fi author who coined the term in 1982, was not a user of the
(then very primitive) Internet, and indeed only during the 1990s did
he even start using a PC.  Gibson popularised the word in his
important 1984 novel 'Neuromancer', and the fictional virtual world
that he created has since been depicted in even more compelling form
in films such as 'The Matrix'.

The neologism 'cyberspace' denies the concept of a physical 'place',
by using the more abstract idea of 'space'.  It couples 'space' with
the mysterious and threatening prefix 'cyber'.  This is also a modern
coinage, created by Norbert Wiener in 1948 to refer
to layers of control mechanisms.  These were thought to hold the
prospect of inventing intelligence from cascades of electronic
devices functionally equivalent to the governor in Watt's
steam-engine.

Gibson's virtual world was full of foreboding;  but despite the dark
overtones (or in ignorance of them), people cheerfully play in
cyberspace.  Its uses put varying emphases on content and
connectivity.  A general perception has been that content (typified
by use of the Web) would overtake connectivity (typified by e-mail);
but some feel that a balance between the two will always be evident
(Odlyzko 2001).

Associated with cyberspace behaviour is an ethos.  The ethos evident
during the 1990s derived from the values of the pioneers, and it is
unclear to what extent the surge of newcomers will absorb the
old ethos or drown it.  Nonetheless, there are some expectations that
still appear to be commonly held among participants.  They are:
* inter-personal communications.  Many participants perceive
cyberspace primarily as a context to support interactions among
people.  Organisations are regarded as enablers, providers of resources,
and offerors of services, rather than as participants;
* internationalism.  Although the invention and initial
implementation of the Internet were a primarily U.S. activity,
participants mostly perceive it as a universal service.  As the
aphorism has it, 'there are no boundaries in cyberspace' (or at least
there are none inherent in the architecture), and connectivity and
content are technically open to anyone, irrespective of location or
nationality;
* egalitarianism.  Participants who are fulfilling particular
roles in some services have more power than others.  For example, in
the case of moderators of e-lists and forums, this extends as far as
blocking messages;  and in the case of MUDs, a user who controls the
space (a 'god' or 'wizard') can even exclude a particular identity or
IP-address from participation.  In most of the contexts created by
Internet services, however, users do not experience a hierarchy of
authority, and people behave as though they are, by and large, equal;
* openness.  Information about the Internet's fundamental
protocols and standards is readily accessible to those who seek it
out (although that may not be the case with particular services, or
with proprietary extensions to services);
* participation.  The Internet, and many of the services
available over it, embody no inherent barriers against active
participation by any party.  Moreover, many services arise from
collaborative activity among many individuals;
* community.  These factors result in a feeling, common among
many participants, that they are part of one or more e-communities;
and
* gratis services.  The costs involved in establishing and
running the Internet are hidden from the view of most users, and many
perceive it to be gratis.  In fact consumers do pay, in particular
through telephone bills, equipment purchases and software licensing
fees;  and a great many people and organisations absorb costs because
it suits their personal psyche or organisational business model to do
so.

In every case, the popular perceptions of cyberspace as being free
and open are partly misguided;  but they are part of the shared
hallucination.

2.6 The Digital Era's Promise of Freedoms

An expectation of greater freedoms has been one of the recurring
elements within electronic communities.  This was reflected in the
term 'electronic frontier', which invoked the image of the 'wild
west' of the U.S. in the early-to-mid 19th century, where pioneers
were relatively unfettered by law enforcement agencies.  The term was
used in the title of a significant association formed to defend
netizens' rights, called the Electronic Frontier Foundation (EFF)
(Kapor & Barlow 1990).  It has also been co-opted by other
organisations, such as Electronic Frontiers Australia (EFA).

The kinds of freedoms that net-users expect are documented in Clarke
(1997e).  One aspect of particular significance is the expectation of
greatly enhanced freedom of access to information, summed up by the
call-to-arms 'information wants to be free' (Clarke 1999f).
Particularly since 1995, the efforts of millions of people have
resulted in an unprecedented volume of information being made
available, readily discovered and easily and quickly downloaded to
individuals' own machines, for display, analysis, printing,
re-publishing and adaptation.

The nature of the liberating influences needs to be considered at
several levels.  At the level of individual behaviour, the following
were evident:
* freedom of information access, both on request and in the
context of formal teaching-and-learning (e.g. Perrone 1996);
* freedom of information expression, through the ease with
which any person could e-publish;
* assured archival of information, and reliable re-discovery of
it.  This was a result of spiders seeking out content, indexing it,
and making that index available for searching;  and the storage of
information in multiple locations, through 'mirroring'.  The effect
was an emergent, collaborative virtual e-library;  and
* perhaps, the inter-connection of personal and household
appliances (Odlyzko 1999).

At the level of social behaviour, the following were in prospect:
* greatly enhanced communities, because they could now be based
on affinity or common interest alone, whether or not the members are
in close proximity to one another;
* new forms of collaborative origination of intellectual
property works, and of interaction among works (e.g. Sewell 1997,
Hibbitts 1997);
* breakdown of the tyranny of distance.  Distance and time-zone
isolation disadvantage the inhabitants of nations that are far from
the majority of the world's population.  The net effectively brings
distant countries closer to the rest of the world (see, for example,
Titelbaum 1997).  This is globalism of a kind, although not the
economically motivated form pursued by the G8 and the transnational
corporations whose interests the G8 serves;  and
* support for diasporas to discover one another and virtually
unite, resulting in greatly enhanced prospects for the survival of
threatened cultures and languages.

At the level of economic behaviour, the following were evident or in
prospect:
* convenient e-payment mechanisms, including both identified
and anonymous forms (e.g. Levy 1994);
* consumer power, through more direct action against suppliers
whose behaviour was seen to be unreasonable (e.g. Andresen 1999);
* consumer power, through aggregation of demand (Hagel &
Rayport 1996, Hagel & Armstrong 1997, Hagel & Singer 1999);  and
* either a new economics (e.g. Kelly 1997, Goldhaber 1997,
Ghosh 1998), or at least a change in emphasis towards particular
kinds of economics (Shapiro & Varian 1999).  The new economics might
be more or less advantageous for large-scale corporations.  This is
because existing middle-men are under threat of 'disintermediation'
through direct selling by manufacturers to the ultimate consumer;
but, on the other hand, cyberspace creates opportunities for
're-intermediation'.

At the level of political behaviour, the following were in prospect:
* greatly enhanced freedom of thought, as a result of the
increased information flows;
* significantly reduced scope for governments to manage
information flows;
* as a result of the reduced capacity for governments to
manipulate the populace, increased freedom of action for people;
* voter power and enhanced capability to participate in
political activity, through e-democracy of one form or another (e.g.
Ronfeldt 1992, Schwartz 1994, Morrisett 1996, Evans 2000);  and
* an outlet for citizens in un-free countries, whereby the
iniquities of the regime could be published to the world, and
political opposition could be organised (e.g. Pantic 1997, Bennahum
1997, Krebs 2001).

The person who probably encapsulated the promise of the Internet most
effectively, and certainly most famously, was John Perry Barlow
(1994, 1996a - "We will create a civilization of the Mind in
Cyberspace", and 1996b).  Politicians who hailed the prospects that
the Internet offered included the European Commission (Bangemann
1994), and U.S. Vice-President Al Gore, who made much of the
'information superhighway' metaphor.  For an analysis of Gore's
actual contributions, see Wiggins (2000).  See also Clarke (1994d).

Doubters of course existed (e.g. Lappin 1995, Stoll 1995, Floridi
1996, Greenleaf 1998).  But the promises were not mere rhetoric.
Although the excited claims required some qualification, considerable
progress was evident in every one of the instances identified in this
section.

3. The Denial of Freedoms Through the Re-Imposition of Social Control

This part of the paper shifts the focus from the hopefulness of the
recent past to the far less encouraging patterns of the present.  It
catalogues the many factors that are at work to prevent the
liberating potentials of the Internet bearing fruit.

3.1 Shortpayments on the Promises

Even among enthusiasts, misgivings have been apparent about whether
the promise of the Internet can be fulfilled.  These include
questions about the extent to which it is feasible to achieve the
articulation of the alternative economic patterns, so as to supplement
or even supplant the incumbent conventional capitalist processes
(e.g. Stalder 1999).  In the aftermath of the collapse of the dot.com
bubble around 2000, the debate about whether there are such things as
'new business models' rages on.  Further expressions of concern about
misfit between social need and technological reality are to be found
in Brown & Duguid (2000) and Borgman (2000).

I expressed in Clarke (1994c) concerns about 'the limited focus
of debate'.  I drew attention to the way in which discussions about
information infrastructure were using a vocabulary that reflected
historical relationships between people and social institutions,
rather than the relationships that are more appropriate to the
present and the future.  Key elements of that vocabulary included:
* 'provide' and 'provider', and 'deliver' and 'delivery' (both
of which were implicitly performed by corporations and government
agencies, not by individuals);
* 'inform' (implicitly by authorities to targets, not by anyone
to anyone);
* 'customer', 'consumer' and 'user' (implicitly as targets of
larger entities);  and
* 'choice' (implicitly among a limited range of pre-set alternatives).

I stressed at the time the vital needs for a participative approach
rather than a master-servant framework, and proposed the following
principles for the design of information infrastructure:
* openness, in the sense of free choice by participants of
providers of content, services, delivery and platform;
* 'relative bandwidth symmetry', to ensure that there was no
inherent bias in the infrastructure that precluded individuals from
performing provider roles;   and
* plurality and dynamism in the design process, and the
avoidance of a narrow engineering approach to devising the
architecture, because there were so many different services involved,
so many participants, and so many different perspectives, some of
which inevitably conflict.

Seven years later, there is little evidence of these principles being
applied, and every sign that corporations and government agencies
perceive the Internet as just a broadcast medium with a low-bandwidth
back-channel, and themselves as projectors of messages at a largely
inert and manipulable public.

A further disappointment has been the volatility of web-site contents
and domain-names, and the very patchy extent to which electronic
archives have actually arisen.  The web-sites of disestablished
agencies such as the U.S. Office of Technology Assessment (OTA) tend
to simply disappear, with mirroring of the agency's publications
performed by anyone who has downloaded them in anticipation of their
disappearance (and who is prepared to breach copyright by placing
them on a web-site).  In Australia, government documents are
routinely removed when a change in government occurs, through what
appear to be active measures by the incoming government to deny
access to historical documents.  This creates serious difficulties
for policy research.

Another concern has been the so-called 'digital divide', which refers
to the dependence of Internet access on local infrastructure,
personal capabilities and motivation, and the consequential
inequitable scope for people in remote and economically deprived
regions to participate.  The term relates to the 'north-south' divide
between rich and poor nations, although some commentators have
co-opted it to refer only to economic and social inequalities within
individual countries, primarily the U.S.A. (e.g. Kahin & Keller 1995).

3.2 Downsides of the Digital Era

Change of any kind disturbs established patterns, and a fundamental
shift like the explosion of the Internet in the mid-to-late 1990s
inevitably had substantial impacts that harmed some people's
interests, and caused many people concern.

Although some people have taken advantage of the Internet to perform
their work in or close to their homes, others miss the social aspects
of conventional workplaces.  Another concern has been the emergence
of sub-contracted tele-working, with corporations taking advantage of
telecommunications to reduce not only their infrastructure costs but
also their salary-costs, and hence the income-levels of some
categories of employees (such as call-centre staff).

Another area of concern for some people has been the undermining of
intellectual property.  Some goods and services have become easily
replicable as a result of digital technologies.  As a result,
individuals who originate and sell such works might see their
income-stream reduced.  This applies to artists of various kinds who
both originate and sell to the work's ultimate consumer.  It also
applies to those whose works are processed through intermediaries,
although in that case the originator generally gains only a small
proportion of the total revenue.  There are counter-arguments,
however, to the effect that intellectual property is untenable in the
digital era, that the long production-lines for publishing are no
longer needed, and that new business models will rapidly emerge
whereby originators will still be rewarded.  This critical issue is
further considered in sections 3.4 and 3.6 below.

Predecessors to the Internet such as the Usenet news network had
already involved many instances of misbehaviour (e.g. Rosenberg
1993), and emergent applications of the law to human behaviour on the
net (e.g. Auburn 1995).  As early as April 1995, I compiled a
substantial list of what I referred to as 'dysfunctional human
behaviour on the Internet' (Clarke 1995a).  This included accidental
dysfunctionality (information overload, rumour and accidental
misinformation, negligent defamation, persistence, minor plagiarism,
inadequate care with data, and unauthorised spidering), socially
aggressive dysfunctionality (intentional misinformation, flaming,
intentional defamation, harassment, mail-bombing, obscenity,
incitement, impersonation and surveillance), economically aggressive
dysfunctionality (spamming, advertising, promotion and soliciting,
secondary use of data, serious plagiarism, abuse of intellectual
property rights, breaking into systems, viruses and worms, and
security breach) and avoidance dysfunctionality (circumvention,
anonymisation and obscuration).

Harassment, for example, extends beyond the conventional problems of
aggressive language and persistent unwelcome communications, through
electronic stalking, to an account of 'a rape in cyberspace' (Dibbell
1993).

Freedom of speech has a counterpoint in freedom to indulge in
salacious gossip about individuals, rumour-mongering, or indeed
merely publishing a true statement, all of which can constitute
defamation in one jurisdiction or another.  An early, contentious
example was the book published by Mitterrand's doctor very shortly
after the sometime French President's death in 1996.  More recent
examples have been fought out between parties in the commercial
realm, in particular Macquarie Bank v. Berg, and Gutnick v. Dow Jones.

An e-mail message broadcast to a long list of addressees is very
similar to other forms of publication, because it involves 'pushing'
information by one person to one or more other people.  The challenge
currently before the courts is to construct a sensible application of
current laws to the quite different pull-technology of the web.  For
a court to rule that placement of a document on a web-site
constitutes publication in every jurisdiction from which it is
subsequently downloaded would subject every web-site owner to the
vagaries of more than 300 jurisdictions;  and if that were the
case, then the chilling effect on free speech would be dramatic.  In
Macquarie Bank v. Berg, the judge refused to grant an interim
injunction for that reason (Parry 1999).  See also Godwin (1996) and
Martin (2000).

There has also been disappointment about people's limited
exploitation of the net's capacity for openness and diversity.  The
consumer mentality is strong, and many people content themselves with
visits and re-visits to commercial web-sites such as Disney and sites
linked with popular TV programmes.  Habit is strong too, and many
people appear to limit themselves to specific e-communities rather
than sampling the available richness.  Commercial offerings are
exacerbating this by offering filtering and personalisation features
that narrow users' experiences to pre-selected categories.  Rather
than encouraging expanded worldviews, the Internet has even been
accused by some of fostering social division and extremism.

Another downside that emerged early in the life of the Internet was
attacks on Internet services.  Many of these were and are performed
because they demonstrate the attacker's skill and daring;  and as a
result the term 'exploits' is commonly used to refer to such attacks,
and the term 'malware' usefully describes software that performs, or
enables individuals to perform, such exploits.

A common form of malware is viruses (faqs.org 2001).  A virus is a
program that attaches copies of itself to other programs.  It may
perform no other function than its own replication.  It may, however,
'carry a payload', i.e. perform some additional function.  This may
have nuisance value (e.g. displaying a funny face on the screen of
the infected computer), or serious consequences (such as the deletion
of the directory that records the location of files stored on the
infected machine).  In order to avoid early detection, viruses are
commonly designed to delay the performance of functions other than
replication.

Viruses had long been transmitted via diskette.  They had also taken
advantage of pre-Internet data communications, such as programs
stored on bulletin boards.  A virus that propagates via
telecommunication means is commonly referred to as a 'worm' (after
Brunner 1975).  The Internet provided a greatly enhanced capability
for worms to infect large numbers of machines in a short period of
time.  The first celebrated instance of a successful worm was that
released by Robert Morris in 1988 (Reynolds 1989).  Because of the
naiveté of users, inadequate care by some information systems
professionals, and the culpably insecure state of software
distributed by some vendors, worms continue to flourish, with
multiple major security alerts occurring each year.

Beyond viruses and worms are other forms of attack.  These include:
* 'break-ins' to computer services and to computer-storage such
as databases of credit-card details (usually referred to as
'hacking', but more sensibly described as 'cracking');
* actions taken once computer services have been broken into,
such as the defacing of web-pages (most commonly their replacement
with an alternative page), and the extraction of copies of files;  and
* 'denial of service' attacks.  These commonly involve the
bombardment of a server with so much traffic that it is unable to
cope with the volume and becomes unusable.  The source of a simple
attack is readily traced, and hence sophisticated attackers take
advantage of weaknesses in other people's servers to cause the
traffic to be generated by multiple servers.  This is referred to as
a distributed denial of service (DDoS) attack.

Servers have proven to be highly susceptible to even quite simple
attacks.  This is because a great deal of software is distributed in
an intrinsically insecure state, especially those of the supplier
dominant in several segments of the market, Microsoft.  In general,
software is not deemed to be a product for the purposes of product
liability law.  The result is that actions against suppliers of
software products have to be by means of torts such as negligence,
which have much higher thresholds of tolerance of supplier and
product deficiency than do product liability laws;  or through
'deceptive conduct' provisions, which seldom provide aggrieved
parties with satisfaction.  In short, suppliers are subject to
seriously inadequate disincentives against distributing inherently
insecure software.

Another concern felt by many people has been the undermining of
existing censorship controls.   The term 'censorship' is used here in
its generic sense, to refer to the prevention of public access to
information of any kind, for any reason.  Examples of such content
that has arguably become more readily available as a result of the
Internet include the teaching of bomb-making, and possibly of means
of conducting insurrection;  the incitement of violence, particularly
racially-related violence, including the sale of Nazi memorabilia
(see, for example, BBC 2000);  and access to pornography that
breaches censorship laws within a particular jurisdiction.

Although there is a firm basis for such concerns, discussions are
often heated, and judgement impaired.  The extent of abuse is often
exaggerated, and non-infringing content and actions are often caught
up in the process.  For example, little evidence appears to have been
advanced that the availability of instruction in violence has greatly
increased since the advent of the Internet;  and a study of
recruitment by extremist groups on the Internet found little to
suggest that a serious new problem had arisen (Ray & Marsh 2001).  On
the other hand, restrictions on the availability of sex-related
materials to minors, by means of 'behind-the counter' storage and
'adults-only' shops, are not as readily achieved on the Internet.

Moreover, what many people refer to as pornography may infringe their
good taste, but is in many jurisdictions not actually illegal.  In
addition, a great deal of misinformation has been peddled in relation
to pornographic materials thrusting themselves in front of children.
The 'porn in kiddies' bedrooms' furore arose from a very
poorly-conducted, and arguably fraudulent, undergraduate assignment
publicised on the cover of Time magazine as though it were fact
(Elmer-DeWitt 1995).  For negative reviews of the article, see
Hotwired (1995).

The Internet is not an unmitigated 'good thing', and its downsides
need to be recognised and addressed.  Many of the arguments offered
by sceptics have, however, been found to be exaggerated, and some
have even been unfounded.

3.3 Challenges to Governments

The Internet's incidental social impacts motivated some calls for
regulation, and some extensions to law have resulted.  On the other
hand, vested interests quickly discovered that the changes had
negative impacts for them which they regarded as very serious.  This
section considers effects on governments, and the next those on large
corporations.

One concern has been the undermining of the rationale underlying
existing laws.  The particular case of copyright law is considered in
the following section.  In the area of on-line gambling, many
governments' activities have been decidedly hypocritical.  They are
subject to the fiscal imperative of sustaining taxation revenues from
existing licensed gambling venues, and the need to appease powerful
commercial lobby-groups.  They have accordingly been forced into the
prohibition of Internet gambling, even though some of them had the
opportunity to address the social ills by attracting on-line casinos
into their jurisdiction while imposing consumer safeguards (Schwartz
1995, Clarke, Dempsey et al. 1998c).

A further concern is the unenforceability of existing laws.  One
reason for this is the difficulties confronting investigation of
criminal acts, and the gathering of information with evidentiary
value (Clarke, Dempsey, et al. 1998b).  Examples of activities that
have created challenges are the use of various Internet channels to
promote fraudulent schemes, and for what amount to unlicensed
investment newsletters and stock-price manipulation.

Criminal activities are difficult to pursue when they are
extra-jurisdictional (taking place outside the reach of the
jurisdiction in question) or trans-jurisdictional (that is to say
that parts of a transaction take place in two or more separate
jurisdictions).  It is especially inconvenient where one of the other
jurisdictions has laws that make the acquisition of investigative or
evidentiary information difficult.  In the case of the Internet, the
geographical location of an event may be very difficult to determine,
and hence some actions may be effectively supra-jurisdictional such
that no court can reasonably claim to be able to consider the case.

One effect of the Internet has been to lower the cost of locating
transactions in convenient jurisdictions such as 'tax-havens', and
hence of so-called 'regulatory arbitrage'.  As a result, they are no
longer accessible only by corporations and wealthy individuals.  See
Froomkin (1997), Kahin & Nesson (1997) and Clarke (1997a).

On the other hand, it seems unlikely that cyberspace will remain a
lawless electronic frontier for very long.  Lessig (1999) argued that
there were two kinds of what he called 'code':  'East Coast Code' or
statute law;  and 'West Coast Code' or "instructions embedded in the
software and hardware that make cyberspace work" (p.54).  His
argument was seriously deficient, because in defining 'code' as only
the software and hardware (p.6), it overlooked many aspects of the
architecture of the Internet that were outlined in section 2.3 above.
Although Lessig did later use the term 'architecture' (pp. 87,
100-108, 220-221, 236-237), his usage remained too imprecise.  What
Lessig did achieve, however, was to highlight that the Internet's
architecture embodies some capacities for control, and that it can be
adapted to embody more.  This will be further discussed in a later
section.

Another consideration is that communities within cyberspace exhibit a
variety of self-regulatory mechanisms, and these also represent a
kind of law.  The possibility also exists that a new internationalist
jurisdiction may emerge, specifically to deal with behaviour in
electronic contexts.  This would be a serious challenge to
governments, not only because it would compromise their sovereignty,
but also because of the difficulties of definition of the boundary
between geographical and virtual jurisdictions.

Beyond the question of the rule of law, the Internet is perceived by
some people as a threat to national sovereignty and cultural
integrity.  This argument goes along the lines that the Internet is
heavily dominated by the U.S.A., variously in terms of the
technology, the services and the content;  and Internet traffic
continues to be heavily dominated by the English language.  The
Internet therefore imposes foreign values, effecting yet more
intrusions into the lingual and cultural integrity of countries and
societies whose first language is other than English.  It is
therefore a weapon of cultural imperialism.

Finally, the freedoms that the Internet has given rise to include
some that, at least from the perspective of governments, are of an
undesirable nature.  Signal among these 'undesirable freedoms' are
the following:
* freedom of access to governmental and parliamentary
information.  This exposes dubious practices to public view.
Although considerable progress has been made in some countries,
access continues to be patchy, even within the U.S. Congress (Corn
2000);
* the freedom to contest official history.  Governments
inevitably indulge in historical revisionism, or portray aspects of
history in a new light, or 'put their own spin on' past events.  The
Internet tends to make the credibility of a government's depictions
subject to greater challenge than before.  Two examples with contrary
implications are attempts to maintain the Japanese version of the
emancipation of Korea and northern China in the 1930s, and the
preclusion of denial of the Holocaust.  This challenge arises from a
combination of:
- the freedom of originators and adaptors of content to publish
their materials widely;  and
- the inbuilt resistance to attempts to suppress content,
because of the ease with which copies are reticulated, copies are
stored in multiple locations (or 'mirrored'), and copies are
discovered;
* the freedom to communicate privately.  National security and
law enforcement agencies are used to being able to intercept and
understand messages and conversations.  They have successfully
constrained freedoms of the entire population through 'mail covers'
and 'wire-tapping', because a (small) proportion of messages and
conversations, if captured, provide intelligence about or evidence of
serious criminal behaviour.  Various aspects of the Internet challenge
that cherished power;
* the freedom to utter statements that are anti-government or
seditious.  This has been the target of action not only by
governments of un-free countries such as China, but also of
authoritarian governments in developing countries such as Malaysia,
and advanced economies such as Singapore (e.g. Barme & Sang 1997,
Tao 2001);  and
* e-democracy.  This creates real risk of an increase in
participative democracy at the cost of power-relationships embedded
within contemporary representative democracy.

3.4 Challenges to Powerful Corporations

Corporations, meanwhile, have perceived a rather different collection
of ills arising from the Internet.

One has been challenges to corporate brand-names and images,
approximately equivalent to the defamation of corporations.
Organisations that have been subjected to close attention by consumer
and public interest advocates in a movement usefully referred to as
'corporation-watch' include McDonalds, Nike, Monsanto, Coca-Cola, and
Nestlé.

Corporations tend to regard the community aspects of the Internet as
a resource to be plundered once it becomes apparent how.  One aspect
of the e-community ethos concerns them, however, because they
perceive it as socialism reinvented in a dangerous form.  This is the
so-called 'open source' phenomenon.  The open source movement argues
that software should be licensed without cost, but subject to
conditions that ensure that bugs are reported, fixes publicised, and
enhancements made available subject to similar conditions (Stallman
1992, Raymond 1998, Moglen 1999, Bezroukov 1999).  Open source is a
concern for large corporations in two ways:
* it involves considerable activity that represents a threat to
sales by corporations;  and
* it is supported by economic arguments that justify a great
deal of economic activity being effectively performed without the
profit motive playing a significant role.  This is seen by champions
of the corporate sphere as an affront, because it represents a denial
of the legitimacy of capitalism.

Whereas the 'cooperative' movement is an object of derision among
corporate executives, open source is threatening to them.  Despite
those concerns, many small companies appear to be operating
effectively, creating and using open source software, and a few large
corporations are investigating it, including IBM in the context of
the operating system Linux.

Another important Internet aphorism and battle-cry is 'Information
wants to be free' (Clarke 1999f).  The open movement has been
applied to information as well as source-code, and has thus given
rise to arguments for open content, in such areas as technical
protocols and standards but also education and the law.  See, for
example, the Australasian Legal Information Institute (AustLII),
which makes freely available the corpus of primary and secondary law
of Australia and New Zealand, and whose underlying philosophy and
software are being applied in countries as diverse as Ireland and
Mongolia.

Another problem for some corporations is the difficulties for the
broadcast mode of communication that are presented by the Internet in
its current manifestation.  Broadcasting comprises one-way
dissemination of undifferentiated content, with no or a very limited
back-channel to the broadcaster.  Radio and television represented
great progress respectively about a century and a half-century ago.
The potentials of contemporary telecommunications technologies, and
the expectations of a now far-better-educated public, go far beyond
them.  But leaving the broadcast era behind is contrary to the
mind-set of the one-at-many direct marketers whose emergence
broadcast media stimulated.  After less than a decade of the Internet
era, it is still perceived by large consumer marketing corporations
to be contrary to their interests.

The most nervous reaction among corporations, however, has been that
of publishers, who have come to perceive massive threat to the
intellectual property, especially copyright, which they have garnered
over the years, and which is the basis of many micro-monopolies, and
hence super-profits.  The impact of the digital era on the
income-streams of artists was noted earlier.  The publishing of
magazines, books, music and films has long been industrialised.  The
copying and distribution of digital media by parties other than the
copyright-owner places under threat the revenue, profits,
employment-levels and even survival of corporations in those
value-chains.

Disputes of this nature have accompanied a succession of
technologies, long before the advent of the Internet.  In recent
times, for example, the explosion of video-cassette recorders (VCRs)
resulted in attempts by copyright-owners to have laws enforced that
were clearly no longer workable.  Once the smoke had cleared, and the
industry's arguments carefully assessed, the practicability and
reasonableness of their proposals were found wanting.  Instead,
copyright laws were amended to actually permit people to lawfully do
what they were already doing (recording broadcast copyright works,
for the purposes of personal use, within-home entertainment, and
'time-shifting').

Nonetheless, large corporations dependent on copyright have pursued
the line that the use of the various capabilities that have become
available during the digital era represent theft.  Because of the
qualities of books, the book-publishing industry has not been
significantly affected during the first decade.  The shrink-wrapped
software industry, dominated by Microsoft and spearheaded by the
Business Software Alliance (BSA), has long regarded itself as being
threatened by the digital era.  The film industry has also been very
active, particularly in attacking people and companies that have
cracked the codes designed to protect digital video/versatile discs
(DVDs) from being copied.

A further development has been the emergence of so-called
'peer-to-peer' (P2P) file-sharing tools (Clarke 2000d).  Most of the
attention to date has been attracted by Napster (e.g. Mann 2000).
This was a centralised catalogue.  The files that were the subject of
the catalogue were not, however, stored centrally, but rather on very
large numbers of servers throughout the world.  They were primarily
music in the MP3 format, and many of the files allegedly involved
breach of copyright.  At its peak during the second half of 2000, the
volume of activity on Napster showed arguably the fastest growth of
any metric on the Internet up to that time.
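
The architecture, rather than the music, is what mattered legally.
The following is a conceptual sketch only (not Napster's actual
protocol, and with invented names): a central catalogue maps each
title to the peers holding copies, while the files themselves remain
on the peers:

    # A centralised catalogue: one dictionary, one operator, one
    # choke-point.  Peers and titles are invented for illustration.
    catalogue = {
        'song-a.mp3': ['peer1.example.net', 'peer7.example.net'],
        'song-b.mp3': ['peer3.example.net'],
    }

    def find_sources(title):
        return catalogue.get(title, [])

    print(find_sources('song-a.mp3'))
    # ['peer1.example.net', 'peer7.example.net'] -- but delete the
    # catalogue and nothing can be found, though every copy survives.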

Unsurprisingly, the music-recording industry, which is dominated by a
few very large corporations and represented in particular by the
Recording Industry Association of America (RIAA), perceived its
interests to be seriously threatened by Napster.  Because there was a
central catalogue, a 'choke-point' existed, and action by RIAA
between 1999 and 2001 to gain an injunction against the operator of
the catalogue could, and eventually did, result in the collapse of
the unregulated, copyright-breaching service (Clarke 2000d).

Such a strategy is unlikely to work against a P2P scheme that
features a distributed catalogue.  By the time of Napster's demise,
there were already more than a dozen contenders, the best-known of
which was Gnutella (so-called because it spreads easily ...).  In
early 2001, despite the much greater difficulties involved, RIAA had
already commenced its pursuit of these services as well, in the hope
of somehow sustaining its control over the market for copyrighted
music.  The second prong of its attack, code-named the Secure Digital
Music Initiative (SDMI), is a desperate search for some form of
cryptographic or steganographic mechanism that could prevent
unauthorised use and copying of files.
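
The distributed catalogue referred to above can be sketched in the
same illustrative terms.  In the following simplification of
Gnutella-style searching (the flooding and time-to-live ideas are
real; the code and names are invented), each peer indexes only its
own files, so there is no central component against which an
injunction could be directed:

    class Peer:
        def __init__(self, name, files):
            self.name = name
            self.files = set(files)
            self.neighbours = []

        def query(self, filename, ttl=3, seen=None):
            seen = set() if seen is None else seen
            if self.name in seen:
                return set()        # already visited: stop the flood here
            seen.add(self.name)
            hits = {self.name} if filename in self.files else set()
            if ttl > 0:             # the time-to-live bounds the flood
                for n in self.neighbours:
                    hits |= n.query(filename, ttl - 1, seen)
            return hits

    a, b, c = Peer("a", []), Peer("b", ["song.mp3"]), Peer("c", ["song.mp3"])
    a.neighbours, b.neighbours = [b], [c]
    print(a.query("song.mp3"))      # {'b', 'c'}: no central index to enjoin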

The significance of P2P extends far beyond music.  Every category of
object that can be expressed in digital form is capable of being
reticulated and mirrored using a combination of distributed database
and distributed directory approaches.  Revenue-generating images and
video are forthcoming battlefields.  From the social and political
perspectives, however, the most critical aspect is not
entertainment but information;  and high-volume, time-critical
delivery of content such as films is less relevant than simple text
formats.

3.5 Concurrent Developments in the Technologies of Surveillance

The Internet was only one of a number of relevant technological
developments during the latter part of the twentieth century.  A
considerable collection of technologies has emerged whose purpose is
to monitor individuals and populations in a cost-effective manner.
'Dataveillance', or the surveillance of people through data trails
that they leave behind them, was examined in Clarke (1988).

The underlying requirement is individual data trails arising from
particular kinds of transactions, which can be consolidated and
subjected to analysis.  Examples of data trails include ATM
withdrawals, credit-card transactions, and records generated by
telephone calls, building access and the use of toll roads (Clarke
1996a).
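
The consolidation step can be illustrated with a short sketch.  The
records and the identifier below are invented; the point is only that
trails from separate systems, once keyed on a common identifier,
merge into a timeline of one person's day:

    trails = {
        "atm":   [("id-42", "2001-08-28 08:05", "cash withdrawal, city branch")],
        "tolls": [("id-42", "2001-08-28 08:40", "toll gantry, inbound motorway")],
        "phone": [("id-42", "2001-08-28 09:10", "call placed from office line")],
    }

    def consolidate(person_id):
        events = [(when, what)
                  for records in trails.values()
                  for pid, when, what in records
                  if pid == person_id]
        return sorted(events)   # ISO-style timestamps sort chronologically

    for when, what in consolidate("id-42"):
        print(when, what)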

Further technologies that have emerged during the closing decades of
the century include the following:
* ubiquitous computing.  This involves the integration of
mobile telephones, portable computers, and personal digital
assistants;  and 'computer-wear' such as watches, rings, brooches and
spectacles;
* video-surveillance, incorporating pattern-matching to
recognise such things as the codes on vehicle number-plates (e.g.
Dixon 1995, PI 1999);
* digital rights management technologies.  These comprise
active measures that invade the private space of users of electronic
resources (Greenleaf 1998, Greenleaf 1999, Clarke & Nees 2000,
Gilmore 2001);
* identification technologies (Clarke 1994e) and identity
authentication technologies (Clarke 1999d), including chips embedded
in carriers such as smartcards (Clarke 1997f), digital signatures
(Greenleaf & Clarke 1997, Clarke 2001g), and biometrics (Clarke
2001f);  and
* the combination of identification and locator technologies,
such as the Global Positioning System (GPS), into personal mobile
appliances, such as mobile telephones, in order to produce location
and tracking infrastructure for ubiquitous mass surveillance (Clarke
1999h).

The surveillance society has arrived, without people really noticing
(Clarke 2001b).

Suppliers of goods and services have concocted imperatives that
demand the development and application of these invasive
technologies.  In addition to technological determinism ('I can
therefore I must'), drivers for these privacy-hostile and
freedom-hostile applications have also included marketing determinism
('I can imagine they need it, so I must make them need it'), fiscal
determinism ('everything must cost less tomorrow than it did today'),
and what might be termed 'original sin' determinism ('people can't be
trusted', or, as law enforcement agencies are reputed to express it,
'10% of the population is inveterately honest, 10% is inveterately
dishonest, and with the other 80% it depends').

During the second half of the twentieth century, the public sector
has been both monolithic and multi-partite:  individual agencies have
exercised independence from others when it suits their ends, but
clubbed together in other circumstances.  The private sector has been
similar, with competition complemented by collaboration within
conglomerates, joint ventures, strategic partnerships, and industry
associations.  The public sector has, among its other roles, been
expected to regulate the behaviour of corporations;  but has also, of
course, used corporations as suppliers of goods and services.

The power of the organisations that are able to deploy surveillance
technologies has been enhanced by the recent tendencies towards
information-sharing among organisations across both the public and
private sector, and towards outsourcing of 'non-core functions' in
order to produce a matrix that has blurred the once-distinct
boundaries between the public and private sectors.

Power-sharing in the twenty-first century appears to be based on
three models.  One is the U.S. aero-space-defense
complex of the latter part of the twentieth century, with its close
linkages between companies and public sector agencies.  A second is
the Japanese keiretsu, the horizontally and vertically linked
industrial structure of post-war Japan;  and the third is the Korean
chaebol - conglomerates characterised by close control, authoritarian
management, and centralised decision making.  In the emergent scheme,
large corporations and government agencies get to share power;  but
citizens and consumers do not.

3.6 Measures to Defeat Internet-Induced Freedoms

Governments, both in relatively free and relatively un-free
countries, together with major corporations whose profits were
dependent on copyright, have all perceived the need to take action
against the excess of freedoms wrought by the explosion of the
Internet.  The fightback has been sufficiently vigorous to be worth
referring to as a counter-reformation.  This section considers
measures undertaken firstly by governments and then by corporations.

During the mid-1990s, a series of moderately hysterical
over-reactions occurred, with law enforcement agencies exceeding
their powers or applying their powers in ill-judged ways, and
Internet service providers misunderstanding the law's requirements of
them.  An early example was the Bavarian police intervention with
CompuServe (Kunze 1996, Lewis 1996).  There is little doubt that
adaptations of the law are necessary and appropriate;  and that
collaboration among the law enforcement agencies of multiple
countries is essential.  The question is the extent to which freedoms
of the law-abiding majority are permitted to be trampled on in the
attempt to rein in those activities of a small minority that are
actually criminal.

A recent study of control by authoritarian regimes over Internet
connectivity and usage concluded that there was already a range of
ways in which the liberating potentials of the current Internet could
be controlled by a determined government (Kalathil & Boas 2001).

Governments of relatively free countries are subject to
constitutional and parliamentary constraints.  To overcome these
hurdles, they use law and order issues to generate public support for
authoritarian measures.  The opportunities range from the much-used
terrorist attack threat, through the risk of theft of fissionable
materials, and the increased incidence of burglaries by drug-addicts,
to the theft of credit-card details on the Internet.  (The epithet
that sums this up is that 'all that is necessary to convert a liberal
to a conservative is the theft of their child's bicycle').

National security and law enforcement agencies have furthered their
aims by utilising each new wave of virus transmission over the
Internet, of web-site 'hacking', and of distributed denial of service
attacks.  Particularly in view of the convenient timing of many of
the events, it has even been suggested that the sources of some of
the attacks may be people at least tolerated by, and perhaps even
directed and funded by, national security and law enforcement
agencies, especially those of the United States (e.g. Madsen 2001).

One approach to tightening controls is to impose adaptations to
Internet infrastructure.  A key example has been proxy-servers for
the purpose of filtering prohibited materials such as political
speech treated by the regime as seditious (e.g. in Singapore and
China) and illegal pornography (e.g. in the U.S.A. and Australia).
These impositions are highly unpopular with many users and service
providers, not only because they deny freedom of access and choice,
but also because they impose overheads, they are only partially
effective, and they involve considerable collateral damage (such as
the exclusion of breast cancer information in the process of blocking
sources of pictures of naked women intended to provide sexual
excitement).
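
The collateral damage follows directly from the crudeness of the
filtering rules.  A deliberately naive sketch (the blocklist and the
pages are invented) shows why a keyword filter cannot distinguish
sexual content from medical information:

    BLOCKED_WORDS = {"breast"}    # the kind of rule real filters have used

    def filter_page(url, text):
        if any(word in text.lower() for word in BLOCKED_WORDS):
            return url + ": BLOCKED"
        return url + ": allowed"

    print(filter_page("example.org/pin-ups", "pictures of naked breasts"))
    print(filter_page("example.org/oncology", "breast cancer screening advice"))
    # Both pages are blocked: the second is collateral damage, while
    # trivial rewording on the first would evade the rule entirely.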

The creation of 'computer crimes' pre-dates the explosion of the
Internet.  The pursuit of hackers has been conducted with energy, if
not always with precision and commensurate force (e.g. Sterling 1992).

The increasing Internet-dependence of the economy, businesses,
utilities, government, law enforcement and national security has
combined with poor-quality software to result in concerns extending
beyond mere susceptibility to vandalism, to the robustness of the
(national) information infrastructure.  See, for example, Schwartau
(1996) and Clinton Administration (1997).  Security threats are now
being routinely used by national security and law enforcement
agencies as justification for increased surveillance capabilities
over Internet traffic and content.  Although spear-headed by the
United States, this movement extends to many other governments by
means of bilateral relationships and multilateral clubs of such
agencies.

The fightback has seen intemperate statutory expansions of the powers
of national security and law enforcement agencies.  These statutes have
generally extended the criminal law to encompass particular kinds of
actions in relation to computers and networks.  In addition to
providing reasonable clarifications and extensions, they have
included seriously privacy-invasive and freedom-threatening measures.
These have included the criminalisation of activities that are
mainstream behaviour, in some cases weakening the requirement that
the prosecution prove criminal intent;  and removal of constraints on
agencies in a manner that invites abuse.  Examples include the U.S.
FBI initiative code-named Carnivore (Blaze & Bellovin 2000), the U.K.
Regulation of Investigatory Powers Act 2000 (popularly referred to as
the RIP Act), and the Australian Cybercrime Bill 2001.  The measures
have been passed through multi-governmental bodies (such as the
Council of Europe Cybercrime Convention), in order to coordinate the
provision of national laws, and acquire the gloss of international
imprimatur.

Beyond changes to the law, opportunities for control arise because
various aspects of the Internet are not quite as distributed as is
often assumed.  A salutary lesson was provided by Kashpureff's
demonstration of the fragility of the domain name system or DNS
(Diamond 1998).  Authorities in some countries have become adept at
the exploitation of 'choke-points' in order to strangle the net's
liberating influence (Mann 2001).  The Singapore Broadcasting
Authority, as the relevant agency within a small island nation with
high levels of control over all aspects of the economy and society,
has been readily able to force Internet traffic through
proxy-servers.  To date, China has also sustained tight control.  Its
government's power is in one sense less than Singapore's because of
the country's vastly greater scale.  On the other hand, China's
technological infrastructure is far less sophisticated, and its
traditions are even less favourable to democracy than is the case
with Singapore.

Another way in which the net's freedom can be attacked is through
increasing governmental involvement in Internet governance, and a
concomitant drift away from the original pattern of voluntary
collaboration among individuals.  In such countries as the U.S.A. and
Australia, new governing bodies are being constituted as non-profit
organisations, with government exercising a great deal of influence,
but unable to exert complete control.  It is unlikely that citizens
in all countries will be so fortunate.  In the case of the key
organisation ICANN, the negotiation of constitution, constituencies,
representational arrangements, structure, process and policies has
proven to be very difficult and protracted (Bayers 2000).

The use of existing choke-points is likely to be supplemented by
attempts to create new ones, and the hierarchisation of the hitherto
relatively open and distributed net.  A recent example of such a
measure was pressure on the Internet Engineering Task Force (IETF) to
adapt the Internet's architecture to facilitate traffic-monitoring
and content-monitoring.  To date, the IETF has been resistant to the
proposition, but this may well have been more for engineering reasons
than because of public policy considerations.

In the private sector, one all-too-apparent strand of the
counter-reformation has been activities designed to advantage
copyright-owning corporations.  These have included:
* appropriation of public information as private property of
corporations.  This has already occurred with statistical information
(Wigan 1992), geophysical data, and even the law (Wolfe 1994a).
Intellectual property regimes have been established in relation to
plant DNA, and there is a considerable risk that information about
the human genome will be propertised as well;
* claims by copyright-owning corporations, and suppliers to
them, that give the impression that copyright law provides the owner
with considerably broader rights than it actually does (e.g. Stefik
1997);
* increased contract-based publication, particularly through
subscription services.  Documents such as 'consultancy reports' are
priced so high that public and university libraries do not, and
realistically cannot, acquire copies.  This distribution process is
generally not accompanied by any arrangements to ensure that copies
become publicly available and archived, after the initial window of
sales opportunity has passed;
* extensions to copyright law such as the U.S. Digital
Millennium Copyright Act (DMCA) 1998, and similar statutes passed by
countries such as Canada and Australia following U.S. success in
convincing the World Intellectual Property Organisation (WIPO).
These have further protected the interests of the predominantly U.S.
publishing industries (Samuelson 1996).  The extensions criminalise
an excessively broad range of activities.  In particular:
- the technique of reverse engineering of software (which
enables a person to infer a program's design by examining its
behaviour) has been banned by the U.S. and by some governments
friendly to its position, despite it having long been a crucial
aspect of the innovation process;  and
- the manufacture and sale of devices whose predominant purpose
is deemed to be the circumvention of copyright protections have also
been criminalised, despite the importance of such tools for a variety
of other purposes (e.g. Samuelson 1999).

The implications of such manoeuvres were examined in Clarke (1999g)
and in Stalder (2001), and were argued to threaten freedom of
information flows, and create the risk of a new 'dark ages'.

Meanwhile, information technology suppliers have grasped
opportunities to further the interests of themselves and their
corporate clients.  One example is the distribution by software
vendors (primarily Microsoft) of versions of software whose features
include auto-reporting of unregistered Microsoft software.  Even
greater control over user patterns is being sought as part of
Microsoft's .Net initiative, by storing software centrally, rather
than on customers' own machines.

An especially insidious manoeuvre has been the adaptation of Internet
protocols to incorporate identifiers of various kinds.  Some of these
have been unique device-identifiers, which, for example, enable an
organisation to check which workstation or hand-held digital
assistant is being used.  Some kinds of devices (e.g. Ethernet
network interface cards, and Sun workstations) have always carried an
identifier.  A much more general-purpose identifier was recently
proposed by Intel, in the form of a Processor Serial Number.  This was
dubbed 'Big Brother Inside' by its opponents, and it appears that it
is not currently documented on the company's web-site.
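
How little effort is needed to read such an identifier can be seen in
a two-line sketch: Python's standard library reports the Ethernet
hardware (MAC) address in a single call, and any application that
runs on the machine could report the value home:

    import uuid

    mac = uuid.getnode()    # the 48-bit hardware address (or a random
                            # stand-in where it cannot be read)
    print("%012x" % mac)    # a stable, globally unique device identifier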

Another approach has been to use surreptitious techniques to force
the device or the client-software to disclose identifying
information.  Two common means whereby users' trust is abused are
cookies and web-bugs.
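
The web-bug technique can be sketched in a few lines of Python.  The
tracker's host name below is invented; the mechanism, an invisible
third-party image whose retrieval silently carries a query string and
replays any previously-set cookie, is the real one:

    TRACKER = "tracker.example"     # hypothetical third-party host

    def web_bug(page_url):
        # Embedded by the page author; the reader never sees the 1x1
        # image.  Fetching it tells the tracker which page was read
        # (via the query string) and who read it (the browser replays
        # any cookie the tracker set earlier, automatically).
        return ('<img src="http://%s/bug.gif?page=%s" width="1" height="1">'
                % (TRACKER, page_url))

    print(web_bug("news.example/article-17"))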

The most freedom-threatening initiatives of all have involved
measures to produce and profit from the surveillance nirvana of a
unique personal identifier.  Techniques that have been attempted to
date include:
* digital signatures and associated enforced identity
authentication and digital certificates (Clarke 2001g);
* multiple, successive manoeuvres by Microsoft, most recently
its Passport 'service';  and
* an alliance between the telecommunications industry and the
IETF to create a standard called ENUM.  The purpose is to establish a
single unique contact number for individuals, which represents not
merely a unique personal identifier, but a unique locating and
tracking tool at the same time (Falstrom 2000, Rutkowski 2001).  The
number-to-domain mapping is sketched below.
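
Under RFC 2916 (Falstrom 2000), the digits of a telephone number are
reversed, dot-separated and suffixed with e164.arpa, yielding a
single DNS name under which all of a person's contact points can be
published and looked up:

    def enum_domain(e164_number):
        digits = [c for c in e164_number if c.isdigit()]
        return ".".join(reversed(digits)) + ".e164.arpa"

    print(enum_domain("+61 2 6125 3666"))
    # -> 6.6.6.3.5.2.1.6.2.1.6.e164.arpa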

There is also the probability that market-determined directions that
the Internet takes will work against its hitherto liberating
influences.  Concerns include:
* the balkanisation of the Internet, as organisations carve off
segments for their own purposes, rather than leaving them available
as shared infrastructure;
* the gradual dominance of virtual private networks (VPNs)
running over the open public Internet.  This creates the risk that
the bandwidth available for the open communications component may
reduce, choking response-times;  and
* reduced investment in bandwidth, as a result of corporations
being unable to find business models that ensure return on
investment, and hence the failure of the infrastructure to sustain
the traffic.

There is no guarantee that the underlying protocols (especially TCP,
IP and DNS), and the processes whereby they are maintained, will
survive in their current form.  A revised version of the IP protocol,
IPv6, is now operational in what is usually referred to as Internet 2
(Montfort 1999), but whether this will be retro-fitted to the open,
public Internet is not yet certain.  Moreover, its impacts if it is
widely implemented are far from clear.

It would be entirely feasible, for example, to adapt Internet
architecture to support broadcast.  The experimental MBone protocol
did just this for radio transmissions.  The risk is that powerful
media organisations, supported by advertisers trying to sustain the
old-fashioned direct marketing way of life that broadcast media
created, may force change in Internet architecture that undermines
interactivity in order to enable broadcast-mode transmissions.

Finally, if the net continues to avoid attempts to control it,
governments and corporations may, in collaboration, invent a new set
of protocols.  These might severely restrict the freedoms that
threaten the powerful institutions, and force traffic onto new
networks that implement the prospective, new, power-friendly
protocols, resulting in the collapse of the Internet as it has been
known since the early 1990s.

There has been speculation that Microsoft's latest attempt to take
advantage of its market-dominance, in the form of Windows XP and
related products such as Passport and Hailstorm, may signal a move to
adapt the Internet Protocol (IP) layer to suit its own ends and those
of its business partners, such as the U.S. and U.K. governments.  See
Procomp (2001), EPIC (2001b), and Cringely (2001).

Like the library in Eco's 'The Name of the Rose', the future
information infrastructure may be devised to deny access to
information rather than to enable it.

4. The Scope for Anti-Authoritarian Countermeasures

Will electronic technologies revert to a virtual form of the
centralist notion, and thereby hand victory to the authoritarianism
of the powerful?  The tension was summed up at the very beginning of
the open, public Internet era by the caption "It's the FBIs, NSAs,
and Equifaxes of the world versus a swelling movement of Cypherpunks,
civil libertarians, and millionaire hackers.  At stake: Whether
privacy will exist in the 21st century" (Levy 1993).

One approach is for the public to use the freedoms that are
available, before it's too late.  This means activism, and organised
movements to discredit, undermine and defeat manoeuvres of an
authoritarian nature.  Advice on the techniques and tools of virtual
activism is provided by sites such as NetAction.

These need to focus on key elements of the new economy and society,
such as open source, open content, freedom of access to cryptography,
and P2P technologies.  A specific movement is The Public Library of
Science, which is seeking to recapture scientific publications from
the grasp of publishing companies.  See also Harnad (2001).

This is in part a question of political economy;  but technology may
play a hand.  There is already a category of tools commonly referred
to as privacy-enhancing technologies (PETs).  One fundamental example
is the use of cryptography to protect the content of messages while
they are in transit, and the content of computer-based files.
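
A minimal sketch of that fundamental example, using a present-day
Python library as a stand-in for the PGP-style tools of the period
(install with 'pip install cryptography'; the message is invented):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # a secret shared by the correspondents
    cipher = Fernet(key)

    token = cipher.encrypt(b"the content of the message")
    print(token)                  # all an eavesdropper on the wire sees
    assert cipher.decrypt(token) == b"the content of the message"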

A much more vital category of PET comprises tools that enable
anonymity, taking advantage of features and potentials within
existing infrastructure.  They were anticipated in literature
(especially Brunner 1975), and occupy the spare time of many
fringe-dwellers on the net;  but some tools are being developed by
large, respectable corporations.  See IPCR (1995), Clarke (1999d),
Kling et al. (1999) and EPIC (2001a).  These tools appear to have
caused the U.S. Government sufficient concern that it has gained a
patent for a key technique whereby the identities of message-senders
and receivers can be protected (US Patent 6,266,704, Onion Routing,
of 24 July 2001).  It is to be expected that this will be utilised in
order to deny corporations the right to implement the technique.
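
The layering idea behind onion routing is itself easy to sketch.  In
the following simplification (real onion routing also wraps the
routing data at each layer; here only the message is wrapped), the
sender encrypts once per relay, so that each relay can peel only its
own layer and learns only its neighbours, never both endpoints:

    from cryptography.fernet import Fernet

    relays = [Fernet(Fernet.generate_key()) for _ in range(3)]

    onion = b"the actual message"
    for relay in reversed(relays):    # innermost layer is the last relay's
        onion = relay.encrypt(onion)

    for relay in relays:              # each hop peels exactly one layer
        onion = relay.decrypt(onion)
    print(onion)                      # b'the actual message'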

There is a need for a concept broader than PETs, such as
freedom-enhancing technologies (FrETs?).  Examples that already exist
are Freenet, and the other 'peer-to-peer' (P2P) arrangements, which
subvert the intended central control of digital resources and enable
them to be shared around the community.  See Clarke (2000d) and Wired
(2001), but also Clarke (2001a).

Although large corporations have generally been slow to perceive
business opportunities in privacy-enhancing and freedom-enhancing
technologies, many small, motivated, risk-taking companies have been
very active, combining religious zeal for liberty with the desire to
apply venture capitalists' money to socially worthwhile undertakings.

The business sector is likely to be an ally, at least for short
periods of time, for another reason as well.  The measures being
passed into law entrench the power of existing large corporations.
In doing so, they stifle not only 'theft' and 'the enablement of
theft', but also 'innovation'.  This anti-competitive dimension will
attract the opprobrium of innovative corporations whose own
activities are challenged by the restrictive laws.  Gradually the
dynamic growth companies are likely to look more attractive to
politicians than the dinosaur publishing houses;  and the
freedom-constraining laws will at first be less enthusiastically
applied, then ignored, and finally washed out of the statute-books by
next-generation laws.

A further possibility is maverick States (which in the near future
might be defined as much by virtual spaces as by geographic places).
There are two reasons for the survival of maverick States (as
presaged, once again, by Brunner).  One is that some people manage to
find cracks in any authoritarian edifice, and the powerful are too
busy to commit resources to addressing the flea-bites, or choose not
to do so (in particular, because members of the elite use the same
cracks).  The other is that one element of the calculus of
authoritarianism is the perception of an enemy, and maverick nations
fulfil a role as that which needs to be hated.

There remains another alternative, which might be dubbed 'the
American way'.  In The New York Times of Sunday December 8, 1996,
Thomas Friedman coined the epithet that 'no two countries that both
have a McDonald's have ever fought a war against each other'.  This
is a much-travelled and not altogether supportable hypothesis (and
since the NATO bombing of Yugoslavia in 1999, Friedman's attention
has transferred to Starbucks).  Nonetheless, there is a real chance
that increasing prosperity and economic interdependence may make both
armed conflict and authoritarianism less common.  Applied to the
Internet, the proposition is that "by building a state-of-the-art
Internet, Beijing may have set an irreversible course", away from
authoritarianism and towards some form of democracy, and presumably
of capitalism as well (Sheff 2001).

Europeans are likely to baulk at the American way's dependence on
profit as the sole arbiter of good judgement.  A gentler argument
might be based less on positivist economics and more on political
economics and even social ethics.  For example, as was the case with
the copying of films using VCRs, the present 'copyright
owner-supremacist' positions being afforded to publishers might very
rapidly come to be seen as dysfunctional, because they place serious
constraints on innovation.  If so, each new set of anti-authoritarian
measures might need only to gain enough momentum to become credible,
in order that the tide might be turned, and authoritarian measures
rescinded.

This author's misgivings about whether anti-authoritarian measures
will be adequate are not shared by John Perry Barlow.  He reflected
the ebullience of American libertarianism when he confidently
declared not only that P2P could not be defeated by the RIAA and its
friends in the U.S. government, but also that "We've won the
revolution. It's all over but the litigation. While that drags on,
it's time to start building the new economic models that will replace
what came before" (Barlow 2000).

This underlines a further weakness in the analyses of Barlow and his
fellow EFF Directors, and of Lessig (1999).  Their US-centrism and
inadequate appreciation of international differences lead them to
assume that the dynamic balance that exists in the U.S.A. will be
reflected in other countries.  But the U.S. Constitution includes a
Bill of Rights, whereas few other countries have such a substantial
countervailing power against the inevitable drift towards
authoritarianism.  Whether or not they will work in the U.S.A., the
solutions that Barlow and Lessig champion simply will not work for
the perhaps 50% of Internet activity, and 95% of the world's
population, that live elsewhere.

5. Conclusions and Prognosis

Idealistically, one could call for a wider appreciation of the need
for balance between the interests of law and order, on the one hand,
and freedoms on the other;  and for non-deployment of excessively
threatening technologies.  Unfortunately, the power lies
substantially in the hands of the law-and-order community, and
balance is a forlorn hope, unless very substantial countervailing
power is somehow brought to bear.

Another means of achieving balance would be to invest significant
resources into the development of tools for pseudonymity.  Such
technology would lie between anonymity (and its scope for
accountability to be undermined by the absence of any means of
applying retribution to guilty parties), and ubiquitous
identification (which renders everyone's behaviour subject to
transparency, and their locations subject to real-time discovery,
resulting in the chilling of non-conformist behaviour).
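
One plausible construction, sketched below purely for illustration
(the escrow arrangement and all names are hypothetical): a keyed hash
gives each person a stable pseudonym under which reputation and
accountability can accrue, while linking the pseudonym back to the
person requires the key, held by a trusted third party and released
only under due process:

    import hashlib
    import hmac

    ESCROW_KEY = b"held by a trusted third party, not by the service"

    def pseudonym(real_identity):
        digest = hmac.new(ESCROW_KEY, real_identity.encode("utf-8"),
                          hashlib.sha256)
        return digest.hexdigest()[:16]

    print(pseudonym("Jane Citizen"))  # stable, so reputation can accrue
    print(pseudonym("Jane Citizen"))  # the same handle every time
    print(pseudonym("John Citizen"))  # a different person, a different handle
    # Without the key, the pseudonym cannot be linked to the person;
    # with it, a suspected identity can be confirmed.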

Far from the halcyon days that John Perry Barlow foresaw, there
exists a real prospect of a new Dark Ages, arising from a successful
counter-reformation driven by partnerships of governments and
corporations, spear-headed by the self-appointed policeman of the
free world.

This first decade of the new century will see critical battles fought
between the old guard and the new.  Networks are only one of the
battlegrounds.  Technologies of repression are rampant, including the
embedment of person location and tracking in consumer-carried
devices, the imposition of general-purpose identification schemes,
and the uncontrolled application of biometrics.  Globalisation is
seeing the wane of the power of nation-states, and increasing
dominance of the polity by corporations.

Will the forces of authority, sponsored by corporations, win these
battles?  If so, the future has already been largely foretold by the
anti-utopian novelists (in particular, Gibson 1984), and their
accounts merely need to be updated.

Will consumers and citizens create countervailing power against the
authority of the government-corporation complex?  If so, they'd
better hurry.

One hope for freedom may be the conflicts of interest within and
between governments, within and between corporations, and between
governments and corporations.  People may need to encourage the
ongoing divisions among the powerful players, in order to provide
sufficient scope for freedoms to survive.

A key theatre will be the information infrastructure.  If the
architecture is adapted to support surveillance, then
authoritarianism has largely won the war.  Resistance is being led by
people with technical capabilities (often referred to, disparagingly,
as 'nerds');  but the strong likelihood is that the ravages of the
authorities will stimulate more effective coalitions between
politically-minded and technically-oriented groups.  Our desire to
live in interesting times seems very likely to be fulfilled.

--
Roger Clarke              http://www.anu.edu.au/people/Roger.Clarke/

Xamax Consultancy Pty Ltd, 78 Sidaway St, Chapman ACT 2611 AUSTRALIA
                 Tel: +61 2 6288 1472, and 6288 6916
mailto:Roger.Clarke@xamax.com.au            http://www.xamax.com.au/

Visiting Fellow                       Department of Computer Science
The Australian National University     Canberra  ACT  0200 AUSTRALIA
Information Sciences Building Room 211       Tel:  +61  2  6125 3666



