Antonio Machuco Rosa on Fri, 29 Nov 2002 21:28:41 +0100 (CET)



<nettime> Memory and Control in the Architecture of the Internet



via geert lovink <geert@xs4all.nl>

(translation of a lecture held at the Oeiras conference on surveillance
(near Lisbon), September 28, 2002. Posted to nettime with permission of
the author. /geert)

Memory and Control in the Architecture of the Internet

António Machuco Rosa

The historical origins of the Internet.

From packet switching to TCP/IP

The theoretical frame of classic rationality involves a quite precise
concept of control, whose characteristics were initially worked out by
physics and later became a paradigm for various disciplines of knowledge.
One of these characteristics consists in the construction of models that
assure total predictability of future events, which is made possible when a
global representation of a system can be made local, that is, when the
system's state can be calculated at each point of its evolution. In its
strict formulations, this situation is called the principle of determinism,
which effectively posits a kind of omniscient state: the local structure is
deduced a priori from the global structure. In artificial systems, in
technology, the principle of determinism is combined with the principles of
global planning and of modular and linear decomposition of the technological
artefact, so that the interaction between its parts is as scarce as
possible, allowing its behaviour to be predicted. These strategies of
technological design can be designated as centred, and they involve certain
central entities that are globally responsible for the functioning of the
artefact (cf. Machuco Rosa, 2002, for a more detailed analysis). As we shall
see, a clear example of this design strategy is given by the modern digital
computer, based on the existence of a CPU, a central clock and a clear
separation between data and instructions.

  If the architecture of the modern digital computer represents a centred
design strategy, it was an opposite design principle, non-centred design,
that historically lay at the origin of modern computer networks, and in
particular of the very first one, Arpanet. This kind of design is partly
captured by what has been designated self-organization: a principle based on
the idea that the elements of a system interact only locally, resulting in
an emergent order whose formation does not depend on any planning and
globally organizing centre. In the case of the first computer network, we
can distinguish two levels within its architecture, a distinction that is
crucial for the general problem of control and surveillance. From the
historical point of view, these two levels can be identified with the two
research teams that contributed to the original architecture of computer
networks, both of which were influenced by the cybernetic movement (cf.
Dupuy, 1994, for a history of the cybernetic movement). In cybernetics one
finds a tension between the centred and the non-centred principles of
design, but it is clear that cybernetics was the first to theorize what we
call non-centred principles.

  The first of the two research teams mentioned is practically reduced to
one person, Paul Baran. Working for RAND in the early sixties, Baran created
packet-switching, which would become the algorithm used for the circulation
of information in computer networks. It is a local and non-centred
algorithm. As Baran puts it (Baran, 1964), it is an algorithm that allows a
switching network to implement "a policy of self-learning (...) without the
need for a vulnerable central point of control". In inventing
packet-switching, Baran was guided by the general classification of systems:
he contrasted the traditional principles of switching, based on the
existence of a central switching point through which all information must
pass, with the principles of packet switching.

  Packet-switching is an algorithm according to which the network nodes send
information to nearby nodes, and so on, until it reaches its final
destination, without any node gaining knowledge of the network's global
structure. Without going into details (cf. Baran, 1964), this is possible
because a node is able to direct information thanks to the knowledge it
possesses about the structure of a fragment of the network. This knowledge
was generated by messages previously sent by other nodes, to which it
responds by sending information back. In other terms, if a given node A
receives a message from an intermediary node B that originated in a node C,
this path is memorized; if A then receives a message from a node D destined
for node C, it knows that it should send it to B so that B can send it on to
C. Apart from that, packet-switching breaks the message into several blocks:
it is fragmented and does not travel as a whole. In this conception,
therefore, information is transmitted locally, in a distributed and
redundant way. Each node is equal to any other, without any central point
that controls traffic according to origin and destination with global
knowledge of the network.
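
To make the route-learning mechanism concrete, here is a minimal sketch in
Python. It is only an illustration of the idea described above, not Baran's
actual 1964 algorithm: the node names, the packet format and the flooding
fallback are invented for the example, and message fragmentation is not
modelled.

    # Each node keeps a table mapping a source it has heard from to the
    # neighbour that delivered the message, and reuses that table to
    # forward packets addressed back to that source.

    class Node:
        def __init__(self, name):
            self.name = name
            self.neighbours = {}   # name -> Node; only local links are known
            self.routes = {}       # destination name -> neighbour name

        def connect(self, other):
            self.neighbours[other.name] = other
            other.neighbours[self.name] = self

        def receive(self, packet, via=None):
            src, dst, payload = packet
            if via is not None:
                # Learn: messages from 'src' arrived through 'via', so
                # packets addressed to 'src' can be sent back the same way.
                self.routes[src] = via
            if dst == self.name:
                print(f"{self.name} delivered: {payload!r}")
                return
            # Forward using the learned fragment of the topology; otherwise
            # pass to all neighbours (the example topology has no cycles,
            # so this naive fallback is safe).
            hops = [self.routes[dst]] if dst in self.routes else list(self.neighbours)
            for hop in hops:
                if hop != via:
                    self.neighbours[hop].receive(packet, via=self.name)

    # Usage: a message from C to A teaches A and B a route back to C;
    # a later message from D to C then follows the learned path.
    a, b, c, d = Node("A"), Node("B"), Node("C"), Node("D")
    a.connect(b); b.connect(c); a.connect(d)
    c.receive(("C", "A", "hello"))   # A learns that C is reachable via B
    d.receive(("D", "C", "reply"))   # forwarded D -> A -> B -> C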

  We will later see the importance that the kind of ideas implemented in
packet-switching had for the problem of surveillance on the Internet, as
well as the importance they have currently regained. For the moment, it
matters to identify the historical origins of computer networks more
precisely, showing that their effective implementation superimposed on the
algorithm of information transmission an architecture that, perhaps
necessarily, opens up the possibility of partly eliminating its distributed
nature. In fact it wasn't Baran but a team of researchers from ARPA (the
defence-related government research agency) who effectively implemented the
first computer network, Arpanet. Among them, J. C. R. Licklider and R.
Taylor stood out at an early stage, both also strongly influenced by
cybernetics (cf. Machuco Rosa, 1998). The main reason that led to the
implementation of Arpanet had nothing to do with a supposed Soviet nuclear
attack, but with the need to save money (cf. Hafner & Lyon, 1996): at a time
when computation was financially very expensive, the idea arose of sharing
the computational resources of one machine with others, which means that the
concept of shared computation was at the origin of Arpanet. This is,
however, a distributed computation that operates on a different level from
packet-switching: packet-switching deals with the transmission of
information, whereas shared computation deals with the distributed sharing
of computational memory. In order to fulfil this objective, the network
architecture drew a separation between computers working as switching points
(the current routers) and host computers (approximately the current
servers). This architecture was also conceived according to non-centred
principles, for not only the routers but also all the hosts were alike (peer
computers), with none performing centralizing tasks. However, as we will
see, this type of architecture involves the possibility of evolving in a
different direction from what was initially intended, which will have
important consequences from the point of view of control.

  It is true that, in the early stages of the development of computer
networks, control was minimal, which becomes clear when one identifies the
emergence of what is now known as the Internet. The Internet is, in fact, a
network of networks, interconnected through the adoption of common
communication protocols. To begin with, none of the sub-networks of the
meta-network Internet had any dominant position in relation to any other. In
fact, many of these sub-networks had specific architectures that prevented
them from communicating, until a capital event took place: the creation, in
1973, and later adoption of the standard TCP protocol, later called TCP/IP
(Transmission Control Protocol / Internet Protocol). This is a remarkable
creation on several levels.

  Firstly, because it is a public and open standard. Public in the sense
that it is not a proprietary standard, owned by the state or by any
corporation or individual. In L. Lessig's words (Lessig, 2001), TCP/IP
belongs to the commons, to that public space which precedes the
public/private distinction in the State/private sense. It is also public in
the sense that its source code is open, accessible to anyone. As for its
function, TCP/IP is responsible for the reliable transmission of packets,
according to packet-switching, for it is based, as we saw, on a principle of
distributed circulation of information.

Secondly, TCP/IP's main characteristic is that it satisfies a principle of
neutrality, also called the end-to-end principle. The adoption of this
principle was undoubtedly greatly responsible for the explosive growth of
the Internet, and it means that TCP/IP is neutral, indifferent in relation
to any application that runs over it. It cannot distinguish between them and
is totally blind to the specific contents of each application: for example,
it does not tell an e-mail from a web page, an MP3 file or any other
application travelling across the network of networks, the Internet. This
protocol only sends the packets from end to end, in a reliable way,
according to the origin and destination IP addresses of the information.
Distinguishing packets according to their content is left to the network's
nodes, the servers and the computers in general where the applications
reside. From this comes a fundamental consequence: no device of control and
surveillance will be exercised at the TCP/IP level. This fact must not hide
an important point: when the designers of TCP/IP conceived this protocol,
there was nothing that said that it should process information in a
distributed way, nor that it should fully respect the principle of
neutrality. The designers could have introduced specifications that violated
this principle, for instance by distinguishing file formats, that is, by
introducing routines in the software that could filter a certain kind of
information. For this it would have been enough for the circuit not to be
neutral and to make the access points comply with certain specifications. If
this didn't happen, it was because the network's pioneers didn't want it to.
A standard such as TCP/IP illustrates a major aspect of information
technologies: from the point of view of the values they involve (privacy and
the possibility of surveillance, for instance), these technologies don't
possess any essence; they don't convey, in any essential way, this or that
opposed idea. The ideas they may effectively come to convey depend on the
architecture that we decide to impose on the source code that rules them
(Lessig, 1999). What is written in the source code (filtering or not
filtering information, for example) is a free decision, conveying values,
that does not depend on any technological automatism. TCP/IP's architecture
was created in such a way as to assure the effective intercommunication of
several computer networks, which had as a non-intentional consequence the
fact that the possibility of control is, at its level, minimal. However, as
we shall see, this doesn't mean that control cannot be exercised at another
level, in the network's nodes. The level at which control operates can be
identified more clearly if we quickly go over a set of ideas which hold that
the Internet intrinsically carries a liberating value, preventing, also
intrinsically, any form of control and surveillance.
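
The end-to-end principle can be illustrated with a small sketch in Python
(purely illustrative, not actual TCP/IP code): the forwarding function below
decides only on the destination address and never inspects the payload,
while it is the hypothetical endpoint dispatcher that distinguishes a web
request from mail.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str          # origin IP address
        dst: str          # destination IP address
        payload: bytes    # opaque to the transport below

    def forward(packet: Packet, routing_table: dict) -> str:
        """A neutral forwarder: the decision depends only on packet.dst.
        Nothing here inspects packet.payload, so mail, web pages and MP3
        files are indistinguishable at this level."""
        return routing_table[packet.dst]

    def endpoint_dispatch(packet: Packet) -> str:
        """The endpoint is where content is interpreted (and where filtering
        or surveillance could be introduced)."""
        if packet.payload.startswith(b"GET "):
            return "hand to web server"
        if packet.payload.startswith(b"MAIL FROM"):
            return "hand to mail server"
        return "hand to some other application"

    table = {"10.0.0.2": "next hop: router B"}
    p = Packet("10.0.0.1", "10.0.0.2", b"GET /index.html HTTP/1.0")
    print(forward(p, table))        # routing ignores content
    print(endpoint_dispatch(p))     # the endpoint distinguishes content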



Electronic Frontier Foundation and surveillance

These types of ideas were often diffused during the past decade by, among
others, one of the first and main organizations of online activists, the EFF
(Electronic Frontier Foundation). In general terms the EFF defends the model
of an Open Platform, that is, "a global communication infrastructure
offering indiscriminate access, based on open, private standards, free from
asphyxiating regulation" (in www.eff.org).

Open means that the source code should be publicly accessible, and private
is not equivalent to proprietary, but rather public in the previously
mentioned sense of the commons. TCP/IP has these characteristics, which
assure a certain visibility and transparency that undoubtedly (and we will
come back to this point) can prevent certain camouflaged forms of control
and surveillance. However, this point of view, which brings human freedom
and social regulation into play, is then prolonged into a technological
automatism. As one of the founding members of the EFF, John Gilmore, puts
it: "the electronic world, conceived to resist a nuclear attack, can be
equally indifferent to government regulation. Due to its global range and
its decentralized design it cannot be controlled by any police."

There are two arguments here. The first states that the Internet cannot be
watched because it is a global network not based on physical interactions.
This argument has some credit, but one surely cannot conclude that the
Internet, because it is global, cannot be watched in any way. The second
argument interests us more: it states that the base architecture of the
Internet makes the network intrinsically impossible to watch. More
precisely, Gilmore implicitly refers to the architecture of TCP/IP and of
the non-centred algorithm that this protocol implements, packet-switching.
Control and surveillance would become impossible due to the intrinsic nature
of the technology in question. The idea is that a non-centred structure
would assure anonymity on its own.

However, it is false to claim that the Internet has some supposed essence
that makes it uncontrollable and impossible to patrol. To see this more
clearly, it is necessary to mention some aspects of the architecture of the
computer meta-network. TCP/IP is a neutral protocol in the sense described
in the previous section. But this doesn't imply that the protocols situated
above it must necessarily be so. They can perfectly well introduce
specifications that allow surveillance. Let us remember that above TCP/IP
other protocols are running, such as MIME (Multipurpose Internet Mail
Extensions), SMTP (Simple Mail Transfer Protocol), FTP (File Transfer
Protocol) and HTTP (Hypertext Transfer Protocol). These protocols share the
original vision of the Internet in the sense that they are public standards.
But nothing prevents proposals for their extension from incorporating
surveillance and control mechanisms, or new protocols from incorporating
them from scratch. A good example is PICS[1]. PICS aims to be a standard for
the Internet, and its objective is to allow an ordered filtering and
classification of information so as to block access to certain sites
(pornographic sites were the project's initial target). But this same
technique can perfectly well be used to filter any kind of information. PICS
is therefore a good example of source code with the ambition to become a
standard, and it illustrates the fact that information technologies in
general involve values, while being indeterminate in relation to specific
values. In this way, as Lessig (Lessig, 1999) has pointed out, there is
nothing "essential" or "necessary" about the Internet. There is no
technological automatism that guarantees the absence of control.
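
As a purely hypothetical sketch of the kind of mechanism such a standard
makes possible (this is not the actual PICS label format or rating system;
the site names, categories and policy are invented), a label-based filter
running at the application level might look like this:

    # Content carries ratings; a local policy decides what gets through.
    # The same mechanism could filter any kind of information, not only
    # the categories shown here.

    SITE_LABELS = {
        "example-news.org":  {"violence": 1, "politics": 3},
        "example-adult.com": {"nudity": 4},
    }

    POLICY = {"nudity": 2, "violence": 3}   # maximum tolerated rating

    def allowed(host: str) -> bool:
        labels = SITE_LABELS.get(host, {})
        return all(labels.get(cat, 0) <= limit for cat, limit in POLICY.items())

    for host in ("example-news.org", "example-adult.com", "unlabelled.example"):
        print(host, "->", "allow" if allowed(host) else "block")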

A protocol such as PICS does not fully respect the principle of neutrality,
and one could perfectly well imagine some other protocol that made all the
nodes in the network comply with its specifications. This means that
surveillance in the network will tend to operate at the level of the nodes
where the applications reside, these being the ones that introduce the
specifications that assure control. The result is a greater distinction
between the two types of computers present in Arpanet's initial
architecture: the distinction between routers and hosts. As we shall see,
this distinction has not ceased to grow, but for the moment what matters is
to point out that it introduces the distinction between the circulation and
the memorization of information. If control can hardly be exercised at the
level of circulation, it can naturally occur at the level of the Internet's
memory structure, and one can argue that, while circulation is distributed,
the architecture of memory has evolved towards more and more centralized
strategies. In this context, it is useful to briefly recall some
characteristics of memorization devices.



Memory and information storage

The structure of human memory is still largely unknown. For a long time,
however, modular conceptions of memory were advanced (cf. Machuco Rosa,
2002, Chapter I). Caricaturing a bit, those conceptions hold that memorized
contents are entirely located in relatively precise modules, and that access
to these contents is carried out by a central entity, a homunculus, which
would hold the physical address of that place and thus update the memory
stored in a "facts base". Even as a caricature, this structure is not too
different from the architecture of the modern digital computer: there is a
RAM memory, where data and instructions are located, and a CPU that, among
other things, locates these data by physical address and then executes them.
Expanded, this structure can be found in certain programs of classic
Artificial Intelligence (Machuco Rosa, 2002, Chapter I), where contents are
also physically located and refer, through pointers, to other addresses
where other contents also reside in memory. Applied to great masses of data,
this structure allows the building of potentially huge relational databases,
with an enormous capacity to inventory, discriminate and control
information. And it does so automatically. It seems to be part of the nature
of digital information to reside in memory and be related automatically.

  With the due modifications, the structure just mentioned was partly
incorporated into the architecture of computer networks. Here, too,
information is constantly stored in servers. It is usually even stored
automatically in the cache of the server through which the Internet is
accessed and/or in the cache of the PC itself. This memory architecture
involves, as its main possibility, the automatic relating of stored
information, and this possibility has effectively been implemented over and
over. In truth, it is perfectly well known that many of the main
applications for computer networks have precisely the function of
correlating data. This is what happens with search engines or with the
numerous programs used by corporations to draw up the user/client's profile.
Identification is also physical, or more precisely, each machine can be
accessed and identified through its IP number. With adequate surveillance of
an IP address, the content of the information may not only be known, but
also sent to some central entity. Certain central entities, probably using
automatic mechanisms, could equally examine the content of e-mail. As
mentioned before, there is nothing that, in principle, prevents certain
standards from incorporating the most varied types of surveillance devices.
Therefore, if control and surveillance are not carried out at the level of
information circulation, they can perfectly well be carried out in the
network's nodes: either on the servers or on an individual user's own PC.
Information is memorized there more or less temporarily and, even remotely,
it is possible to keep an eye on many of the actions performed by a user.
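
A minimal sketch of what "automatically relating stored information" can
mean in practice: two independently stored logs, joined on an IP address,
yield a profile. The logs, field names and addresses below are invented for
illustration.

    web_log = [
        {"ip": "203.0.113.7", "url": "/prices/camera"},
        {"ip": "203.0.113.7", "url": "/reviews/camera"},
        {"ip": "198.51.100.2", "url": "/news"},
    ]
    mail_log = [
        {"ip": "203.0.113.7", "account": "alice@example.org"},
    ]

    def build_profiles(web, mail):
        profiles = {}
        for entry in mail:                    # identify the machine
            profiles[entry["ip"]] = {"account": entry["account"], "pages": []}
        for entry in web:                     # correlate its browsing
            if entry["ip"] in profiles:
                profiles[entry["ip"]]["pages"].append(entry["url"])
        return profiles

    print(build_profiles(web_log, mail_log))
    # {'203.0.113.7': {'account': 'alice@example.org',
    #                  'pages': ['/prices/camera', '/reviews/camera']}}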

Therefore, as has been pointed out many times, the relational nature of
information allows surveillance and control activities unimaginable with any
other technology. This possibility is largely rooted in the architecture of
memory that has just been described and that is typical of information. The
question that then arises is whether there are mechanisms, architectures and
policies that allow one to counter the possibilities of surveillance
involved in the concept of digital information. There are at least three
such mechanisms and architectures: cryptography, open source code and
peer-to-peer computation.


Cryptography

The base concept of the cryptography of digital information, the concept of
the public key, appeared amidst the opposition between centred and
non-centred systems. It was created in 1975 by W. Diffie, motivated by the
aim of ensuring the effective privacy of computer data. At least outside the
military sector, the protection of data was at the time assured by passwords
managed by the administrators of computer systems. As S. Levy has said, this
type of protection represents a top-down approach to the problem of data
privacy [2]. The non-centred approach would instead consist in two
individuals being able to communicate without the need for an intermediary,
and without the existence of a common private key that they would have to
transmit to each other. This approach has become more and more widespread.

The concept of the public key is based on the existence of two keys
possessed by each individual: a private key and a public key, the first
remaining secret, while the second can be accessed by anyone. Without going
into the underlying mathematical algorithms, the idea is that an individual
can send enciphered information to another using the recipient's public key,
while the recipient is the only one able to decipher the message, using the
corresponding private key. Thus, if A wants to send an enciphered message to
B, he will look up B's public key and use it to encipher the message. On
receiving the message, B uses his private key to decipher it, through a
mathematical process that establishes a correspondence between B's public
and private keys and ensures that only he can decipher any message
enciphered with his public key. Obviously, if B wants to reply to A, the
process is reversed: B looks up A's public key and uses it to encipher the
message, which will be deciphered by A with the private key corresponding to
his public key. Based on this idea, it is possible to encode messages with
an extremely high level of inviolability, the question being whether it is
possible to distribute software adequate to that level of security[3]. In
addition, the concept of digital certification has recently become
widespread, which aims to assure that it was in fact A (and not someone
else) who used B's public key in order to send him a message.
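
The exchange just described can be made concrete with the third-party Python
"cryptography" package (assuming it is installed; this is a sketch of the
general public-key scheme described above, not of any particular piece of
software). B publishes a public key, A enciphers with it, and only B's
private key can decipher. For B to reply, the roles are simply reversed.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # B generates a key pair and makes the public half available to anyone.
    b_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    b_public_key = b_private_key.public_key()

    # A enciphers a message with B's public key...
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = b_public_key.encrypt(b"meet at noon", oaep)

    # ...and only B, holding the corresponding private key, can decipher it.
    plaintext = b_private_key.decrypt(ciphertext, oaep)
    assert plaintext == b"meet at noon"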

The use of this kind of technology as an ideological flag generated a
movement within the broader movement of cyber culture, the so-called
cypherpunks, who have one of their leading representatives in Tim May,
author of The Crypto Anarchist Manifesto and of BlackNet. In his
perspective, every individual's access to sophisticated instruments of
cryptography shouldn't only serve the purpose of ensuring the privacy of the
data people constantly leave behind as they move through electronic
networks. On the contrary, cryptography should have a wider social effect;
it should "change in a fundamental way the nature of corporations and
government interference in economic transactions" (May, 1996). Still
according to May, crypto-anarchy is inevitable; even if there is no chance
of it being implemented by politicians, "it will be implemented by
technology itself", which "is already happening". In fact, political powers
have gradually been authorizing the use of sophisticated cryptography[4].
But this doesn't mean that ciphering is a technology that automatically
assures total privacy. In fact, as said before, cryptography must be
complemented by digital certificates, which have to be issued by a third
party hired by the individual. On the other hand, proposals suggesting that
individuals be compelled to hand over a copy of their key to a central
entity, the State, are recurrent. In fact, since the utopia of a cybernetic
space completely outside the Nation-State is over, there is no reason why
the State could not formulate such a demand, when similar demands are made
concerning everyday activities. If so, and naturally without minimizing the
importance of cryptography as a protector of privacy, regulation ends up
being situated at the level of the State, traditionally seen as constituting
the frame of a public and democratic space of discussion and decision.



Open Source

The open source movement holds that, in principle, software applications
should be accompanied by their source code, and that this code may be
modified on the condition that the modifications are again made available
within the software's public space. This is, at least, the movement's
philosophy as theorized by its first great defender, Richard Stallman[5]. It
is a thesis whose merits are difficult to evaluate and whose application is
limited by various factors. However, there are two other conceptual issues
raised by the open source movement that seem to be crucial.

  The first illustrates how the development of open source software often
proceeds according to decentralized mechanisms, similar to the ones that
gave rise to some of the Internet's networks. The Linux operating system is
perhaps the best-known example. It began to be developed in 1991 by Linus
Torvalds, and rapidly first hundreds and then thousands of programmers all
around the world, using the Internet, started cooperating in writing its
code. The story of the project was well depicted by one of the main mentors
of the open source movement, Eric Raymond (Raymond, 1999), who pointed out
the way the Linux system has acquired more and more functions and stability
in a process he compares to a "bazaar cacophony", that is, through the
efforts of a very large number of people with only a very small centre of
coordination. It is a programming strategy opposed to that of large programs
with proprietary source code, and one that, apparently by miracle, not only
works but produces extremely robust systems.

The public nature of source code raises another conceptual question, which
goes beyond the supposed efficiency gained by non-centred design strategies.
It concerns the issue of standards, which gives a new perspective on the
problem of surveillance on computer networks.

The decisive importance of standards in information technologies has been
clear for some time, but the recent Microsoft/USA case has stressed it even
more. We have already pointed out the importance that the existence of open
standards or protocols had for the evolution of the Internet. On the other
hand, it is known that, for instance, the PC has an operating system,
Windows, that is not an open standard (its source code is not public) and
has acquired a position of monopoly. The reason for this is also known,
having been systematically theorized for the first time by Brian Arthur
(Arthur, 1994), who showed that information technologies are characterized
by increasing returns to scale. Without going into technical details, the
main reason for the existence of increasing returns lies in the fact that
information technologies often exhibit strong network externalities, which
means that the bigger the number of users of a certain product, platform,
etc., the greater the incentive for additional users to appear, in a
movement that leads the product in question to a dominant (monopoly) market
position. Summing up, in the case of Windows, this mechanism of positive
feedback consists in the following: the larger the number of programmers
developing applications specifically for Windows, the bigger the incentive
for users to join that platform, which in turn generates new incentives for
the development of new applications, which attracts more users, and so
forth, with the results we all know.
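
A toy simulation can make this feedback loop visible. The sketch below is
not Arthur's formal model; the numbers and the decision rule are invented.
It shows how, once the installed-base advantage outweighs individual
preferences, every newcomer joins the leading platform and the market locks
in.

    import random

    def lock_in(steps=5000, network_benefit=0.05, seed=2):
        random.seed(seed)
        users = {"A": 0, "B": 0}
        for _ in range(steps):
            pref_a = random.gauss(0, 1)     # idiosyncratic taste for A vs B
            score_a = pref_a + network_benefit * users["A"]
            score_b = -pref_a + network_benefit * users["B"]
            users["A" if score_a > score_b else "B"] += 1
        return users

    # With most seeds, one platform ends up with nearly all of the users.
    print(lock_in())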

One can argue, even without being an advocate of making every kind of source
code public, that programs working as standards, over which numerous other
applications will run, that is, networked programs subject to strong network
externalities, should have special treatment: they should be public, just as
most standards that exist in the infrastructures of the physical world (from
roads to electrical sockets) are public. One reason for this point of view
is that public standards in information technologies favour innovation in a
decisive way, unlike what tends to happen with private standards. This was
precisely what happened with the Internet, based on public open standards,
and one can argue that attempts, like Microsoft's, to impose standards on
computer networks will lead to a reduction in the level of innovation
(Lessig, 2001).

But there is another argument in favour of open standards. It concerns,
precisely, privacy and surveillance, and it confirms the importance of the
open standard movement's ideas. In fact, there is an increasing number of
software mechanisms that make the Internet extremely vulnerable to
strategies of control, invasion of privacy, limitation of freedom of speech,
etc. Routines that exist in common programs like browsers filter, catalogue
and select information, often in a way invisible to the user. Thus, the
advantage of open source code consists in making those mechanisms visible.
This means that, in open programs, any mechanism of control will be entirely
exposed, and since one has the source code it can immediately be modified so
as to eliminate the undesired programming module. Public standards share a
certain ideal of transparency that has, precisely, the effect of assuring
privacy and individual autonomy. In other terms, what is public and shared
by all cannot harm each one's freedom and individuality. So, the concept of
open source code implies that the regulation of surveillance is implemented
within the public space of the commons.



WWW and P2P

In a previous section we saw that the concept of distributed computation was
present in the minds of the early computer network architects. We also saw
how the networks' evolution led to the appearance of various forms of
centralization. This is, in particular, the case of the World Wide Web. The
WWW is a network based on the asymmetric and tendentially centred
server/client mechanism. Like all others, this model was chosen and defined
as a characteristic of the network's architecture, even though this ran
contrary to the original intentions of its creator, Tim Berners-Lee
(Berners-Lee, 2000). The server/client model is in fact a centralized model,
locally centralized, in the sense that any request for information can be
seen as a relation between a server and a client, a relation that exists
independently of all other requests.

Locally, the server/client model is a relation between a centre and a
periphery of clients of that centre. It was imposed from the outside as an
architectural design choice, but it implies, through the concept of the
link, a global consequence that was not explicitly imposed on the
architecture's design. More exactly, it unintentionally involves the
appearance of a centre that is global, not only local, with all the
consequences for control that follow from this. What happens if, instead of
considering every request from a client to a server as independent of all
the others, we analyze the global structure that results from the many local
interactions based on following a link, that is, the structure that results
from the clients' multiple requests to servers? There is a consequence not
intentionally inscribed in the server/client model, which is a new form of
centralization emerging from the link structure of the WWW. In fact it is
possible to demonstrate theoretically (Barabási, Albert and Jeong, 1999),
and to verify empirically, that the WWW displays scale invariance; in other
words, there is a relatively small number of sites pointed to by a vast
number of links, and a vast number of sites pointed to by a small number of
links, a phenomenon whose cause apparently lies in a property of
preferential attachment: new links are more likely to point to sites to
which many other links already point. Therefore, sites emerge that will tend
to become larger and larger (more paths leading to them), and the bigger
they become, the bigger they will get. Information will thus tend to
gravitate towards the more visible sites. It is a mechanism of positive
feedback, with the inevitable consequence of the existence of a small number
of sites with an enormous density of connections and an enormous number of
sites with a weak density of connections. One must not underestimate the
importance of this fact, not only because of the existence of central points
that make the net extremely vulnerable to attacks that may disrupt it, but
also because of the possibility of the appearance of gigantic centres of
information storage, with the consequences for control and surveillance that
follow[6]. The existence of these centres was not imposed as a design
principle of the system; they emerged. We will see how this kind of
emergence reappears in the peer-to-peer (P2P) computation models that we now
present.
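
The emergence of such hubs can be reproduced with a short simulation in the
spirit of the Barabási-Albert model (simplified here; the parameters are
arbitrary): each new page links to an existing page chosen with probability
proportional to the links that page already has.

    import random
    from collections import Counter

    def grow_web(pages=10_000, seed=0):
        random.seed(seed)
        in_links = [1, 1]          # two seed pages, one incoming link each
        targets = [0, 1]           # page i appears in_links[i] times here
        for new_page in range(2, pages):
            chosen = random.choice(targets)   # proportional to current in-links
            in_links[chosen] += 1
            targets.append(chosen)
            in_links.append(1)
            targets.append(new_page)
        return in_links

    links = grow_web()
    top = Counter({i: k for i, k in enumerate(links)}).most_common(5)
    print("five most-linked pages:", top)
    print("pages with a single incoming link:", sum(1 for k in links if k == 1))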

  Around the late nineties, the computation model designated P2P started to
get a lot of attention. It restores the initial idea of distributed
computation defended by some of the Arpanet founders, as well as the concept
underlying packet-switching. Reasons like the unused computational
capabilities of PCs were among the factors that led to the development of
P2P[7]. But other reasons are related to a new attempt to counterbalance the
potential for control and surveillance that computer networks have, in
particular when they are based on the server/client model. These
socio-political motives were the guiding inspiration of one of the several
P2P projects, Freenet, created by Ian Clarke
(http://freenet.sourceforge.net/).

The main objective of the adaptive network Freenet is to assure total
anonymity, trying to reduce to a minimum the possibilities of surveillance
that are implicit in the architecture of the computer networks that make up
the Internet. Its guiding idea is similar to the one defended by the EFF:
the idea that non-centred technologies assure anonymity and privacy.
However, and contrary to the opinion sometimes sustained by members of the
EFF, Ian Clarke points out that non-centred technologies do not characterize
the architecture of the current Internet; therefore it is necessary to
create a new network that definitively satisfies the non-centred design
principles, breaking away completely from the server/client model. This new
network is Freenet, and in it each computer is a peer: all computers are
equal, none centralizes the network, either at the level of circulation or
at the level of information storage. Besides, the principle of locality is
respected, for each of the network's computers/nodes only has information
concerning the network nodes in its neighbourhood. This is the philosophy of
non-centred systems: each node has only a local vision, so any global
representation of the network is completely beyond its reach. Instead of
qualifying his network as non-centred, Ian Clarke, conveying in this way the
project's ideological motivation, says that Freenet is a "perfect anarchy"
(Clarke, 2000).

Since the network seeks to be an information retrieval device, one could
think that, somehow, there would have to be something similar to a server
from which a client obtains the desired information. On the contrary,
however, the network's architecture imposes that every computer is,
indistinctly and at each moment, both "server" and "client". More
specifically, and without going into technical details, Freenet allows the
retrieval of memorized information in the following way (Clarke, 1999):
computer A initiates a request for information (to which a certain key
corresponds). Owing to its past history of interactions, it sends that
request to one of its neighbouring computers, say B, which, with a certain
degree of probability, has that information. B either possesses the
information in question or it does not. If it does, it sends it to A. If it
doesn't, it sends the request on to one of its own neighbouring computers,
say C, which, with a certain probability, has the requested information. One
should note that the role of client, initially played by A, has then passed
to B, and then to C. Let us suppose that the information is finally found on
a computer D. D then sends the information back through the intermediary
nodes that had previously been crossed (each of them was a "client" in one
direction, and is now a "server") until it reaches A, the original source of
the request.

One characteristic of Freenet's architecture lies in the fact that the
information returning from D to A is stored in cache not only in D (where it
was found) and in A (where it will stay), but also in C and in B. Therefore,
on the one hand, each node is indistinctly a "client" and a "server"; on the
other hand, information is constantly being duplicated in the network, that
is, we are approaching, even if not in the complete sense of the term, an
architecture of distributed information. This absence of any central point,
of any "server", immediately assures a high level of anonymity, since any
request originating in a certain node is replicated by the numerous
intermediary nodes through which it may pass, just as memory is constantly
being multiplied n times, n being the number of intermediary nodes.
Therefore, in principle, it is not possible to identify the actual source of
a request. It is certainly not possible to identify the original source of a
request because, as our illustration shows, it is impossible to determine
whether computer A was originating a request or simply passing on a request
originated by some other node in the network.
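
The request-forwarding and return-path caching just described can be
sketched as follows. This is only an illustration of the behaviour discussed
above, not Freenet's actual protocol: real Freenet routes by key closeness,
limits hops and encrypts what it stores, none of which is modelled here.

    class Peer:
        def __init__(self, name):
            self.name = name
            self.neighbours = []
            self.store = {}                   # key -> data held locally

        def request(self, key, visited=None):
            visited = visited or set()
            visited.add(self.name)
            if key in self.store:             # this peer can answer directly
                return self.store[key]
            for peer in self.neighbours:      # otherwise ask a neighbour
                if peer.name in visited:
                    continue
                data = peer.request(key, visited)
                if data is not None:
                    self.store[key] = data    # cache on the return path
                    return data
            return None

    # A - B - C - D, with the data initially held only by D.
    a, b, c, d = Peer("A"), Peer("B"), Peer("C"), Peer("D")
    a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b, d]
    d.neighbours = [c]
    d.store["key42"] = "some document"

    print(a.request("key42"))                 # found via B and C
    print(sorted(p.name for p in (a, b, c, d) if "key42" in p.store))
    # ['A', 'B', 'C', 'D'] -- the document is now replicated along the path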

  We can now better understand how a computer finds the information in one
of its neighbours. With the propagation of requests and of cached
information through memory, the density of connections grows progressively.
Now, it is a generic result of self-organization theories that, if that
density crosses certain critical thresholds, the probability of finding the
requested information along a relatively short path converges rapidly
towards 1 (see, for instance, Bunde and Havlin, 1996).

This result shows that the required information can be found along a
relatively short path and based only on the network's local structure. The
thing is that this result can be seen from a completely different
perspective, one whose possibility Clarke doesn't seem to have noticed,
which goes against the egalitarian philosophy underlying the project and
which also underlines the typical ambiguity of information technologies.
Obviously the memory stored in the peer computers' caches cannot grow
indefinitely, so Ian Clarke implemented in the network an algorithm that
establishes criteria for the removal of information. Looking closely at that
algorithm, one can verify that it expresses the apparently obvious solution:
the criterion of removal is determined by a principle of popularity. The
most recent request received by a peer computer goes to the top of its
memory, while the previous ones go down one notch. Since every request
carries a key identifying the requested information, the inevitable
consequence is that the most frequent requests, that is, those requesting
the same information, will tend to remain at the top, while the less popular
subjects will be progressively pushed down until their complete exclusion
(no one requests them, therefore they don't show up on the first levels).
This is a mathematical result entirely similar to the one that leads to the
formation of central sites on the WWW. In reality it expresses another
generic result of processes of self-organization in a positive feedback
regime: the bigger the value of a connection, the larger the attraction for
future users, in a process that leads to the disappearance of possible
alternatives. The positive feedback mechanism always involves fixed points
in competition, and the actualization of one of them implies the creation of
a monopoly (be it of ideas or of any other kind) and the exclusion of the
other (Machuco Rosa, 2002).
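
A minimal sketch of this popularity-based removal rule (illustrative only,
not Clarke's actual implementation): the most recently requested key moves
to the top of a fixed-size store, everything else slides down, and whatever
falls off the bottom is evicted. Rarely requested information is
progressively pushed out.

    store, capacity = [], 3          # keys, most recently requested first

    def on_request(store, key, capacity):
        if key in store:
            store.remove(key)
        store.insert(0, key)         # the last request goes to the top
        del store[capacity:]         # whatever drops past the bottom is evicted

    for key in ["news", "music", "news", "niche-report", "news", "music",
                "news", "sports", "news"]:
        on_request(store, key, capacity)

    print(store)   # ['news', 'sports', 'music'] -- 'niche-report' was evicted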

Often, the defenders of non-centred architectural solutions are not
sufficiently aware of the monopolization processes that they can
inadvertently create. None of this detracts from Clarke's arguments
concerning Freenet's guarantee of anonymity (especially if complemented with
cryptography). But it points to the fact that non-centred solutions end up
creating counterproductive effects not intentionally specified in the
system's design principles, effects that can even go against those same
principles. In Freenet's case, the counterproductivity is particularly
clear. Freenet is based on the concept of equal computers, peers, endowed
with only local knowledge. All of them are the same, as well as anonymous,
within the network's global structure. It is just that technologically
assuring that equality and anonymity has the consequence that some of the
network's users are no longer effectively equal, because the network
discriminates against their information. For them the network ceases to have
any value. No central device is necessary for discrimination to occur. On
some networks it can even be a natural process. And even if one tries to
eliminate these counterproductive effects, nothing assures that they won't
reappear in some other form.

Either way, it is correct to state that, in the Freenet network, requests
for information are made in a tendentially distributed way, undoubtedly
breaking away from the server/client model typical of the World Wide Web.
This distributed character is assured by the n-fold multiplication of
information. However, this is not yet a truly distributed network in the
sense of a memory that is fragmented and no longer satisfies the principle
of integral localization. A future task would consist in starting to create
network architectures that definitively break with the principle of storing
information in particular modules (computers): networks that could implement
algorithms truly operating over a distributed memory, and that could make a
distributed data-processing network emerge. This would be to create networks
that stand to current network architectures exactly as the concept of the
neural computer, or artificial neural network, stands to the usual
architectures of the digital computer. Even if this possibility seems very
far away, it is the only one that would drastically eliminate the
possibilities of exterior control (cf. Machuco Rosa, 1999, for an
introduction to the concept of the artificial neural network).


Conclusion: the desire for anonymity

The network of networks, the Internet, was depicted by many as a new
possibility for emancipation. For some, like John Gilmore or Tim May, the
network's architecture as it existed at the beginning of the last decade
would intrinsically assure the values of emancipation. For others, like Ian
Clarke, this architecture would have to be replaced so that the liberating
potential of computer networks could become an effective reality. In both
cases, the idea is that a certain type of technology, non-centred design
technology, would assure anonymity and privacy. These positions involve
ambiguities. They insist so much on the effects generated by technological
automatisms that they forget that technologies are a product of freedom, in
the sense clearly exemplified by information technologies: the values that a
program's source code conveys are the result of a decision, not of any
automatism, and the factors regulating that decision are also not automatic;
rather, they belong to the level of the discussion of ideas and policies. On
the other hand, the error gets worse when one fails to understand that
non-centred technologies, instead of necessarily generating "equality",
possess a life of their own that goes beyond the instructions expressed in
their initial design conditions. This consists in the appearance of what we
have designated counterproductive effects, which, in general, are effects
that go against the philosophy animating the movement that runs from the EFF
to Ian Clarke.


What philosophy is that? Why the incessant worry about problems such as
surveillance? Why insist on the necessity of assuring as ample a level of
anonymity as possible? The answer seems to become clear, at least in the
general terms possible within this conclusion, if one notices that the
movement of cyber culture represents a further step in enhancing the
characteristics of modernity (cf. Machuco Rosa, 1996, for a more detailed
analysis). That is, equality (equality of condition, as a person, not
necessarily material equality) must grow more and more in a world that has
progressively dissolved all the hierarchies once said to be natural. The
Internet will have been seen by some as an extremely powerful tool that
would finally make all exteriorities transcendent to individuals disappear.
Therein lies the project of the search for total anonymity, since one of the
meanings of this kind of anonymity is precisely that surveillance, and
therefore discrimination and the violation of the principle of
individuality, cannot occur. The movement towards equality, as many have
noted, is an unstoppable tendency, but it is highly questionable that it can
be fully accomplished through automatism. In the desire that automatism
should bring equality there seems, finally, to live the tendency to hand
back to an exterior entity, an automatism, precisely what is supposed to be
an immanent task. The devolution towards something that regulates from
outside seems always to reappear; but that the regulator is itself a machine
is something that fills the imaginary of a modernity taken to its extreme.

---


Bibliographic References

Arthur, W. B., (1987), Self-Reinforcing Mechanisms in Economics, in The
Economy as an Evolving Complex System, P. Anderson et al. (eds),
Addison-Wesley, Redwood, pp. 9-32.

Arthur, W. B., (1994), Increasing Returns and Path Dependence in the
Economy, University of Michigan Press, Ann Arbor.

Barabási, A.-L., Albert, R., Jeong, H., (1999), Mean-field theory for
scale-free random networks, Physica A, 272, pp. 173-187.

Baran, P., (1964), Introduction to Distributed Communications Networks,
RM-3420-PR, August, in: http://www.rand.org/publications/RM/baran.list.html.

Berners-Lee, T., (2000), Weaving the Web: The Original Design and Ultimate
Destiny of the World Wide Web, HarperBusiness, New York.

Bunde, A., Havlin, S., (eds), (1996), Fractals and Disordered Systems, 
Springer, Berlin.

Clarke, I., (1999), A Distributed Decentralised Information Storage and
Retrieval System, Division of Informatics, University of Edinburgh, in:
http://freenet.sourceforge.net.

Clarke, I., (2000), The Freenet Project - Rewiring the Internet, in:
http://freenet.sourceforge.net.

Hafner, K., & Lyon, M., (1996), Where Wizards Stay Up Late, Simon and
Schuster, New York.

Lessig, L., (1999), Code and Other Laws of Cyberspace, Basic Books, New York.

Lessig, L., (2001), The Future of Ideas, Random House, New York.

Levy, S., (1993), Crypto Rebels, in P. Ludlow (ed.), High Noon on the
Electronic Frontier, MIT Press, Cambridge, 1996.

Machuco Rosa, A.,  (1996) Ciência, Tecnologia e Ideologia Social, E. U. 
Lusófonas, Lisboa.

Machuco Rosa, A., (1998), Internet- Uma História, E.U. Lusófonas, Lisboa.

Machuco Rosa, A., (1999), Tecnologias da Informação - Do Centrado ao 
Acentrado , Revista de Comunicação e Linguagens, 25, pp. 193-210.

Machuco Rosa, A., (2002a), Dos Mecanismos clássicos de controlo às redes
complexas, in Crítica das Ligações na Era da Técnica, J.B. Miranda and M.
T. Cruz (org.), Tropismos, Lisboa, pp. 133-153.

Machuco Rosa, A., (2002b), Dos Sistemas Centrados aos Sistemas Acentrados - 
Modelos em Ciências Cognitivas, Teoria Social e Novas Tecnologias da 
Informação, Vega, Lisboa.

Machuco Rosa, A., (2002c), Redes e Imitação , In A Cultura das Redes, M.L. 
Marcos e J.B. Miranda (org.), Revista de Comunicação e Linguagens, 2002, 
no. extra, pp. 93-114.

May, T., (1996), Introduction to BlackNet, in P. Ludlow (ed.), High Noon on
the Electronic Frontier, MIT Press, Cambridge, 1996.

Stallman, R., (1992), Why Software Should Not Have Owners, in:
http://www.stallman.org/.


Notes:

[1] Concerning PICS, see http://www.w3.org. The W3C is a non-profit
consortium that proposes and develops standards for the World Wide Web.

[2] S. Levy, Crypto Rebels, in P. Ludlow (ed.), High Noon on the Electronic
Frontier, MIT Press, Cambridge, 1996, p. 186.

[3] Cryptographic methods considered safe in practice are achieved with keys 128 bits long.

[4] See http://www.privacyinternational.org for more details.

[5] Cf. Richard Stallman, The GNU Project, at http://www.gnu.org.

[6] As these lines were being written, a spectacular confirmation of this
possibility came with the Chinese government's banning of the Google search
engine (www.google.com). As in the rest of the world, Google had been
becoming the most popular search engine in China, also because it stores in
memory the pages of the Chinese servers that are blocked by the Beijing
government; Chinese users could thus access these pages through Google. A
good thing, some will say. But, on the one hand, this huge memory could be
used for precisely the opposite purposes and, on the other hand, the Chinese
government naturally prohibited access to Google from Chinese servers.

[7] See http://www.oreillynet.com/ for a complete and updated overview of
P2P projects.




#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: majordomo@bbs.thing.net and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net