Patrice Riemens on Tue, 18 Mar 2014 13:36:43 +0100 (CET)


<nettime> Ippolita Collective, In the Facebook Aquarium Part One, section #8,

(continued from section 8, 1)

Understanding how Web 2.0 firms are evaluated in terms of worth and
earnings is no easy task - to say the least. But we could possibly use
some simple arithmetic to shed light on the issue. Let us assume that
Facebook's value in January 2011 was indeed $50bn. At that time Facebook
claimed 500 million users. $50bn divided by 500 million equals $100; in
other words, each and every Facebook account holder is worth a hundred $1 notes.
Were I one of the (über)rich investors on Goldman Sachs' client list
who had bet, let's say, $50m (thus becoming a 0.1% owner of Facebook), I
would just pay some sucker - for a song of course - to create an account.
Or rather 1000 accounts - with a lot of links and entries (easy to do with
customized software doing it automatically). Thus, at the rate of $100 for
each account created, I make $100,000. I spend $50 on each account for 'the
work', and get $100 in return. In case there is any rich person among our
readers, let her or him please make her/himself known to us, since we know
how to 'generate' hundreds of Facebook accounts and would gladly accept
some of all that money being created out of thin air! This is actually the
message of so-called 'abundance capitalism': everybody is going to get rich
without doing anything, since the machines will do all the work for us.
But for the time being the machines are mostly placing bets on the stock
exchange, using sophisticated algorithms, all this within an increasingly
competitive and aggressive cultural environment, while inflicting ever
higher workloads on humans. And no consideration whatsoever is paid
by greedy economic operators to the disastrous consequences this has on
individuals' lives. It has been proved over and again that the cult of
chance, emblematic of the stock exchange, fosters a positive
assessment of risk-taking and hence encourages irresponsible or even
outright criminal behaviour.
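The back-of-the-envelope calculation above can be checked in a few lines of Python. All figures come from the text itself (the $50bn valuation, 500 million users, 1000 fake accounts at $50 apiece); this is the text's hypothetical, not real accounting.

```python
# Back-of-the-envelope check of the valuation arithmetic in the text.
# All figures are the text's own hypothetical, not real accounting.

valuation = 50_000_000_000      # Facebook's assumed value, January 2011 ($)
users = 500_000_000             # claimed user base at the time

value_per_user = valuation // users
print(value_per_user)           # 100: each account "worth" $100 on paper

fake_accounts = 1_000           # the hypothetical sock-puppet batch
cost_per_account = 50           # paid for 'the work' per account ($)

paper_value = fake_accounts * value_per_user    # implied value created
outlay = fake_accounts * cost_per_account       # money actually spent
print(paper_value, outlay, paper_value - outlay)  # 100000 50000 50000
```

The point of the exercise, of course, is that the $100,000 exists only on paper: it is valuation by multiplication, not earnings.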

Free Choice and the /Opt-out/ Culture

Social network gurus have a lot in common with financial traders. They are
young, 'hungry', without scruples, white and male ... and with
relationship blues. We will come to talk - at length - about /nerd
supremacy/ later on. For the time being, let's simply state that going by
Mark Zuckerberg's positions with regard to social practices and believing
he has got hold of the magic recipe is tantamount to entrusting one's
dentition to a dentist with rotten teeth. Even if he is a great
practitioner, the least that can be said is that he doesn't care very much
about his own appearance. Let us not forget that the Good Shepherd here is
more interested in the data we are supplying than in our well-being. And
in the end, it could very well be that this radical transparency idea is
the mechanised solution that has been devised in order to remedy the
inability to manage personal relationships through reasoned choices.

Speaking of 'free choice', there is a corollary to the power 'by default'
which is worth noticing: the culture of /'opt-out'/. To modify the
settings of millions of users without notifying them (of the change),
giving them only scant and obscure information about it, and this always
after the fact, is the same as to state, by implication, that users
themselves have no clue about what they really want, or at least, that
their service provider knows better than they do themselves. Digital
social networks accumulate humongous amounts of users' data and know how
to monetize these with increasing efficacy thanks to feedback mechanisms
('votes', 'likes', forwarding to a friend, flagging fraudulent messages,
etc.). All this because they know the real identity of their users and
have a more encompassing view of them than users could possibly have of
themselves.
Seen from the providers' side, it is logical to think that any change
will benefit users, since the data proves it unequivocally. And this
being so, users can always decide to remain outside, to forgo the
innovation, to /opt out/. The equation new = better is easy to grasp,
hence innovation imposes itself all by itself. Yet this issue is a very
uncomfortable one, since, technically speaking, it is increasingly
difficult to enable millions of users to choose easily what should be
shared, and how to share it, by explicitly asking for their consent, and
hence permitting them to express a wish, desire, preference, or
(outright) will; that is, to operate within an /opt-in/ logic (meaning to
choose to enter, to adhere to the new functionality). Also, as we see
in the 'Google culture', celebrating the cult of innovation, of permanent
research and development, means that anything new is usually released
in beta version, and hence not yet fully tested. Users are expected to
submit usable feedback so that true usability can be achieved. Imposing a change
that turns crappy then becomes a manageable risk, since it can always be
redressed if too many users start complaining.
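The difference between the two consent logics can be made concrete in a few lines of code. This is a minimal illustrative sketch: the names (`Account`, `roll_out`) are invented for the example and correspond to no real platform API.

```python
# A minimal sketch of the two consent logics discussed above.
# The names (Account, roll_out) are illustrative, not a real API.

class Account:
    def __init__(self, name):
        self.name = name
        self.features = {}      # feature name -> enabled?

def roll_out(accounts, feature, opt_out):
    """Opt-out logic: the feature is switched on for everyone by default,
    and each user must later discover and disable it.
    Opt-in logic: nothing is enabled until the user explicitly consents."""
    for account in accounts:
        account.features[feature] = opt_out

users = [Account("alice"), Account("bob")]
roll_out(users, "face_recognition", opt_out=True)
print(users[0].features)   # {'face_recognition': True}: on without consent
```

Under opt-out logic the provider's single default decision binds millions of accounts at once; under opt-in logic each activation would require an individual, explicit choice, which is exactly the interaction the text says providers find too costly to implement.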

Let's give a concrete example here. From December 2010 onwards, Facebook
started providing users with a face recognition functionality which
automatically tagged posted pictures. Pictures were scanned and faces
identified by matching them against earlier pictures stored and tagged in
Zuckerberg's massive databases. When this software was introduced in the
United States, it caused a tsunami of complaints due to the menace it
represented to privacy. Whereupon Zuckerberg retorted that users could
always deactivate that functionality, by simply modifying their settings
and /opting out/ of the picture tagging function. But of course, when the new
technology was internationally released, Facebook didn't bother to tell
its clients (whether individual users or commercial partners) that the
face recognition software had been activated by default on the social
network. Facebook is not alone in this: Google, Microsoft, Apple, and the
United States Government have all been busy for a long time developing
automated facial recognition systems, 'for the good of users', and to
'protect them against dangerous terrorists'. But this technology also
harbors a terrifyingly destructive power: in the worst case scenario, an
authoritarian regime can semi-automatically 'tag' dissidents' faces
captured in the streets by CCTV, establish a reticular system of
surveillance, and then pounce at the moment of its choosing. And in our democratic
societies, the technology is simply accessible to any ill-intentioned (but
tech savvy) person. /Opt-out/ logic (actually) follows the hallowed rule
of developers: RERO, or /release early, release often/. The aim is to
release a new version of the software as often as possible, whereupon the
bugs, made shallow by the many eyes that observe and improve the programs,
are flushed out in successive versions. Yet social relations are not
quantifiable in logical cycles. And the evaluation mistakes made when a
new technology is released can cause truly ghastly collateral damage.

(to be continued)

next time: anti-social 'webization'.


(no notes in this part!)

Translated by Patrice Riemens
This translation project is supported and facilitated by:
The Institute of Network Cultures, Amsterdam University of Applied Sciences
The Antenna Foundation, Nijmegen
( - Dutch site)
( - english site under construction)
Casa Nostra, Vogogna-Ossola, Italy

#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info:
#  archive: contact: