www.nettime.org
Nettime mailing list archives

Patrice Riemens on Fri, 13 Mar 2009 13:14:33 -0400 (EDT)



<nettime> Ippolita Collective: The Dark Side of Google (Chapter 2, first part)


Dear Nettimers,

There will be a short interruption in the translation flow, as I am taking 
the train to Chennai/Madras and Madurai this afternoon, back on Tuesday 
morning. So long!

Cheerio,
patrizio & Diiiinooos!
(going to visit http://www.antennaindia.org)

-----------------------------------------------------------------------------

NB this book and translation are published under a Creative Commons 2.0 
license (Attribution, Non-Commercial, Share-Alike).
Commercial distribution requires the authorisation of the copyright 
holders:
Ippolita Collective and Feltrinelli Editore, Milano (.it) 


Ippolita Collective

The Dark Side of Google (continued)


Chapter 2  BeGoogle!

Google's Brain-drain or the War for Control of the Web

"I'll #%$&*^! this Google &^%$# {AT} ! Eric Schmidt is a $%&*^#! and I'll bury 
him alive, like I did with other %#*( {AT} &^! like him!". Thus foulmouthed 
Microsoft's CEO Steve Balmer when he learned in May 2005 that Google had 
just headhunted Kai-Fu Lee, a high ranking employee of his, and key-man of 
'Redmond' for China. Kai-Fu was the one who had developed MSN Search 
(engine) for the 100 million Chinese Microsoft users. Balmer's expletives 
were of course targeted at his opposite number at Google, a former Sun 
Microsystems and Novell honcho, firms also that Microsystem had battled 
with before, both on the market and in court. Kai-Fu Lee was boss of the 
MS research lab near Shanghai.

Microsoft immediately started a court case against its former employee and 
against Google itself, accusing Kai-Fu Lee of violating the extremely 
confidential contractual agreements existing between the Redmond and 
Mountain View rivals. Microsoft's lawyers argued that Kai-Fu Lee, as an 
executive director, must have been privy to MS industrial and trade 
secrets, and would not hesitate to put these technologies, and the social 
network and economic know-how he had accrued at MS, to use to bolster the 
competitor's profits. This contentious personage did not come cheaply, by 
the way: his entry 'salary' amounted to US$2.5 million, with 20,000 Google 
shares as a side perk. Exorbitant figures, which give some idea of the 
stakes involved - and not only the Chinese market was at stake.

The lawsuit between the two giants was finally settled out of court in 
December 2005, with just one month left before the case was due to come 
up. The particulars of the deal are completely confidential. Maybe large 
amounts of money changed hands, or maybe Microsoft managed to force 
Kai-Fu Lee to keep mum about anything he knew from his previous 
employment.

This story is merely one of the most curious and emblematic illustrations 
of a trend that had been noticeable for a few years: Kai-Fu Lee was 
in fact only the umpteenth senior employee to have switched to Google, 
"the firm that looks more and more like Microsoft", as Bill Gates had 
loudly complained. Gates himself was left in a cleft stick, facing the 
nasty choice of either demonising the two student prodigies - thereby 
reinforcing their image as his 'kind and generous' opponents in the world 
of IT - or pretending that they did not really matter and were not worthy 
of much attention as competitors.

The truth is that Bill Gates knew all too well how much the switch-over of 
managers means for a firm's core business, especially in the IT sector: 
Microsoft had often enough used the same trick against its own 
competitors. The commercial tactic of headhunting key personnel of rival 
firms, in order to tap their industrial secrets and their production and 
resource-management know-how, has always been part and parcel of 
industrial competition. But in the era of the information economy, the 
practice has become markedly more prevalent, and more diffuse.

So this management choice of Brin's and Page's clearly indicates what 
Google's ultimate aims are: to become the Web's most comprehensive and 
customisable platform, adapting all its services to the singular needs of 
each of its users, while maintaining an immense reservoir of information. 
To put it simply, Google is pushing full speed ahead to catalogue every 
type of digital information, from websites to discussion groups, picture 
galleries, e-mails, blogs, and whatever else you can think of, without any 
limit in sight. This amounts to open war with Microsoft, whose Internet 
Explorer browser, MSN portal, and Hotmail e-mail service make it, after 
all and for the time being, Google's principal foe [competitor].

The overlap between the domains of interest of the two firms is growing by 
the day: both aspire to be the one and only medium of access to any 
digital activity. Microsoft has achieved predominance by imposing its 
Windows operating system, its Office software suite {and its Explorer 
browser} as the de facto computing standard, both at work and at home. On 
its side, Google has been positioning itself as the global number-one 
mediator of web services, especially with regard to search, its core 
business, offered in all possible formats, but also with ancillary 
services such as e-mail ('GMail'). At the risk of oversimplifying, one 
could say that Microsoft has for years held a dominant position thanks to 
products that support services, whereas Google is now seeking dominance 
through services running on products.

The outcome of this competition therefore depends on users' choices and on 
the future standards Google wants to impose. Developing web programmes 
intended to funnel requests for services exclusively through the browser 
amounts to denying a market to those who have always invested heavily in 
products and in creating new operating-software architectures. [French 
text unclear here, I guess the gist is: Google's going to literally 
vaporize all 'static' M$ products by going full tilt for the 'Internet in 
the clouds' paradigm, cf. next sentence -TR]. The same holds true for 
/markets in/ the economy at large: there is a shift from a wholesale, 
mass-market approach (Microsoft), which tries to sell licenses of one and 
the same product or service, to a completely customised one, where 
products can be downloaded from the web.


Long tails in the Net. Google vs. Microsoft in the economy of search

Google's second line of argument is based on the key point John Battelle 
has made in his numerous writings [*N2]: the ascent of the 'economy of 
search'. In his essay "The Second Search", Battelle, a journalist and one 
of the founders of WIRED magazine, argues that the future of online 
commerce lies with personalised searches, paid by the users [themselves?]. 
Google, which sits on top of the largest data-bank of users' 'search 
intentions', finds itself in the most advantageous position to make this 
possible, thanks to its very finely ramified network, made up on one side 
of a famously efficient advertising platform (AdWords) and on the other of 
a network of advertisement-carrying websites (AdSense), now numbering 
several million. Google's wager is that it will be able to satisfy any 
wish users express through their search queries, by providing new services 
geared towards 'consumerism at the individual level'. Each and every 
user/customer will hit upon exactly what she/he wants: the product 
precisely geared to her/his needs. The best-known of these 
'mass-personalised' online services is the one offered by Amazon.com, 
which is well on its way to making far more money from selling books or 
CDs one at a time to individual customers than from piling up hundreds or 
even thousands of copies of a best-seller. The numerous customers buying 
not particularly well-selling books online constitute a myriad of 'events' 
that occur infrequently, and sometimes even only once. Being able 
nevertheless to satisfy such 'personalised searches' is the secret of 
Amazon.com's distribution power. It would be impossible for a traditional 
bookseller, whose operational model is based on shops, stocks, and limited 
orders, to match Amazon.com's ease of delivering millions of titles at 
once: most of its revenues must come from novelties and best-sellers. 
Selling one copy of a book to a single customer is not profitable for a 
traditional bookshop, but it is for Amazon.com, which capitalises on the 
'economy of search' of the 'online marketplace'.

This type of market is called a 'long tail' in {new} economic parlance 
[*N3]. The theory of 'long tails' goes back at least to the 'Pareto 
distribution' [*N4], in which "a few events have a high occurrence, 
whereas many have a low one". Statistically, such a distribution is 
represented by a hyperbola {graph}, where the 'long tail' is made up of a 
myriad of events that are pretty much insignificant in themselves, but 
which, taken together, represent a considerable sum. Mathematically 
speaking, a 'long tail' distribution follows the pattern of what is called 
a "power law" [*N5].

The winning strategy in a long-tail market is hence not to lower prices on 
the most popular products, but to widen the range on offer: selling only a 
few items of any one product at a time, but out of a very large range of 
different products.

Commercially speaking, it turns out that the highest sales volumes occur 
in the realm of small transactions: the largest part of sales on the net 
is a long-tail phenomenon. Google makes its turnover by selling cheap text 
advertisements to millions of users, not by selling a lot of advertising 
space in one go to a few big firms for a hefty fee.

Battelle is interested in the application of search to as-yet-unexplored 
markets. In the case of Google, the enormous amount of data available for 
searching is what has made the milking of the 'long tail' possible. In the 
domain of e-commerce, long tails have three consequences: first, thanks to 
the Internet, it becomes possible for not-so-frequently-requested products 
to collectively represent a larger market than the one commanded by the 
small number of articles that do enjoy large sales; second, the Internet 
favours the proliferation of sellers - and of markets (as illustrated by 
the auction site eBay); and third, thanks to search, the shift from the 
{traditional} mass market to niche markets becomes a realistic scenario.

This last tendency has its origin in the spontaneous emergence of groups 
{of like-minded people}, something that occurs on a large scale in 
networks. On the Internet, even the groups largest in number are not 
necessarily made up of homogeneous masses of individuals, but rather of 
colourful communities of users banding together around a shared passion, 
or a common interest or goal. The opposition between niche and mass is 
therefore not very relevant to identifying the market segment being aimed 
at. From a commercial point of view, this leads to the creation of 
e-commerce sites for products attractive only to a very specific type of 
potential customer, who would never have constituted a profitable market 
outside online distribution. Take for instance typically geeky tee-shirts, 
watches telling 'binary' time, flashy computer cases, or other must-have 
items targeted at the techie crowd. The breadth of the supply makes up for 
the narrowness of the demand, which is spread over a very extensive range 
of highly personalised products. An interesting article by Charles H. 
Ferguson [*N6] points out that in such a scenario, it is most likely that 
Google and Microsoft will confront each other for real over the control 
[monopoly?] of indexing, searching, and data-mining, across the full 
spectrum of digital services and devices. Microsoft is now investing 
massively in web services: in November 2004 it launched a beta version of 
a search engine that would answer queries made in everyday language, 
returning answers personalised according to the geographical location of 
the user; in February 2005, this MSN Search engine was improved further 
[*N7]. With MSN Search, it becomes possible to consult Encarta, 
Microsoft's multimedia encyclopedia - though for the time being, browsing 
is limited to two hours, with a little clock window telling you how much 
time remains... Microsoft has thus decided to develop its own web search 
system on PCs, without resorting to Google, despite the fact that the 
latter has for years been #1 in the search business (with Yahoo! as its 
sole serious competitor).

Taken as a whole, it would appear that the markets linked to the economy 
of search are much larger than the existing markets for search services as 
such. Microsoft is undoubtedly lagging behind in this area, but the firm 
from Redmond might well unleash {its trademark} savage strategies, which 
would be difficult for Google to counter. It could, for instance, take a 
loss on its investments, integrate its search engine into its Explorer 
browser and offer the package for free, or start a price war on 
advertisements and so starve its competitor of liquidity. And in the 
meanwhile, the new Windows Vista operating system developed in Redmond is 
supposed to offer innovative search options [looks like fat chance...;-) 
-TR]. Note also that Microsoft, in its time, lagged far behind Netscape 
(the first freely downloadable web browser), and yet Explorer managed to 
displace {and dispatch} it - and not really because it was so much better! 
But if Microsoft has long experience of the market and very deep pockets, 
Google does not have a bad hand either. It is the very incarnation of the 
young, emergent enterprise; it has built a reputation as a firm committed 
to research and technical excellence; it preaches the gospel of speed in 
satisfying users' searches, and does so with nifty, sober interfaces; in a 
word, it imposes itself by simply being technically the best search engine 
around. In the battle for control of the Web, Google appears to have a 
slight advantage. However, one should not forget that Microsoft's range of 
activity is without parallel, since it covers not only the Web but the 
whole gamut of information technologies, from tools like the Windows 
operating system or the MS Office suite, to content like Encarta, and 
high-end development platforms like .NET, etc. Given the stakes - 
basically, access to any kind of digital information, and the profits 
deriving from it - peaceful cohabitation {between the two giants} looks 
unlikely. Google is still in the race for now - but for how long? [MR/FCG: 
"but (G) won't be able to stand up very long"]


The War of Standards

Let's follow up on Ferguson's argument: the story starting now is a war of 
standards. Three actors are in the game, for now: Google, Yahoo!, and 
Microsoft. The search industry, as Battelle has also pointed out, is 
growing at a fast pace. Emerging technologies, or those currently 
consolidating - think broadband audio and video streaming, VoIP telephony 
(e.g. Skype, or Google's GTalk), or instant messaging - are all generating 
Himalayas of data still waiting for proper indexation, and for proper 
'usabilitization' across the full spectrum of new electronic devices - 
palmtops, GSM phones, audio-video devices, satellite navigators, etc. - 
all interlinked for the satisfaction of users, but all milked in the end 
as vehicles for intrusive advertising. For these tools to be compatible 
with all kinds of different systems, and with each other, new standards 
will be necessary, and their introduction /in the market/ is unlikely to 
be a painless process.

What causes a war of standards is the fact that technology markets demand 
common languages in order to organise an ever-increasing complexity. The 
value of information lies in its distribution; but it is easier to spread 
around real tokens [? analog stuff?] than audio or, worse still, video 
documents: the heavier the data-load, the more powerful 'pipes' it 
requires, and the more demands it puts on the way the information is 
managed [French text somewhat unclear here]. Traditionally, legal 
ownership of a crucial technology standard has always been a source of 
very comfortable revenues [*N8]. It has indeed happened that the adoption 
of an open, non-proprietary standard - such as the HTTP protocol - created 
a situation beneficial to all parties. But often the dominant solutions 
are not qualitatively the best, as "it is often more important to be able 
to rely on a winning marketing strategy".

That said, a number of trends can be discerned among the winners. They 
usually sell platforms that work everywhere, irrespective of the hardware 
- like Microsoft's operating systems - as opposed to the closely 
integrated hardware-and-software solutions offered by Apple or Sun 
Microsystems. Winning architectures are proprietary and difficult to 
duplicate, yet at the same time very 'open': that is, they offer publicly 
accessible interfaces, so that they can be developed further by 
independent programmers, and in the end by the users themselves. In doing 
so, the architecture in question is able both to penetrate all markets and 
to create a situation of attraction and 'lock-in'. To put it differently, 
it pulls users towards a specific architecture, and once in, it becomes 
next to impossible to switch to a competing system without incurring great 
difficulties and huge expenses [*N9]. The aim: impose a closed standard, 
and obtain a monopoly.
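The pattern just described - a closed core behind a published extension interface - can be sketched in Python (all names here are hypothetical, invented purely for illustration):

```python
# Sketch of the "proprietary core, public interface" pattern described
# above. The core logic stays closed; third parties extend the platform
# only through the published hook. This is what produces lock-in:
# extensions are written against this interface, not a rival's.

from typing import Callable, Dict

_extensions: Dict[str, Callable[[str], str]] = {}

def register_extension(name: str, handler: Callable[[str], str]) -> None:
    """Published API: independent developers plug in here."""
    _extensions[name] = handler

def run(name: str, data: str) -> str:
    """Published API: dispatch a request through a registered extension."""
    result = _core_process(data)      # the closed, proprietary step
    return _extensions[name](result)  # the open, third-party step

def _core_process(data: str) -> str:
    # Stand-in for the proprietary internals: undocumented, hard to duplicate.
    return data.upper()

# An independent developer's extension, written against the public API:
register_extension("exclaim", lambda s: s + "!")
print(run("exclaim", "hello"))   # -> HELLO!
```

The extension works only on this platform: switching to a competitor means rewriting it against a different interface, which is exactly the switching cost the text calls 'lock-in'.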

A very clear illustration of the battle for hegemony in the domain of 
standards is the challenge Skype and GTalk are throwing at each other. For 
the time being, Skype enjoys a position of near-monopoly on VoIP for 
domestic use. Yet it may have underestimated the time development 
communities need to assimilate [embrace?] open technologies. Until not so 
long ago, Skype was the only solution that really worked for anyone 
wanting to place phone calls over the Internet - even if {the person was} 
technically clueless. Skype's proprietary technologies, however, could 
well be overtaken by GTalk, which is entirely based on F/OSS (and 
especially on the 'Jabber' communication protocol), all offered with 
development libraries under copyleft licenses - something that attracts a 
lot of creative energy to the project, as coders vie to increase the punch 
of Google's VoIP network. In that case, the adoption of F/OSS would prove 
to be the winning strategy, eroding Skype's dominance. Of course, Skype 
could then choose to make its own code public in order to tilt the balance 
back in its favour. The choice between adopting proprietary technologies 
and platforms - closing access but keeping the development interfaces 
public - and going for openness is therefore of paramount importance in 
the strategy for control of the Web, and of the economy of search in 
general.

But access to the economy of search is already closed, or 'locked in', as 
economists would say: it is clearly unthinkable that any new entrant, or 
'start-up', could ever compete with Yahoo! or Google in the indexing of 
billions of web pages. Even if such a firm had a better algorithm for its 
spider, the investments needed would be prohibitive. Yet there are a lot 
of side-aspects, especially with regard to the interfaces between various 
search systems, which lend themselves to a bevy of 'mission-critical', yet 
affordable, innovations. This is for instance the case with 'libraries': 
small pieces of software that make it possible to link heterogeneous 
systems together, functioning as 'translators' between systems, languages, 
and search engines. Together with methodologies for integrating the 
various arrangements, and ways to share data and search results, they 
represent areas that could be developed by individual, independent 
researchers rather than by large companies [*N10]. We will go into more 
detail on the issue of interfaces and libraries later.
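What such a 'translator' library might look like can be sketched as follows (the engine names and query syntaxes are invented for illustration; the book does not describe any concrete API):

```python
# Sketch of a small 'library' that translates one search, expressed as
# plain terms, into the native query languages of heterogeneous search
# systems. All engine names and syntaxes here are hypothetical.

from typing import Callable, Dict, List

def to_engine_a(terms: List[str]) -> str:
    # Hypothetical engine A expects terms joined with AND.
    return " AND ".join(terms)

def to_engine_b(terms: List[str]) -> str:
    # Hypothetical engine B expects a quoted, comma-separated list.
    return ",".join(f'"{t}"' for t in terms)

TRANSLATORS: Dict[str, Callable[[List[str]], str]] = {
    "engine_a": to_engine_a,
    "engine_b": to_engine_b,
}

def translate(terms: List[str]) -> Dict[str, str]:
    """Render one search in every engine's native query language."""
    return {name: fn(terms) for name, fn in TRANSLATORS.items()}

queries = translate(["dark", "side", "google"])
print(queries["engine_a"])   # -> dark AND side AND google
print(queries["engine_b"])   # -> "dark","side","google"
```

A tool of this size needs no crawler and no index of its own, which is why the text sees it as terrain for independent developers rather than large companies.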

For now, it is important to note that none of the players in this game is 
in a position of absolute dominance - something we can be thankful for. 
Imagine what the situation would be if a complete monopoly of search, by 
whatever private actor, existed by virtue of its de facto imposition of a 
single standard. Obviously, the first problem to arise would be the issue 
of privacy: who would own the indexed data on which searches take place, 
reaping humongous profits in the process? Moreover, since it is already 
possible to tap into an unbelievable amount of information just by typing 
the name of an individual into the Google search window, and since in the 
near future the quality and quantity of such information will not only be 
greatly increased but further augmented by the possibility of 
cross-searching among heterogeneous data, one can assume that the control 
exercised over individuals will become ever more suffocating and 
totalitarian: it will cross-aggregate confidential data with medical 
records, phone conversations, e-mails, pictures, videos, blogs and opinion 
pieces, and even DNA info. Google would then become the premier access 
point to the digital panopticon [*N11].

So let us now have a look at the weapons being deployed in this very real 
war for control of the networks.

(to be continued)

--------------------------
Translated by Patrice Riemens
This translation project is supported and facilitated by:

The Center for Internet and Society, Bangalore
(http://cis-india.org)
The Tactical Technology Collective, Bangalore Office
(http://www.tacticaltech.org)
Visthar, Dodda Gubbi post, Kothanyur-Bangalore 
(http://www.visthar.org)


#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mail.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime {AT} kein.org