Felix Stalder on Fri, 9 Sep 2016 13:05:48 +0200 (CEST)


Re: <nettime> living under algorithmic governance

Hi Florian,

> This story has one flaw: Facebook's censors aren't algorithms but
> human low wage laborers. The issue isn't principally different from
> that of a press distributor/wholesaler deciding not to put an issue
> of a newspaper on the newsstands because it contains full-frontal
> nudity.
> This is a good example of how supposed "algorithmic governance"
> can be used as a smokescreen for old-fashioned human intervention,
> likely as a trick for avoiding liability.

I'm not sure the analogy works, precisely for the reasons I mentioned
(arbitrary effects of opaque, scaled-up rule systems inflexibly
applied, and the extreme distance between those governing and those
governed).

But in terms of the involvement of humans, you are most likely
right, though I'm not sure that makes much of a difference. When I
call something "algorithmic", I don't mean that "no humans are
involved anywhere" or that it is "just math". That used to be
Google's argument, precisely the smokescreen you refer to. But not
even Google uses this argument anymore.

I think there's an understanding emerging that algorithmic governance
is based on complex cyber-physical infrastructures and is exercised
by social institutions that employ it to serve whatever goals they
have.

The fact that this type of formerly complex and high-skilled
decision-making is outsourced to low-wage, low-skill labor -- likely
located in peripheral countries in the right timezone -- already
indicates that decision-making has been so completely formalized and
standardized that it's only a matter of time until it can be
automated. It has been restructured to involve as little personal
judgment, and as little of the social responsibility that comes with
such judgment, as possible.

I think we see this quite often now: machines impersonating people,
and people made to impersonate machines. Franco and Eva Mattes
(0100101110101101.org) did a piece about exactly this two-way
impersonation in the context of content moderation, called Dark
Content.

-> http://0100101110101101.org/dark-content/

This is made possible by a) making machines learn and b) de-skilling
humans to the point where all they are allowed to do is follow
scripts, i.e. behave like algorithms.

all the best. Felix


 ||||||||||||||||||||||||||||||||| http://felix.openflows.com
 |OPEN PGP: 056C E7D3 9B25 CAE1 336D 6D2F 0BBB 5B95 0C9F F2AC

#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: