Kevin Hamilton on Fri, 6 Mar 2009 17:09:20 -0500 (EST)



Re: <nettime> Cybernetics and the Internet


Responding here to this thread but also to Brian's most recent post/ 
essay online. I admit to skimming some of Brian's writing on this of  
late, and look forward very much to the project in book form. So  
forgive me if I'm repeating some of what he has already said.

The distinction between first-order and second-order cybernetics, and
therefore possibly between a better and a worse application of the
cybernetic lens, invites further discussion about what that lens lends
(and to what ends?).

First-order and second-order cybernetics both tend toward the adoption
of biological metaphors - this is especially true for Von Foerster's
crew, here at the University of Illinois, where they worked under the
banner of the Biological Computer Laboratory.

Cyberneticists look at everything as composed of systems - biological  
language seems to enter in as systems approach a certain level of  
complexity. If a particular set of systems interact at a level of  
complexity that seems too big for one human to grasp, cyberneticists  
liken that set of systems to something that's alive: a plant, a forest,
or a sentient creature.

Certainly there are real limits to human perception and cognition at  
work here, even if at the basic level of interface the designers and  
engineers work hard to make sure we're not so overwhelmed that we walk  
away on first glance.

But I think it's worth asking what is gained, or who gains, from  
describing a particular system as too big to grasp without imagining  
it as a living thing.

Thinking of the pre-Modern era to which Brian alludes in his essay, I  
can imagine different routes to the same end of imagining there to be  
a supra-human or extra-human hand behind the mysteries of the  
universe. For some, the attribution of agency and divine order to the  
conditions of life conveniently supported a grasp for power. For  
others, it was a desperate effort at making sense of unlivable  
situations. For still others, it was a humble hope for a better world  
than they had inherited. The same could be said of the cybernetic view  
today.

Let's look at two ends of the cybernetic era. The original cybernetic
solution was Wiener's take on computing ballistics trajectories.
Looking to the complex problem of accounting for distance, weather
conditions, and moving targets in the task of landing a shell in the
enemy's lap, Wiener applied calculus to describe a dynamic grid of
numbers. Before, rooms of "human calculators" (who in America were
mostly those classed as women, disabled, or Black) worked through finite
tables containing every conceivable combination of variables to
produce, months later, a handy reference chart for the army gunner.
Wiener looked at these complex problems not as a collection of
individual labor efforts that add up to a total solution, but as a
system of possibilities that had a life of its own, a "self-organized"
pattern that could be described and computed without actually working
through every algorithmic result.
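
The contrast can be sketched in a few lines of code. This is purely my
own illustration (in Python, which of course didn't exist then), not
Wiener's actual mathematics: instead of looking up a precomputed firing
table, the aiming system continually re-estimates the target's motion
from recent observations and extrapolates where it will be when the
shell arrives - observation feeding back into aim.

```python
def predict_aim(track, shell_flight_time):
    """Estimate the target's velocity from the last two (time, position)
    observations and extrapolate its position at shell arrival."""
    (t0, x0), (t1, x1) = track[-2], track[-1]
    velocity = (x1 - x0) / (t1 - t0)          # crude finite-difference estimate
    return x1 + velocity * shell_flight_time  # aim ahead of the target

# A target moving at a steady 30 units/sec, observed once per second:
track = [(t, 30.0 * t) for t in range(5)]
aim_point = predict_aim(track, shell_flight_time=2.0)  # -> 180.0
```

No table of every conceivable combination of variables is needed; the
same short estimate-and-extrapolate loop covers them all, recomputed as
the target moves.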

By describing his original problem in cybernetic terms, Wiener cut out  
some labor, and thus cost. He cut out some time, and pushed the  
problem of aiming artillery closer into the battlefield, out of the  
homefront. The gap would begin to close between the detonator and the  
aiming mechanism, until they eventually fused as one device of looking  
and destroying.

Wiener also framed the problem in just the right way for an emerging
technology and research infrastructure, as early computing was only  
too happy to borrow from these approaches in the organization of  
efficient hardware, and eventually software. (Object-oriented  
programming perhaps owes something to Wiener, no?)

So the problem itself did not require an ecological, cybernetic
metaphor to solve it - it was already solvable. But by framing it as
something more complex than the sum of human efforts, Wiener framed it
more efficiently, and within terms governmental/educational/corporate
institutions could easily fold into their Taylorist managerial
slipstream.

If that's an example of first-order cybernetics, let's turn to an  
example of second-order cybernetics. (For some the distinction is  
still controversial, and perhaps not really apt.)

Second-order cybernetics, or a "Cybernetics of Cybernetics," is often  
framed, as Brian describes, in terms of consciousness and  
subjectivity. The idea here is that one can't even begin to look at
the world as composed of systems without acknowledging how human
thought, consciousness, and communication are themselves composed of
systems. Wiener's cybernetic description of the artillery aiming
problem is itself the product of a living, breathing, self-organizing
system. It's here that cybernetics gets all reflexive and modernist -
the analysis and management of feedback loops is itself a feedback loop.

In the examples of Varela, Von Foerster, and the rest of the Urbana  
crew, this went many different ways. For some, this second-order  
notion sent them into exploration of human consciousness as both a  
product of, and agent within, the educational and governmental  
institutions responsible for things like the Vietnam War. As has been
well documented in Das Netz and other places, the mind became ground
zero for political action - either as a Cold War battleground for the
CIA, or as a launch point for liberation in college classrooms.
Brian's written about this as well, how new attention to human  
subjectivity outlined both promise and peril.

But the "cybernetics of cybernetics" was equally applied to a less  
individualistic sphere - that of management and administration. Macy  
Conference alum Margaret Mead framed it this way in her address to the  
inaugural meeting of the American Society of Cybernetics in 1968. She  
describes two fateful scenarios of first-order cybernetics, before  
calling for a second-order cybernetics to save us from them. First she  
points to the spectre of the Soviet Union's near-accomplishment of a  
fully cybernetic society, "as a way of controlling everything within  
its borders and possibly outside, with thousands of giant computers  
linked together in a system of prodigious and unheard-of efficiency."
Then she points to how often in America, cybernetic solutions are  
reached but those in charge are never smart enough to apply them,  
resulting in failed infrastructure.

Her suggested solution is that a meta-cybernetics might be as
efficient as the Soviet system, but would govern the very domain of
international politics and economics as a self-organizing system,
making for "smarter" leaders at the national and international levels. The
appeal here is to a kind of consciousness or subjectivity at a massive  
scale. It's not the ontologically real A.I. of sci-fi, Orson Scott  
Card's Jane, for example. It's merely a way of describing a problem as  
so complex that it looks from the human perspective to be as alive and  
sentient as any human brain. And in second-order cybernetics, it's a
way of treating even the analysis and management of systems as itself
a solvable systemic problem.

Now back to my original question - what is gained, and by whom, in  
framing a problem this way?

We need a second-order example to compare to Wiener's artillery  
computing, so let's move forward to the present day, 35 years after
Von Foerster's lab folded, after the US military lost interest in  
funding "blue-sky" research with no battle-ready results.

At a recent NSF-sponsored conference on "Creativity and Cognition," I  
heard a brief riff of a spiel from William Wulf, then President of the  
National Academy of Engineering, now a Computer Science professor at
the University of Virginia, and formerly an active edu-tech player in
Richard
Florida's model "creative city" of Pittsburgh. The gist of his talk,  
intended to be inspirational I think, was that the most urgent  
problems we face as a planet are so complex that engineering and  
science can no longer solve them. It will take "creativity" to solve  
these problems - meaning, I interpret, that not even an army of desk- 
workers or a state-full of computers can follow every algorithmic  
process to its end. In reframing the problem as one that requires  
"creativity," Wulf appealed to the power of perspective, subjectivity  
and intuition. He conjured a picture of a global network of  
interrelated concerns so statistically large that management cannot  
handle them efficiently, if at all.

Elsewhere Wulf describes this picture as an "Ecology of Innovation,"  
sounding every bit as cybernetically-minded as Mead. (He stops short  
of speaking in literal cybernetic terms, but then he would have to,  
given how cybernetics lost credibility in the sciences after the
'70s.) Wulf, an outspoken critic of current patent and intellectual
property law, anticipates a "coming age of mass customization," in  
which low-wage labor is replaced by a "knowledge-intensive kind of  
manufacturing." His criticisms of patent law, antitrust legislation,  
and even drug-testing protocols are not that they are wrongly-based.  
Rather, they don't change fast enough. For Wulf, we don't need  
different laws, we need a different system within which those laws are  
developed, changed, and managed. Citing Thomas L. Friedman, Wulf draws a
boundary around a new domain of concerns that need to be better linked  
into one dynamic, changing system. This domain includes "intellectual  
property law, tax codes, patent procedures, export controls,  
immigration regulations."

The picture conjured here by this much-lauded leader is one strikingly  
reminiscent of Mead's "cybernetics of cybernetics," or even of  
Wiener's artillery firing data. For Wulf, science's old tools no  
longer suffice. It's not enough to break down the problems into small  
chunks and solve them - in part because in his picture of the world  
the chunks can't be separated, but also because if we solve them  
today, the same solutions won't work next week. Space is less  
differentiated or striated in this picture, time less linear and  
continuous. The target keeps moving, so we need to stop computing the  
distances and just assume that the target's life can be anticipated,  
merged with that of the gunner.

Few can argue that Wulf's picture isn't based on real conditions, real  
limits to human cognition and intervention. But what is gained by  
describing this world as an ecosystem, an entity with a life of its  
own? My fears here are multiple, and probably obvious.

When the larger system is described in terms of subjectivity and  
agency, what becomes of individual agency in either daily life or in  
efforts to impact and change the system?

When the boundaries of a problem are framed so as to supersede human
perception, does it still take humans to solve it?

When the problem is described as so complex as to require an
interdisciplinary network of experts to address it, to what is that  
new extra-disciplinary body accountable?

There's a new form of citizenship implied and even explicit in Wulf's  
approach - just yesterday on this University campus, he delivered a  
talk on "Responsible Citizenship in the Technological Democracy."  
Citizens bear the brunt of keeping up with the policy issues at stake  
in the Innovation Ecology, and also must be prepared for precarious  
knowledge-based labor. Meanwhile, their living and working  
environments respond and morph dynamically in relation to needs,  
without stopping for a second to afford a close view or a close vote.

Second-order cybernetics seems to be alive and well, even if its
current adherents are ignorant of its history. The questions that
remain for me around cybernetics are around these issues of rhetoric,  
where cybernetic language and representation assumes a particular end  
for the world, a particular basis for action.

Kevin Hamilton

On Mar 5, 2009, at 1:52 PM, Brian Holmes wrote:

> Florian Cramer wrote:
>
>> One could even go farther back in history and say that the link  between
>> chaos and complexity theories, communication networks and  counterculture
>> created in Thomas Pynchon's 1966 novel "The Crying of Lot 49" already mapped
>> out the whole field and discourse.
>
> Yeah, I love that book. You're right, it does more or less map out
> the counter-cultural desire for cybernetics. The more rigorous
 <...>


#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mail.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org