Fabricated Subjects: Reification, Schizophrenia and Artificial Intelligence

Date: Wed, 15 May 1996 15:00-EDT

From: Phoebe.Sengers@GS80.SP.CS.CMU.EDU

Fabricated Subjects:

Reification, Schizophrenia, Artificial Intelligence

Phoebe Sengers

Schizophrenia is the ego-crisis of the cyborg. How could it be any other way? Cyborgs are the fabrications of a science invested in the reproduction of subjects it takes to be real, a science whose first mistake was the belief that cyborg subjects were autonomous agents, that they existed outside any web of pre-existing significations. Pre-structured by all comers, but taken to be pristine, the artificial agent is caught in the quintessential double bind. Fabricated by the techniques of mass production, the autonomous agent shares in the modern malady of schizophrenia. This paper tells the story of that cyborg, of the ways it has come into being, how it has been circumscribed and defined, how this circumscription has led to its schizophrenia, and the ways in which it might one day be cured.

The Birth of the Cyborg: Classical AI

The cyborg was born in the 1950's, the alter ego of the computer. It was launched into a world that had already defined it, a world whose notions of subjectivity and mechanicity not only structured it but provided the very grounds for its existence. It was born from the union of technical possibility with the attitudes, dreams, symbols, concepts, and prejudices of the men who had created it. Viewed by its creators as pure potentiality, it was, from the start, hamstrung by the expectations and understandings which defined its existence.

Those expectations were, and are, almost unachievable. The artificial subject is one of the end points of science, the point at which the knowledge of the subject will be so complete that its reproduction is possible. The twin births of Artificial Intelligence and Cognitive Science represent two sides of the epistemological coin: the move to reduce human existence to a set of algorithms and heuristics and the desire to re-integrate those algorithms into a complete agent. The resulting agent carries all the burden of proof on its back; its ''correctness'' provides the objective foundation for a huge and complicated system of knowledge whose centerpiece is rationality.

Make no mistake, rationality is the central organizing principle of classical AI. The artificial agent is fabricated in a world where 'intelligence,' not 'existence,' is paramount, and 'intelligence' is identified with the problem-solving behavior of the scientist. For classical AI, the goal is to break intelligent behavior down into a set of more-or-less well-defined puzzles, to solve each puzzle in a rational, preferably provably correct, manner, and, one day, to integrate all those puzzle-solvers to create an agent indistinguishable (within a sufficiently limited framework) from a human.

That limited framework had better not exceed reason. Despite initial dreams of agents as emotionally volatile as humans, the baggage of a background in engineering quickly reduced agenthood to rationality. For example, Allen Newell, one of the founders of AI, wrote an influential paper stating that the decision procedure of an agent must necessarily follow the ''principle of rationality'' (Newell). Any agent worthy of the name must have a set of goals it is pursuing, and any action taken must, in its opinion, help to achieve one of its goals. In the narrow constraints of this system, any agent that defies pure rationality is explicitly stated to be completely incomprehensible, and hence scientifically invalid.
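To make the constraint concrete, here is a minimal sketch of the principle of rationality as a decision procedure. It is an illustration only, not Newell's own formalism; the goals, actions, and beliefs are invented for the example.

```python
# A minimal sketch of an agent bound by the "principle of rationality":
# every action must, in the agent's view, further one of its goals.
# (Illustrative only; not Newell's formalism.)

class RationalAgent:
    def __init__(self, goals):
        self.goals = set(goals)  # the goals the agent is pursuing

    def furthers_a_goal(self, action, beliefs):
        # An action is admissible only if, in the agent's opinion
        # (its beliefs), it helps achieve one of its goals.
        return bool(self.goals & set(beliefs.get(action, [])))

    def choose(self, actions, beliefs):
        for action in actions:
            if self.furthers_a_goal(action, beliefs):
                return action
        # An action serving no goal is, on this account, simply
        # incomprehensible: the agent has nothing to say about it.
        return None

agent = RationalAgent(goals=["win_game"])
beliefs = {"move_knight": ["win_game"], "hum_a_tune": []}
print(agent.choose(["hum_a_tune", "move_knight"], beliefs))  # -> "move_knight"
```

Anything outside the goal-belief calculus, like humming a tune, simply falls out of the agent's world.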

Given these expectations, it was all too ironic when the artificial agent began to show signs of schizophrenia. Designing a rational decision procedure to solve a clearly defined puzzle was straightforward; connecting these procedures together to function holistically in novel situations proved to be well-nigh impossible. Bound in the straitjacket of pure rationality, the cyborg began to show signs of disintegration: uttering words it did not understand upon hearing, reasoning about events that didn't affect its actions, suffering complete breakdown on coming across situations that did not fit into its limited system of pre-programmed concepts. Being understood purely on its own terms and not with respect to any environment, the agent lived in a fabricated world of its own making, with only tenuous connections to shared physical and social environments. Autistic? Schizophrenic? In any case, deranged.

The Promise of Alternative AI

It was time for therapy. The shortcomings of the classical agent were becoming more and more obvious: it could play chess like a master, rearrange blocks on command in its dream world, configure computer boards, but it could not see, find its way around a room, or maintain routine behavior in a changing world. It was defined and fabricated in an ideal, Platonic world, and could not function outside the boundaries of neat definitions. Faced with an uncertain, incompletely knowable world, it ground to a halt.

Understanding that the cyborg was caught in a rational, disembodied double bind, some AI researchers abandoned the terrain of classical AI. Alternative AI --- aka Artificial Life, behavior-based AI, situated action --- sought to treat agents by the redefinition of the grounds of their existence. No longer limiting itself to the Cartesian subject, the principle of situated action shattered notions of atomic individualism by redefining an agent in terms of its environment. An agent is, and should be, understood as engaged in interactions with its environment, and its 'intelligence' can only be gauged by understanding the patterns of these interactions. 'Intelligence' is not located in an agent but is the sum total of a pattern of events occurring in the agent and in the world---the agent no longer 'solves problems,' but 'behaves;' the goal is not 'intelligence' per se but 'life.'

Redefining the conditions of existence of the agent breathed new life into the field, if not into the agent itself. Where once there had been puzzle-solvers and theorem-provers as far as the eye could see, there were now herds of walking robots, self-navigating cans-on-wheels and other varieties of charming stupidity. Alternative AI had given the cyborg its body and had lifted some of the constraints on its behavior. No longer required to be rational, or even to use mental representations, the artificial agent found new vistas open to it. It did not, however, escape schizophrenia. With their agents liberated from the constraints of pure rationality, practitioners of alternative AI, unwittingly following the latest rages in postmodernism, embrace schizophrenia as a fact of lived experience. Rather than creating schizophrenia as a side-effect, they explicitly engineer it in: the more autonomous an agent's behaviors are, the fewer traces of Cartesian ego left, the better. May the most fractured win!

At the same time, that schizophrenia becomes a limit-point for alternative AI, just as it has been for classical AI. While acknowledging that schizophrenia is not a fatal flaw, alternativists have become frustrated at the extent to which schizophrenia hampers them from building extensive agents. Alternativists build agents by creating behaviors; the integration of those behaviors into a larger agent has been as much of a stumbling block in alternative AI as the integration of problem-solvers is in classical AI. Alternativists are stuck with the major unsolved question of ''how to combine many (e.g. more than a dozen) behavior generating modules in a way which lets them be productive and cooperative'' (Brooks). Despite their differences in philosophy, neither alternativists nor classicists know how to keep an agent's schizophrenia from becoming overwhelming. What is it about the engineering of subjectivities that has made such divergent approaches run aground on the same problem?

Fabricating Schizophrenias

There can be no doubt that alternative and classical AI have very different stakes in their definitions of artificial subjectivity. These different definitions lead to widely divergent possibilities for the range of constructed subjects. At the same time, these subjects share a mode of breakdown; could it be that these agent-rearing practices, at first blush so utterly opposed and motivated by radically dissimilar politics, really have more in common than one might suspect?

The agents' schizophrenia itself can point the way to a diagnosis of the common problem. Far from being autonomous and pristine objects, artificial agents carry within themselves the fault lines, not only of their physical environment, but also of the scientific and cultural environment that created them. The breakdowns of the agent reflect the weak points of their construction. It is not only the agents themselves that are suffering from schizophrenia, but the very methodology that is used to create them -- a methodology which, at its most basic, both alternative and classical AI share.

In classical AI, the emphasis is on agent as problem-solver and rational goal-seeker, and agents are built using functional decomposition. The agent is presumed to have a variety of modules in its mind corresponding more or less to problem-solving methods. Researchers work to 'solve' each method, creating self-contained modules for vision, speaking and understanding natural language, reasoning, planning out behavior, learning, and so on. They hope that once they've built each module, they can, with not too much effort, glue them back together again and, presto, a complete problem-solving agent appears. This is generally an untested hope, since integration, for classicists, is at once undervalued and nonobvious. Here, schizophrenia appears as an inability to seamlessly integrate the various competences into a complete whole; the various parts have conflicting presumptions and divergent belief systems, turning local rationality into global irrationality.
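A caricature in code may make this failure mode concrete. In the following sketch, all modules and representations are invented (no actual system is this crude): two locally 'correct' modules are glued together, each silently presuming a different ontology.

```python
# A caricature of functional decomposition: self-contained modules for
# vision and planning, glued together after the fact. All modules and
# representations here are invented for illustration.

def vision(scene):
    # The vision module commits to one ontology of the world...
    return {"objects": [{"shape": "cube", "color": "red"}]}

def planner(world):
    # ...while the planner silently presumes another: it expects
    # named "blocks", not anonymous "objects".
    blocks = world.get("blocks", [])
    return ["grasp " + blocks[0]] if blocks else ["halt: nothing to do"]

def agent(scene):
    # The hoped-for glue: pipe one module's output into the next and
    # presume a whole problem-solving agent results.
    return planner(vision(scene))

# Each module is locally rational; together they talk past each other.
print(agent("a red cube on a table"))  # -> ['halt: nothing to do']
```

Each module passes its own tests; the conflicting presumptions only collide at integration time.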

For practitioners of alternative AI, the agent is thought of behaviorally, and the preferred methodology is behavioral decomposition. Instead of dividing the agent into modules corresponding to the various abstract abilities of the agent, the agent is striated along the lines of the behaviors it engages in. An agent might typically be constructed by building modules that each engage in a particular observable behavior: hunting, exploring, sleeping, fighting. Alternativists hope to avoid the form of schizophrenia under which classicists suffer by integrating all the agent's abilities from the start into specific behaviors in which the agent is capable of seamlessly engaging. The problem, again, comes when those behaviors must be combined into a complete agent: the agent knows what to do, but not when to do it or how to juggle its separate-but-equal behaviors. The agent sleeps instead of fighting, or tries to do both at once. Once again the agent is not a seamlessly integrated whole but a jumble of ill-organized parts.
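The corresponding sketch for behavioral decomposition, under the same invented conventions, shows the arbitration problem: each behavior is complete in itself, but nothing in the design says when it should run.

```python
# A matching caricature of behavioral decomposition: each module is a
# complete behavior, and the open question is arbitration. The
# behaviors and state are invented for illustration.

import random

def hunt(state):  return "stalking prey"
def sleep(state): return "curling up"
def fight(state): return "baring teeth"

BEHAVIORS = [hunt, sleep, fight]

def act(state):
    # Every behavior always volunteers; nothing in the design says
    # when each applies, so arbitration degenerates into chance.
    return random.choice(BEHAVIORS)(state)

state = {"tired": True, "threatened": True}
print(act(state))  # may well sleep instead of fighting
```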

At its most fundamental, in both forms of AI, an artificial agent is an engineered reproduction of a 'natural' phenomenon and consists of a semi-random collection of rational decision procedures. Both classical and alternative AI use an analytic methodology, a methodology that was described by Marx long before computationally engineering subjectivities became possible: ''the process as a whole is examined objectively, in itself, that is to say, without regard to the question of its execution by human hands, it is analysed into its constituent phases; and the problem, how to execute each detail process, and bind them all into a whole, is solved by the aid of machines, chemistry, &c'' (Marx 380). In AI, one analyzes human behavior without reference to cultural context, then attempts, by analysis, to determine and reproduce the process that generates it. The methodology of both types of AI follows the straight, narrow, and ancient road of objective analysis, with the following formula (rendered schematically in code after the list):

1. Identify a phenomenon in the world to reproduce.

2. Characterize that phenomenon by making a finite list of properties that it has.

3. Reproduce each one of these properties in a rational decision procedure.

4. Put the rational decision procedures together, perhaps under another rational decision procedure, and presume that the original phenomenon results.
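Read as a program skeleton, the formula might look like the following sketch; the phenomenon, its property list, and the decision procedures are all stand-ins.

```python
# The four-step formula read as a program skeleton. The phenomenon,
# its property list, and the decision procedures are all stand-ins.

def reproduce(properties):
    # Step 3: one rational decision procedure per reified property.
    procedures = dict(properties)

    # Step 4: bind the procedures under a master procedure and presume
    # the original phenomenon results.
    def master(situation):
        return {name: proc(situation) for name, proc in procedures.items()}
    return master

# Steps 1-2: pick "intelligence" and characterize it as a finite list.
intelligence = reproduce({
    "perceiving": lambda s: "saw " + s,
    "planning":   lambda s: "made a plan about " + s,
})
print(intelligence("a room"))  # a sum of parts, presumed to be a whole
```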

The hallmarks of objectivity, reification, and exclusion of external context are clear. Through their methodology, both alternative and classical AI betray themselves as, not singularly novel sciences, but only the latest step in the process of industrialization.

In a sense, the mechanical intelligence provided by computers is the quintessential phenomenon of capitalism. To replace human judgement with mechanical judgement - to record and codify the logic by which rational, profit-maximizing decisions are made - manifests the process that distinguishes capitalism: the rationalization and mechanization of productive processes in the pursuit of profit.... The modern world has reached the point where industrialisation is being directed squarely at the human intellect. (Kennedy 6)

This is no surprise, given that AI as an engineering discipline has often been a cozy bedfellow of big business. Engineering and capital are co-articulated; with funding that encourages simple problem statements, clear-cut answers, and quick profit unmitigated by social or cultural concerns, it would in fact be a little surprising if scientists had managed to develop a different outlook. Reificatory methods seem almost inevitable.

But reification and industrialization lead to schizophrenia - the hard lesson of Taylorism. And the methodology of AI seems almost a replication of Taylorist techniques. Taylorists engaged in analyses of workers' behavior that attempted to optimize the physical relation between the worker and the machine. The worker was reduced to a set of functions, each of which was optimized with complete disregard for the psychological state of the worker. Workers were then given orders to behave according to the generated optimal specifications; the result was chaos. Workers' bodies fell apart under the strain of repetitive motion. Workers' minds couldn't take the stress of mind-numbing repetition. Taylorism fell prey to the limits of its own myopic vision.

Taylorism, like AI, demands that not only the process of production but the subject itself become rationalized. ''With the modern 'psychological' analysis of the work-process (in Taylorism) this rational mechanisation extends right into the worker's 'soul': even his psychological attributes are separated from his total personality and placed in opposition to it so as to facilitate their integration into specialized rational systems and their reduction to statistically viable concepts'' (Lukacs 88). This rationalization turns the subject into an incoherent jumble of semi-rationalized processes, since ''not every mental faculty is suppressed by mechanisation; only one faculty (or complex of faculties) is detached from the whole personality and placed in opposition to it, becoming a thing, a commodity'' (Lukacs 99). At this point, faced with the machine, the subject becomes schizophrenic.

And just the same thing happens in AI; a set of faculties is chosen as representative of the desired behavior, is separately rationalized, and is reunited in a parody of holism. It is precisely the reduction of subjectivity to reified faculties or behaviors and the naive identification of the resultant system with subjectivity as a whole that leads to schizophrenia in artificial agents. When it comes to the problem of schizophrenia, the analytic method is at fault.

Schizophrenization and Science

Where does this leave our cyborg? Having traced its schizophrenia to the root, it would seem that the antidote is straightforward: jettison the analytic method, and our patient is cured. However, just as there are times when a patient cannot recover because his/her family needs him/her to be sick, the cyborg cannot recover because its creators cannot give up analysis. The analytic method is not incidental to present AI, something that could be thrown away and replaced with a better methodology, but rather constitutive of it in its current form.

First and foremost, both classical and alternative AI understand themselves as sciences. This means that they demand objectivity of all knowledge production in their domain. For something to be objective, the cultural and contingent conditions of its production must be forgotten; like the capitalist commodity-structure, ''[i]ts basis is that a relation between people takes on the character of a thing and this acquires a 'phantom objectivity,' an autonomy that seems so strictly rational and all-embracing as to conceal every trace of its fundamental nature in the relation between people'' (Lukacs, ''Reification and the Consciousness of the Proletariat'' 83). Science needs reification in order to make its historically accidental products appear ahistorically true.

More specifically, objectivity requires reification as an integral part of scientific methodology. As long as objectivity is to be the goal, the knowing subject must be carefully withheld from the picture. The scientist must narrow the context in which the object is seen to exclude him- or herself, as well as any other factors that are unmeasurable or otherwise elude rationalizing. ''The 'pure' facts of the natural sciences arise when a phenomenon of the real world is placed (in thought or in reality) into an environment where its laws can be inspected without outside interference'' (''Orthodox Marxism'' 6). Objectivity requires simplification, definition, and exclusion; in AI it requires the analytic method.

The analytic method, after all, makes two movements: first, it reduces an observed phenomenon to a formalized ghost of itself; then, it takes that formalized, rationalized object as identical to the observed phenomenon. Formalization requires that one define every object and its limited context in terms of a finite number of strictly identifiable phenomena; it requires reification. This formalism is itself a requirement of objectivity; as the Hungarian cognitive scientist Mero puts it, ''The essence of the belief of science is objectivity, and formalization can be regarded as its inevitable but secondary outgrowth. From this aspect the formalized nature of scientific language is the epiphenomenon of objectivity'' (187). The other part of the analytic method, the identification of science's view of an object with that object, is also necessitated by objectivity. Otherwise, if some part of the phenomenon were allowed to escape, what would be left of science's claims to absolute truth?

Thus, the analytic method is a direct result of AI's investments in science and the concomitant demands of objectivity. And if science inevitably and inexorably leads to schizophrenia, it is precisely because it takes its limited view of the subject for the subject itself. Only allowing for rational, formal knowledge, pure science is always exceeded by the subject, which, appearing as in a broken mirror, seems to be incomprehensibly heterogeneous.

Such a schizophrenia-in-the-eye-of-the-beholder can be liberatory for the subject, in that it allows the subject to move in ways that science can neither understand nor circumscribe. At the same time, scientific knowledge can seduce and/or fool the subject into not believing in any part of itself that exists outside of the scientific framework. Believing in its own schizophrenia, the subject no longer knows it is able to act. It understands itself as defined by increasingly specialized sciences: psychology, biology, sociology, economics, each supplying a limited set of explanations of cause-and-effect that only hold under the microscope and that add up, anywhere else, to nothing but contradiction and confusion. The schizophrenia the subject suffers under is a result of the fact that it is taken as pure object by these sciences, and in this respect the subject's concomitant paralysis is only too convenient: the less the subject struggles, the less likely it is to leave the arena within which it is defined. In this respect, the schizophrenic subject is clearly the context-free product of science.

Beyond Schizophrenia? Towards a New AI

Again, where does this leave our cyborg? Its schizophrenia, far from marking a liberation from rationality, is the symptom of alternative AI's under-the-table return to objectivity. Alternative AI makes an important and laudable move in recognizing schizophrenic subjectivity as part of the domain of AI and in moving beyond conceptions of subjectivity as pure rationality. Its notions of embodiment and of the connection between agents and their environments have the potential to be revolutionary. However, alternative AI does not go far enough in escaping the problems that underlie the desire for rationality. ''Schizophrenia is at once the wall, the breaking through this wall, and the failures of this breakthrough'' (Deleuze and Guattari 136). Alternative AI has reached the point of schizophrenia-as-wall and stopped, instead of continuing to break through.

In particular, ''not going far enough'' means that alternative AI is still invested in many of the traditional notions of epistemological validity and in pure objectivity. Far from abandoning the traditional notions of objectivity, of engineering, and of the agent divorced from its context, alternative AI, and ALife in particular, have shown an, if possible, even stronger commitment to them. The idea of creating subjectivities as an engineering process and of artificially fabricated subjectivity as a form of objective knowledge production is central to ALife as currently practiced. Alternative AI is seen as simply more scientific than classical AI.

Alternativists often recognize, for example, that symbolic programming of the kind classical AI engages in is grounded in culture. They believe that by abandoning symbolic programming they, unlike classicists, have also abandoned the problem of cultural presuppositions creeping into their work. One ALife researcher, for example, simply announces that he is no longer using these presuppositions: ''The term 'agent' is, of course, a favourite of the folk psychological ontology. It consequently carries with it notions of intentionality and purposefulness that we wish to avoid. Here we use the term divested of such associated baggage'' (Smithers 33).

Alternativists believe that, by connecting the agent to a synthetic body and by avoiding the most obviously mentalistic terminology, they have short-circuited the plane of meaning-production, and, hence, are generating pure and scientific knowledge. Rather than being the free-floating, arbitrary signifiers of classical AI, the symbols of alternative AI are 'grounded' in the physical world. Classical AI is considered to be 'cheating' because it does not have the additional 'hard' constraint of working in 'the real world' - a 'real world' which, alternativists fail to recognize, always comes pre-structured.

What is odd about this mania for objectivity is that the hard split between an agent and the environment of its creation, a split which objectivity necessitates, really should have been threatened by the fundamental realization of alternative AI: that agents can only be understood with respect to the environment in which they live and with which they interact, an environment which presumably includes culture. In this light, the only way alternativists maintain objectivity is by leaving glaring gaps in the defined environment where one might normally expect to see an agent's cultural connection. These definitions exclude, for example, the designer of the agent and its audience, both physical and scientific, who are in the position of judging the agentness, schizophrenia, and scientific validity of the created agent. That is, alternative AI fails in realizing its own conception --- at the point where it should recognize its own complicity in the formation of the agent, it instead remains tethered to the same limiting notions of objectivity that classical AI promotes.

At the same time, the difficulty alternative AI has in introducing more radical notions of agenthood, and of what it means to create an agent, has a clear source --- doing so would require changing not only the definition of an agent but some of the most deep-seated assumptions that structure the field, assumptions which define the rules by which knowledge is created and judged. On the surface this seems nearly impossible, since any such change would be judged by the old rules. But at the same time the very schizophrenia which current agents suffer provides a possible catalyst for changing the field. The hook is that even the most jaded alternativists recognize schizophrenia as a technical limitation they would give their eyeteeth to solve. It is the solution of the problem of agent integration by means that go beyond traditional engineering self-limitations, exclusions, and formalizations that will finally allow the introduction of non-objective, non-formalistic methodologies into the formerly pristinely scientific toolbox of AI.

What will these methodologies look like? The most fundamental requirement for the creation of these agents is the jettisoning of the notion of the 'autonomous agent' itself. The autonomous agent is, by definition, supposed to behave without influence from the people who created it or the people who interact with it. Representing the agent as detached from the process that created it short-circuits the relationship between designer and audience, making the role of the agent in its cultural context mysterious. Conveniently, this also allows its creators to distance themselves from the ethical implications of their work. They set the agent up to take the fall.

Instead of these presuppositions, essential for schizophrenizing the agent, I propose a notion of agent-as-interface, where the design of the agent is focused neither on a set of capacities the agent must possess nor on the behaviors it must engage in, but on the interactions the agent can engage in and the signs it can communicate with and to its environment. I propose the following postulates for a new AI, with a schematic sketch of agent-as-interface following them:

1. An agent can only be evaluated with respect to its environment, which includes not only the objects with which it interacts, but also the creators and observers of the agent. Autonomous agents are not 'intelligent' in and of themselves, but rather with reference to a particular system of constitution and evaluation, which includes the explicit and implicit goals of the project creating the agent, the group dynamics of that project, and the sources of funding which both facilitate and circumscribe the directions in which the project can be taken. An agent's construction is not limited to the lines of code that form its program but involves a whole social network, which must be analyzed in order to get a complete picture of what that agent is; without that picture, agents cannot be meaningfully judged.

2. An agent's design should focus, not on the agent itself, but on the dynamics of that agent with respect to its physical and social environments. In classical AI, an agent is designed alone; in alternative AI, it is designed for a physical environment; in a new AI, an agent is designed for a physical, cultural, and social environment, which includes the designer of its architecture, the creator of the agent, and the audience that interacts with and judges the agent, including both the people who engage it and the intellectual peers who judge its epistemological status. The goals of all these people must be explicitly taken into account in deciding what kind of agent to build and how to build it.

3. An agent is, and will always remain, a representation. Artificial agents are a mirror of their creators' understanding of what it means to be at once mechanical and human, intelligent, alive, a subject.

Rather than being a pristine testing-ground for theories of mind, agents come overcoded with cultural values, a rich crossroads where culture and technology intersect and reveal their co-articulation.
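What might these postulates look like as a design skeleton? The following sketch is schematic, and every name in it is hypothetical; the structural point is only that the designer's goals and the audience's judgments appear inside the agent's definition rather than outside it.

```python
# A schematic sketch of agent-as-interface. Every name here is
# hypothetical; the point is that designer goals and audience
# judgments belong to the agent's definition, not its surroundings.

from dataclasses import dataclass, field

@dataclass
class Context:
    designer_goals: list   # explicit goals of the project creating the agent
    audience: list         # those who interact with and judge the agent

@dataclass
class InterfaceAgent:
    context: Context
    history: list = field(default_factory=list)

    def exchange(self, sign):
        # The agent answers a sign with a sign; the exchange itself,
        # not an inner state, is what there is to evaluate.
        response = f"response to {sign!r}, shaped by {self.context.designer_goals}"
        self.history.append((sign, response))
        return response

    def evaluate(self):
        # Evaluation is relative to the whole system of constitution:
        # there is no agent-in-itself to score.
        return [judge(self.history) for judge in self.context.audience]

ctx = Context(designer_goals=["legible, sociable behavior"],
              audience=[lambda history: len(history) > 0])
agent = InterfaceAgent(ctx)
agent.exchange("hello")
print(agent.evaluate())  # judgments by an audience, not an intrinsic score
```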

Under this new AI, agents are no longer schizophrenic precisely because the burden of proof of a larger, self-contradictory system is no longer upon them. Rather than blaming the agent for the faults of its parents, we can understand the agent as one part of a larger system. Rather than trying to create agents that are as autonomous as possible, agents that can erase the grounds of their construction as thoroughly as possible, we understand agents as facilitating particular kinds of interactions between the people who are in contact with them.

Fabricated subjects are fractured subjects, and no injection of straight science will fix them where they are broken. It is time to move beyond scientifically engineering an abstract subjectivity, to hook autonomous agents back into the environments that created them and wish to interact with them. Their schizophrenia is only the symptom of a much deeper problem in AI: it marks the point of failure of AI's reliance on analysis and objectivity. To cure it, we must move beyond agent-as-object to understand the roles agents play in a larger cultural framework.

Acknowledgements

This work was supported by the Office of Naval Research under grant N00014-92-J-1298. I would like to thank Stefan Helmreich and Charles Cunningham for comments on drafts of this paper, and the many kind individuals who typed this paper: Stephanie Byram, Lin Chase, Thorsten Joachims, James Landay, Sue Older, Scott Neill-Reilly, Jim Stichnoth, Nick Thompson, and Peter Weyhrauch.

Works Cited

Brooks, Rodney A. ''Elephants Don't Play Chess.'' In Designing Autonomous Agents: Theory and Practice from Biology to Engineering and Back. Ed. Pattie Maes. Cambridge, MA: MIT Press, 1991. 3-15.

Deleuze, Gilles and Felix Guattari. Anti-Oedipus: Capitalism and Schizophrenia. Trans. Robert Hurley, Mark Seem, and Helen R. Lane. New York: Viking Press, 1977.

Kennedy, Noah. The Industrialization of Intelligence: Mind and Machine in the Modern Age. London: Unwin Hyman, 1989.

Lukacs, Georg. ''Reification and the Consciousness of the Proletariat.'' History and Class Consciousness: Studies in Marxist Dialectics. Trans. Rodney Livingstone. Cambridge, MA: The MIT Press, 1971.

Lukacs, Georg. ''What is Orthodox Marxism?'' History and Class Consciousness: Studies in Marxist Dialectics. Trans. Rodney Livingstone. Cambridge, MA: The MIT Press, 1971. 1-26.

Marx, Karl. Capital: A Critique of Political Economy. Volume I: ''The Process of Capitalist Production.'' Trans. Samuel Moore and Edward Aveling. Ed. Frederick Engels. New York: International Publishers, 1967.

Mero, Laszlo. Ways of Thinking: The Limits of Rational Thought and Artificial Intelligence. Trans. Anna C. Gosi-Greguss. Ed. Viktor Meszaros. New Jersey: World Scientific, 1990.

Newell, Allen. ''The Knowledge Level.'' CMU CS Technical Report CMU-CS-81-131. July, 1981.

Smithers, Tim. ''Taking Eliminative Materialism Seriously: A Methodology for Autonomous Systems Research.'' In Towards a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life. Ed. Francisco J. Varela and Paul Bourgine. Cambridge, MA: MIT Press, 1992. 31-47.

Phoebe Sengers is doing an interdisciplinary Ph.D. in Artificial Intelligence and Cultural Theory at Carnegie Mellon University, Pittsburgh, PA USA. She is interested in understanding computational technologies as culturally situated, and in using that understanding to develop new kinds of technology. Her thesis uses the schizophrenia manifested by complex integrated agents to uncover forgotten traces of their social and cultural situation. She is developing alternative technology based on a recognition of the designer's and audience's goals in constructing artificial subjectivity. Her goal is to develop a technological practice that avoids naive faith in objectivity, as well as a theoretical understanding that goes beyond a knee-jerk rejection of technology.