Published in T. W. Bynum and J. H. Moor, eds., The Digital Phoenix: How Computers are Changing Philosophy, Basil Blackwell, pp. 135-152, 1998.

Philosophical Content and Method of Artificial Life

Mark A. Bedau

Department of Philosophy
Reed College, 3203 SE Woodstock Blvd., Portland OR 97202, USA
Voice: (503) 771-1112, ext. 7337; Fax: (503) 777-7769
Email: mab@reed.edu
Web: http://www.reed.edu/~mab

Abstract

The field of artificial life is enriching both the content and method of philosophy. One example of the impact of artificial life on the content of philosophy is the light it sheds on the perennial philosophical question of the nature of emergent phenomena in general. A second example is the way it highlights and promises to explain the suppleness of mental processes. Artificial life's computational thought experiments also provide philosophy with a methodological innovation. The limitations of the central arguments in Stephen Jay Gould's Wonderful Life and Daniel Dennett's Darwin's Dangerous Idea illustrate the value of this new method.

Keywords: artificial life, life, thought experiment, evolution, emergence, mind, complexity.

Contemporary philosophy has taken an empirical turn, especially in the philosophy of psychology and the philosophy of biology. These same areas of philosophy are now being enriched by a new kind of empirical infusion coming from the field known as "artificial life." Developments in artificial life should interest philosophers for two reasons. One is that artificial life can help resolve a number of philosophical issues. Many of these issues are catalogued by Bedau (1992), and the papers collected by Boden (1996) illustrate work in this vein. But there is also a methodological lesson in artificial life, for its computational thought experiments are a natural tool for philosophers to adopt. After an overview of artificial life, this chapter will show how its content and method can benefit philosophical pursuits.

 

I

Artificial life can be situated within a broad interdisciplinary endeavor devoted to understanding the behavior of complex systems. Examples of this new venture include the science of chaos (e.g., see Crutchfield et al. 1986; for discussion of some philosophical implications, see Stone 1989 and Kellert 1993) and the work spawned by Wolfram's studies of cellular automata (e.g., see the work collected in Wolfram 1994; see also Langton 1992). By abstracting away from the details of chaotic systems (such as ecologies, turbulent fluid flow, and economic markets), chaos science seeks fundamental properties that unify and explain a diverse range of chaotic systems. Similarly, by abstracting away from the details of life-like systems (such as ecologies, immune systems, and autonomously evolving social groups) and synthesizing these processes in artificial media, typically computers, the field of artificial life seeks to understand the essential processes shared by broad classes of life-like systems. Whereas biology's focus is to understand the central mechanisms of life-as-we-know-it, artificial life's interest embraces all of life-as-it-could-be (Langton 1989); in this way artificial life shares philosophy's characteristic concern with broad essences rather than narrow contingencies.

It is useful to contrast artificial life with the well-known analogous field of artificial intelligence (AI). Both devise computationally implemented models, but, whereas AI concerns cognitive processes such as reasoning, memory, and perception, artificial life concerns the processes characteristic of living systems. These processes include:

• spontaneous generation of order, self-organization, and cooperation;

• self-reproduction and metabolization;

• learning, adaptation, and evolution.

Roughly speaking, what AI is to psychology and the philosophy of mind, artificial life is to biology and the philosophy of biology.

Despite these similarities, there is an important difference between the modeling strategies artificial intelligence and artificial life typically employ. Most traditional AI models are top-down-specified serial systems involving a complicated, centralized controller that makes decisions based on access to all aspects of global state. The controller's decisions have the potential to affect directly any aspect of the whole system. On the other hand, most natural systems exhibiting complex autonomous behavior seem to be parallel, distributed networks of low-level communicating "agents." Each agent's decisions are based on information about only the agent's own local state, and its decisions directly affect only its own local situation. Following this lead, artificial life explores forms of emergent global order produced by bottom-up-specified parallel systems of simple local agents. Not only do artificial life models share the bottom-up architecture found in natural systems that exhibit complex autonomous behavior, but the flexible "intelligent" behavior that spontaneously emerges from artificial life models is also strikingly akin to that found in nature.

Thus, artificial life models share some important features with the new connectionist models that have recently revolutionized AI and its philosophical interpretation (Rumelhart and McClelland 1986, Horgan and Tienson 1991). Both use an architecture with a parallel population of autonomous "agents" following simple local rules, and both produce fluid macro-level dynamics. In fact, connectionism is increasingly exploring architectures and algorithms with artificial-life-like features, e.g., recurrent networks and new unsupervised adaptive learning algorithms. But there are important differences between typical artificial life models and the connectionist models that have attracted the most attention in the philosophical community, such as feed-forward networks that learn by the backpropagation algorithm. First, the micro-level architecture of artificial life models is much more general, not necessarily involving multiple layers of nodes with weighted connections adjusted by learning algorithms. Second, artificial life models employ forms of learning and adaptation that are more general than supervised learning algorithms like backpropagation. This allows artificial life models to side-step certain common criticisms of connectionism, such as the unnaturalness of the distinction between "training" and "application" phases and the unnatural appeal to an omniscient "teacher." Third, micro-level nodes in connectionist models typically passively receive and respond to sensory information emanating from an independent external environment. The micro-level agents in artificial life models, by contrast, play an active role in controlling their own sensory input (see, e.g., Parisi, Nolfi, and Cecconi 1992). Finally, the vast bulk of existing connectionist modeling is concerned with equilibrium behavior that settles onto stable attractors. By contrast, artificial life models are typically concerned with a continual, open-ended evolutionary dynamic that never settles onto an attractor in any interesting sense.

Much of the appeal of artificial life models is shared by models in traditional artificial intelligence and connectionism. For one thing, computer models by their nature facilitate the level of abstraction required to pursue an interest in maximally general models of phenomena. In addition, the discipline imposed by expressing a model in feasible computer code entails a salutary precision and clarity and ensures that all hypothesized mechanisms could actually work in the real world. But the distinctive bottom-up architecture of artificial life models creates a special virtue: allowing micro-level entities continually to affect the context of their own behavior introduces an importantly realistic complexity. For example, a population of organisms typically has an active hand in constructing the environment to which it adapts (Bedau 1996b). Because of the network of interactions among organisms, an organism's adaptation to its environment typically changes the intrinsic properties of the external objects in its environment. Nevertheless, in order to ensure analytical tractability, all too many models of organisms within an environment ignore these interactions. For example, Sober's (1994) model of phenotypic plasticity treats an organism's environment as fixed for all time and presumes that each organism confronts its environment in isolation from all other organisms. These simplifying assumptions do make Sober's model amenable to armchair analysis, but at the price of irrealism about the dynamic interactions between organism and environment. Such interactions would imply a population of entities "undergoing a kaleidoscopic array of simultaneous nonlinear interactions", as John Holland puts it (Holland 1992, p. 184), and analytically solvable mathematical models can reveal little about the global effects that emerge from a web of such interactions. The only way to study the effects of these interactions is to do what the field of artificial life does: build bottom-up models and then empirically investigate their emergent global behavior through computer simulations.

Artificial life models routinely do show impressive global phenomena emerging from simple micro-level interactions. Flocking behavior is an especially vivid example. Flocks of birds exhibit impressive macro-level behavior: the flock maintains its cohesion while moving ahead, changing direction, and negotiating obstacles. And these global patterns are achieved without any global control. No individual bird issues flight instructions to the rest of the flock; no central authority is even aware of the global state of the flock. The global behavior is simply the aggregate effect of the microcontingencies of individual bird trajectories. One might attempt to model flocking behavior by using brute force and specifying how each bird's moment-to-moment trajectory is affected by the behavior of every other bird in the flock, i.e., by the global state of the flock. An illustration of this kind of model (in a slightly different context) is the computer animation method used in the Star Wars movies. In Star Wars we see computer animation not of bird flocks but of fleets of futuristic spaceships. Those sequences of computer-animated interstellar warfare between different fleets of spaceships were produced by human programmers carefully scripting each frame, positioning each ship at each moment by reference to its relationship with (potentially) every other ship in the fleet. This brute force modeling approach has two drawbacks. The first is that the behavior of the fleet seems somewhat scripted and unnatural. The second is that, as the size of the fleet grows, the computational expense of the brute force approach mushrooms. Adding even one more ship in principle can require adjusting the behavior of every other ship in the fleet, so the brute force model succumbs to a combinatorial explosion.

It is natural to ask whether natural flocking behavior can be feasibly produced by a model which concerns only the local behavior of simple individual agents. It turns out that in general the only way to answer this question is to try it and see what happens. Fortunately, Craig Reynolds has done this for us with his "Boids" model (1987, 1992). When one views Reynolds's flocking demos, one is vividly struck by how natural the flocking behavior seems. The collection of individual boids spontaneously organizes into a flock that then maintains its cohesion as it moves and changes direction and negotiates obstacles, fluidly flowing through space and time. The flock is a loosely formed group, so loose that individual boids sometimes lose contact with the rest of the flock and fly off on their own, only to rejoin the flock if they come close enough to the flock's sphere of influence. The flock appropriately adjusts its spatial configuration and motion in response to internal and external circumstances. For example, the flock maintains its cohesion as it follows along a wall; also, the flock splits into two subflocks if it runs into a column, and then the two subflocks merge back into one when they have flown past the column.

The Boids model produces these natural, supple flocking dynamics as the emergent aggregate effect of micro-level boid activity. No entity in the Boids model has any information about the global state of the flock, and no entity controls boid trajectories with global state information. No boid issues flight plans to the other boids. No programmer-as-God scripts specific trajectories for individual boids. Instead, each individual boid's behavior is determined by three simple rules that key off a boid's neighbors: seek to maintain a certain minimum distance from nearby boids, seek to match the speed and direction of nearby boids, and seek to steer toward the center of gravity of nearby boids. (In addition, boids seek to avoid colliding with objects in the environment and are subject to the laws of physics.) Aside from the programmer's direct control over a few features of the environment (placement of walls, columns, etc.), the model's explicit dynamics govern only the local behavior of the individual boids. Each boid acts independently in the sense that its behavior is determined solely by following the imperatives of its own internal rules. (Of course, all boids have the same internal rules, but each boid applies the rules in a way that is sensitive to the contingencies of its own immediate environment.) An individual boid's dynamical behavior affects and is affected by only certain local features of its environment: nearby boids and other nearby objects such as walls and columns. The Boids model contains no explicit directions for flock dynamics. The flocking behavior produced by the model consists of the aggregated individual boid trajectories, and the flock's global dynamics emerge out of the individual boids' explicit micro-level dynamics.
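
For concreteness, here is a minimal sketch in Python of how the three rules can be expressed as purely local update rules. The two-dimensional world, the radii, and the rule weights are hypothetical simplifications of my own, and Reynolds's published model also includes the obstacle avoidance and physical constraints omitted here.

    import numpy as np

    # Hypothetical parameters; Reynolds's actual model tunes these differently.
    N, NEIGHBOR_RADIUS, MIN_DIST = 50, 5.0, 1.0
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 50, (N, 2))   # boid positions in a 2-D world
    vel = rng.uniform(-1, 1, (N, 2))   # boid velocities

    def step(pos, vel):
        new_vel = vel.copy()
        for i in range(N):
            dist = np.linalg.norm(pos - pos[i], axis=1)
            nbr = (dist > 0) & (dist < NEIGHBOR_RADIUS)  # only nearby boids matter
            if not nbr.any():
                continue
            # Rule 1 (separation): keep a minimum distance from nearby boids.
            close = nbr & (dist < MIN_DIST)
            sep = (pos[i] - pos[close]).sum(axis=0)
            # Rule 2 (alignment): match the speed and direction of nearby boids.
            ali = vel[nbr].mean(axis=0) - vel[i]
            # Rule 3 (cohesion): steer toward the center of gravity of nearby boids.
            coh = pos[nbr].mean(axis=0) - pos[i]
            new_vel[i] += 0.05 * sep + 0.05 * ali + 0.01 * coh
        return pos + new_vel, new_vel

    # Flocking, if it appears, emerges only over the course of the simulation.
    for _ in range(100):
        pos, vel = step(pos, vel)

Nothing in this code mentions the flock; any flock-level cohesion that appears is an aggregate effect of the three local rules.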

Reynolds's Boids model provides one illustration of how complex phenomena of living systems emerge from simple bottom-up artificial life models. This pattern has many other instances. Consider one more example: the phenomenon of open-ended evolution, one of the hallmarks of living systems (Bedau 1996a). One might speculate about whether a single self-replicating entity by itself could spawn such a process, but a feasible model can decisively cut through this speculation. Tom Ray's (1992) Tierra is such a model. Tierra consists of a population of self-replicating machine language programs that "reside" in computer memory consuming the "resource" CPU time. A Tierran "genotype" consists of a specific type of string of self-replicating machine code, and each Tierran "creature" is a token of a Tierran genotype. A simulation starts when the memory is inoculated with a single self-replicating program, the "ancestor", and then left to run on its own. At first the ancestor and its offspring repeatedly replicate until the available memory space is teeming with creatures which all share the same ancestral genotype. However, since any given machine language creature eventually dies, and since errors (mutations) sometimes occur when a creature replicates, the population of Tierran creatures evolves. Over time the "ecology" of Tierran genotypes becomes remarkably diverse, with the appearance of fitter and fitter genotypes, parasites, and hyper-parasites, among other things.
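
The following toy sketch (in Python, with hypothetical parameters) conveys only the bare dynamic: a single ancestor replicates with occasional copying errors inside a fixed-capacity "memory", and genotype diversity accumulates. It deliberately abstracts away what makes Tierra distinctive: real Tierran genomes are executable machine code, and selection emerges from differences in how fast creatures manage to replicate rather than from any explicit rule.

    import random

    random.seed(0)
    GENOME_LEN, CAPACITY, MUTATION_RATE = 20, 100, 0.01

    def replicate(genome):
        # Copying errors (mutations) occasionally flip an "instruction".
        return tuple(g ^ (random.random() < MUTATION_RATE) for g in genome)

    ancestor = tuple([0] * GENOME_LEN)
    population = [ancestor]                # inoculate memory with one ancestor

    for _ in range(100_000):
        parent = random.choice(population)
        population.append(replicate(parent))
        if len(population) > CAPACITY:     # the "reaper": creatures eventually die
            population.pop(0)

    print(len(set(population)), "distinct genotypes in the final population")

Even this neutral caricature shows the ancestral genotype giving way to a diverse population of variants; Tierra adds the crucial further ingredient that some variants out-replicate others.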

By simulating specific bottom-up models like Reynolds's Boids and Ray's Tierra, the field of artificial life studies how various kinds of global vital phenomena can spontaneously emerge from a web of interactions among simple micro-level agents. By illuminating the minimal conditions sufficient to produce these phenomena, the models help us to understand not only how such phenomena happen in the actual world but also how they could happen in any possible world.

 

II

The philosophical issues affected by artificial life include basic metaphysical questions about fundamental aspects of reality such as life, mind, and emergent phenomena in general; also included are many issues concerning contemporary topics like artificial intelligence and functionalism; and even theoretical ethical questions and matters of practical political policy are affected. Bedau (1992) surveys these issues; this section will illustrate artificial life's influence on the content of philosophical issues by considering two topics: the nature of emergent phenomena, and the significance of the distinctive suppleness of mental phenomena.

Apparent emergent phenomena share two hallmarks: they are somehow constituted by, and generated from, underlying processes, and yet they are also somehow autonomous from those underlying processes. Although these two hallmarks seem inconsistent or, in combination, somehow metaphysically illegitimate, nevertheless there are abundant examples of apparent emergent phenomena, spanning from inanimate self-organization like a tornado to the conscious mental lives embodied in our brains, and including a wealth of phenomena in vital systems. Hence, the perennial puzzle of emergence. A solution to this puzzle would not only explain (or, better, explain away) the appearance of illegitimate metaphysics; it would also show that emergent phenomena are a central and constructive part of our understanding of nature, especially in accounts of those phenomena like life and mind that have always seemed to involve emergence. It is significant, then, that a certain kind of emergent phenomenon, what I elsewhere call weak emergence (Bedau forthcoming-b), plays a central role in artificial life models of vital phenomena.

The paradigmatic case of weak emergence concerns the macrostates of a system that is composed out of micro-level parts, the number and identity of which typically change over time. The macrostates are structural properties constituted wholly out of the microstates, and a dynamical process governs how the microstates change over time. A system's macrostate is weakly emergent, then, just in case it can be derived from the system's external conditions (including its initial conditions) and its micro-level dynamical process but only through the process of simulation. (For further details, see Bedau forthcoming-b.) Weak emergence is certainly characteristic of artificial life's bottom-up models, and it seems to be present in virtually all complex systems, both artificial and natural. This ubiquity is what makes weak emergence especially interesting.
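
A one-dimensional cellular automaton offers a minimal illustration of the definition (the example is mine, not one of Bedau's). The macrostate asked about below, the density of live cells after a few hundred steps, is wholly constituted by the microstates and generated by the microdynamic, yet the only known way to derive it is to run the system forward.

    # Elementary cellular automaton rule 110 as the micro-level dynamical process.
    RULE = 110
    cells = [0] * 100
    cells[50] = 1                          # external (initial) condition

    def step(cells):
        n = len(cells)
        # Each cell's next state depends only on its three-cell neighborhood.
        return [(RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
                for i in range(n)]

    for _ in range(300):                   # the derivation is the simulation itself
        cells = step(cells)

    print("macrostate (live-cell density):", sum(cells) / len(cells))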

Weak emergence differs significantly from the traditional notions of emergence in twentieth-century philosophy. For example, since weakly emergent properties can be derived (via simulation) from complete knowledge of micro-level information, from that same information they can be predicted, at least in principle, with complete certainty. Thus, weak emergence diverges from those conceptions of emergence (e.g., Broad 1925, Pepper 1926, Nagel 1961) that depend on in-principle unpredictability. Of course, weak emergence does still share some of the flavor of these views, since actually making the predictions involves the a posteriori and contingency-ridden process of simulation.

It is worth noting that weak emergence has the two hallmarks of emergent properties listed earlier. It is quite straightforward how weakly emergent phenomena are constituted by, and generated from, underlying processes: the system's macrostates are constituted by its microstates, and the macrostates are generated solely from the system's microstates and microdynamic. At the same time, there is a clear sense in which the behavior of weakly emergent phenomena is autonomous with respect to the underlying processes. When artificial life discovers simple, general macro-level patterns and laws involving weakly emergent phenomena, there is no evident hope of side-stepping a simulation and deriving these patterns and laws from the underlying microdynamic (and external conditions) alone. In general, we can formulate and investigate the basic principles of weakly emergent phenomena only by empirically observing them at the macro-level. In this sense, then, weakly emergent phenomena have an autonomous life at the macro-level. Now, there is nothing inconsistent or metaphysically illegitimate about underlying processes constituting and generating phenomena that can be derived only by simulation. In this way, weak emergence explains away the appearance of metaphysical illegitimacy.

One might object that weak emergence is too weak to be called "emergent", either because it applies so widely or arbitrarily that it does not demarcate an interesting class of phenomena, or because it applies to certain phenomena that are not emergent. But this breadth of instances is no flaw, for weak emergence is not designed to capture an intrinsically interesting property. Most macrostates in complex systems are weakly emergent, and most of them have no special importance. In fact, a central scientific challenge in artificial life and related fields is to identify which emergent macrostates are interesting, i.e., which reflect the fundamental qualities of living systems (Bedau and Packard 1992, Bedau 1995).

Weak emergence is the rule in artificial life; complex macro phenomena are constituted and generated by simple micro dynamics, but the micro phenomena involve such a kaleidoscopic array of non-additive interactions that the macro phenomena can be derived only by means of explicit simulations. The central place of weak emergence in this thriving scientific activity provides substantial evidence that weak emergence is philosophically and scientifically important. It is striking that weak emergence is so prominent in scientific accounts of exactly those especially puzzling phenomena in the natural world, such as those involving life and mind, that perennially generate sympathy for emergence. Can this be an accident?

It is well known that the emergent dynamical patterns among our mental states are especially difficult to describe and explain precisely. Descriptions of these patterns must be qualified by "ceteris paribus" clauses, as the following example (adapted from Horgan and Tienson 1989) illustrates:

Means-ends reasoning: If X wants goal G and X believes that X can achieve G by performing action A, then ceteris paribus X will do A. For example, if X wants a beer and believes that there is one in the kitchen, then X will go get one, unless, as the "ceteris paribus" clause signals, X does not want to miss any of the conversation, or X does not want to offend the speaker by leaving in midsentence, or X does not want to drink beer in front of his mother-in-law, or X thinks he should, instead, flee the house since it is on fire, etc.

A similar open-ended list of exceptions infects descriptions of all analogous mental patterns, for which reason these patterns are sometimes called "soft" (Horgan and Tienson 1990).

There are different kinds of "softness". One kind of softness (emphasized by Fodor 1981, for example) results from malfunctions in the underlying material and processes that implement mental phenomena. Another kind of softness (emphasized by Horgan and Tienson 1989 and 1990) could result from the indeterminate results of competition among a potentially open-ended range of conflicting desires. But there is a third kind of softness, which I will call "suppleness". Suppleness is involved in a distinctive kind of exception to the patterns in our mental lives: specifically, those exceptions that reflect our ability to act appropriately in the face of an open-ended range of contextual contingencies. These exceptions to the norm occur when we make appropriate adjustments to contingencies. The ability to adjust our behavior appropriately in context is a central component of the capacity for intelligent behavior (Varela, Thompson, and Rosch 1991; Parisi, Nolfi, and Cecconi 1992). Since the suppleness of mental dynamics is crucially involved in their very intelligence, any adequate account of the mind must explain this suppleness.

But the suppleness of mental dynamics itself makes them difficult to describe and explain. If we merely employ "ceteris paribus" clauses (or their equivalent) in our description and explanation, then we cannot specify when ceteris is not paribus, when deviation from the norm is appropriate. The "expert systems" of traditional artificial intelligence illustrate an alternative strategy for accounting for the suppleness of mental processes: predigest the circumstances that give rise to exceptions and then specify (either explicitly or through heuristics) how to cope with them in a manner that is precise enough to be expressed as an algorithm. The problem with these expert systems is that they never supplely respond to an open-ended variety of circumstances (see, e.g., Dreyfus 1979, Hofstadter 1985, Holland 1986). Their behavior is "brittle"; it lacks the sensitivity to context distinctive of intelligence. The nature and central role of suppleness in our mental capacities helps explain why the so-called "frame problem" of artificial intelligence is so important and so difficult to solve.

A third strategy for accounting for supple mental dynamics is to follow the lead set by recent work in artificial life, for there is a similar suppleness in vital processes such as metabolism, adaptation, and even flocking. For example, a flock maintains its cohesion not always but only for the most part, only ceteris paribus, for the cohesion can be broken when the flock flies into an obstacle (like a tree). In such a context, the best way to "preserve" the flock might be for the flock to divide into subflocks. Reynolds's Boids model exhibits just this sort of supple flocking behavior. Or consider another example concerning the process of adaptation itself. Successful adaptation depends on the ability to explore an appropriate number of viable evolutionary alternatives; too many or too few can make adaptation difficult or even impossible. In other words, success requires striking a balance between the competing demands for "creativity" (trying new alternatives) and "memory" (retaining what has proved successful). Furthermore, as the context for evolution changes, the appropriate balance between creativity and memory can shift in a way that resists precise and exceptionless formulation. Nevertheless, artificial life models can show a supple flexibility in how they balance creativity and memory (Bedau and Bahm 1994, Bedau and Seymour 1994), as the sketch below indicates. Other artificial life models illustrate other kinds of supple emergent vital dynamics. (For more discussion of these examples, see Bedau forthcoming-a.) Although these examples alone provide no proof that the supple dynamics are due to the models' emergent architecture, the growing body of empirical evidence supports this conclusion.
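
As a rough indication of the mechanism at issue (not a reconstruction of the Bedau and Bahm or Bedau and Seymour models), consider a sketch in which each agent's mutation rate is itself heritable and mutable, so that selection can tune the creativity/memory balance as context demands. The fitness function and all parameters here are hypothetical.

    import random

    random.seed(1)
    TARGET = 0.7                                   # a fixed environmental optimum

    def fitness(trait):
        return -abs(trait - TARGET)

    # Each agent is a (trait, mutation_rate) pair; the rate itself evolves.
    population = [(random.random(), 0.2) for _ in range(100)]

    for generation in range(300):
        population.sort(key=lambda agent: fitness(agent[0]), reverse=True)
        parents = population[: len(population) // 2]   # fitter half reproduces
        offspring = []
        for trait, rate in parents:
            new_trait = trait + random.gauss(0, rate)           # "creativity"
            new_rate = max(1e-4, rate + random.gauss(0, 0.02))  # rate under selection
            offspring.append((new_trait, new_rate))
        population = parents + offspring               # "memory": parents persist

    print("mean evolved mutation rate:",
          sum(rate for _, rate in population) / len(population))

In a fixed environment the evolved rates should fall as the population converges on the optimum, while in a shifting environment higher rates would pay; that context-dependence is exactly the sort of balance that resists exceptionless formulation.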

Artificial life models construe supple dynamics as the emergent macro-level effect of a context-dependent competition for influence in a population of micro-level entities. An analogous model of the mind would construe supple mental dynamics as a macro-level effect which emerges from the aggregate behavior of a micro-level population. In a successful model the emergent macro-level dynamic would correspond well with the supple dynamics of real minds. But let there be no false advertising! My remarks here only begin to suggest what an emergent model of supple mental dynamics would be like. Since the ultimate plausibility of the emergent approach to mental dynamics depends, as one might say, on "putting your model where your mouth is," one might in fairness demand that proponents of this approach start building models. But producing models is not easy; a host of difficult issues must be faced. First, what is the micro-level population from which mental phenomena emerge, and what explicit micro-level dynamics govern it? Second, assuming we have settled on a micro-level population and dynamics, how can we identify the macro-level dynamics of interest? Emergent models generate copious quantities of micro-level information, and this saddles us with a formidable data-reduction problem. Where should we "slice" these data to see relevant patterns? Finally, assuming we have a satisfactory solution to the data-reduction problem, how can we recognize and interpret any patterns that might appear? We must distinguish real patterns from mere artifacts. The patterns will not come pre-labelled; how are we to recognize any supple mental dynamics that might emerge?

The foregoing difficulties are worth confronting, though, for an emergent account of supple mental dynamics would have important virtues. The account would be precise, just as precise as the emergent model itself. Furthermore, the model would produce an increasingly complete description of a precise set of mental dynamics as it was simulated more and more. Though the model might never completely fill out all the details of the supple pattern, additional simulation could generate as much detail as desired. In addition, the model's account of the supple dynamics would be principled, since one and the same model would generate the supple pattern along with all the exceptions that prove the rule. The success of the emergent models in artificial life argues that emergent models of supple mental dynamics are a promising avenue to explore.

 

III

Philosophers have welcomed new kinds of evidence into their discussions. Details about the contingencies of neurophysiology, for example, inform work on the philosophy of mind (e.g., P. S. Churchland 1986, P. M. Churchland 1989), and treatments of reductionism in biology, to pick another example, advert to detailed discoveries of biological science (e.g., Kitcher 1984 and Waters 1990). We now also find artificial life's computer simulations being imported into philosophy. But what is distinctive about artificial life's impact on philosophy is that its computational methodology is such a direct and natural extension of philosophy's traditional methodology of a priori thought experiments. In the attempt to capture the simple essence of vital processes, artificial life models abstract away from most details of natural living systems without pretending to be accurate models of particular features of particular natural systems (Bedau 1995). These are "idea" models for exploring the consequences of certain simple premises. Artificial life simulations are in effect thought experiments, but emergent thought experiments. As with the "armchair" thought experiments familiar in philosophy, artificial life simulations attempt to answer "What if X?" questions. What is distinctive about emergent thought experiments is that what they reveal can be discerned only by simulation; armchair analysis in these contexts is simply inconclusive. Synthesizing emergent thought experiments with a computer is a new technique that philosophers can adapt from artificial life.

It will take some time to learn how and when to use emergent thought experiments. When the context is even a little complex, it is all too easy to fall into the fallacy of assuming that something is (or is not) possible because we think we can (or cannot) imagine how it might happen. We can avoid this fallacy by grounding our speculations on empirical evidence about what actually happens when we synthesize the relevant phenomena in an emergent thought experiment. This section illustrates the need for this new methodology.

The progression of evolution in our biosphere seems to show a remarkable overall increase in complexity, from simple prokaryotic one-celled life to eukaryotic cellular life forms with a nucleus and numerous other cytoplasmic structures, then to life forms composed out of a multiplicity of cells, then to large-bodied vertebrate creatures with sophisticated sensory processing capacities, and ultimately to highly intelligent creatures that use language and develop sophisticated technology. (McShea 1996 is a useful caution about the difficulty of quantitatively verifying this change in complexity.) This evidence is consistent with the hypothesis that open-ended evolutionary processes have an inherent, law-like tendency to create creatures with increasingly complicated functional organization. Just as the arrow of entropy in the second law of thermodynamics asserts that the entropy in all physical systems has a general tendency to increase with time, the hypothesis of the arrow of complexity asserts that the complex functional organization of the most complex products of open-ended evolutionary systems has a general tendency to increase with time.

The fact that the evolution of life is consistent with the arrow of complexity hypothesis does not establish the truth of the hypothesis, of course, and nobody more vigilantly guards against any echo of the idea of evolution as a march of progress than Stephen Jay Gould. His book (1989) on the fossils in the Burgess Shale spectacularly reinterprets the evolution of life as a process devoid of any global progression like an arrow of complexity. The book's central argument is that any evolutionary progression evident in our biosphere is merely a contingent by-product of myriad accidents frozen into the evolutionary record. Although the course of life has an historical explanation, Gould thinks that no deeper, law-like tendency is implicated.

Historical explanations take the form of a narrative: E, the phenomenon to be explained, arose because D came before, preceded by C, B, and A. . . . Thus, E makes sense and can be explained rigorously as the outcome of A through D. But no law of nature enjoined E; any variant E′ arising from an altered set of antecedents, would have been equally explicable, though massively different in form and effect.

I am not speaking of randomness (for E had to arise, as a consequence of A through D), but of the central principle of all history—contingency. (1989, p. 283, emphasis in original)

Gould thinks that the contingency of historical processes like the evolution of the biosphere debars general laws like the hypothesized arrow of complexity. The results of historical processes "do not arise as deducible consequences from any law of nature; they are not even predictable from any general or abstract property of the larger system...." (p. 284). Instead, "almost every interesting event of life's history falls into the realm of contingency" (p. 290).

Gould illustrates his argument with the thought experiment of replaying the tape of life, that is, rewinding the evolutionary process backward in time and then replaying it again forward in time, allowing different accidents, different contingencies, to reshape the evolution of life.

I call this experiment "replaying life's tape." You press the rewind button and, making sure you thoroughly erase everything that actually happened, go back to any time and place in the past—say, to the seas of the Burgess Shale. Then let the tape run again and see if the repetition looks at all like the original. If each replay strongly resembles life's actual pathway, then we must conclude that what really happened pretty much had to occur. But suppose that the experimental versions all yield sensible results strikingly different from the actual history of life? What could we then say about the predictability of self-conscious intelligence? or of mammals? or of vertebrates? or of life on land? or simply of multicellular persistence for 600 million years? (pp. 48-50).

Gould is confident that his thought experiment disproves anything like an arrow of complexity, for "any replay of the tape would lead evolution down a pathway radically different from the road actually taken" (p. 51).

But does Gould's thought experiment really show this? Replaying life's tape is a wonderful "crucial" experiment for testing the arrow of complexity hypothesis, but Gould has no good ground for prognostications about what replaying the tape would show. In fact, he shows no interest in pursuing his thought experiment constructively, though this is exactly the sort of investigation that is pursued in the field of artificial life. With an appropriate computer model of open-ended evolution, you can rerun the tape of life to your heart's content. The detailed course of evolution in each case would depend on the history of accidents unique to each run, but a general pattern unifying these contingencies still might emerge. Judicious analysis of the mass of contingencies collected from repeated simulations of the model would reveal whether an arrow of complexity is lurking, and this would finally provide an appropriate context within which to understand the progressive complexity of the actual biosphere.
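
In outline the experimental protocol is simple, even though designing a model with genuinely open-ended evolution is not. The hedged sketch below uses a deliberately trivial stand-in model (genomes are lists, "complexity" is mere genome length, and every parameter is hypothetical) just to show the shape of the analysis: many replays differing only in their random seed, examined for a trend that survives the contingencies of each run.

    import random, statistics

    def replay(seed, generations=2000):
        """One 'replay of the tape': same rules, different accidents."""
        rng = random.Random(seed)
        population = [[0]] * 50                      # fifty minimal ancestors
        for _ in range(generations):
            parent = rng.choice(population)
            # A random duplication or deletion; no built-in bias toward complexity.
            child = parent + [0] if rng.random() < 0.5 else parent[:-1]
            if child:
                population[rng.randrange(len(population))] = child
        return max(len(genome) for genome in population)

    finals = [replay(seed) for seed in range(30)]    # thirty replays of the tape
    print("final max 'complexity' per replay:", sorted(finals))
    print("mean:", statistics.mean(finals))

Any arrow of complexity worth the name would have to show up as a robust trend across such replays, not as the signature of any single run.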

Make no mistake: nobody has yet actually conducted Gould's thought experiment. In fact, it is not obvious how to do the experiment, because it is unclear how to design a system that exhibits the kind of open-ended evolution characteristic of our biosphere. One of the on-going research efforts in the field of artificial life is the pursuit of exactly this goal. As long as this experiment remains in the future, all guesses about its outcome, including Gould's, will remain inconclusive. Actually conducting the experiment will mark the point at which the discussion moves beyond mere verbal speculation. We can finally discern the global pattern (if any) inherent in the process of open-ended evolution only by creating and empirically observing the relevant emergent thought experiments.

It is hard to avoid the fallacy of putting too much stock in our a priori intuitions when contemplating complex systems. Where Gould assumes that the contingencies of evolution preclude an arrow of complexity, Daniel Dennett (1995) assumes that evolution by natural selection can explain human concerns like mind, language, and morals. But Dennett's assumption is only an article of faith. He never attempts to construct an evolutionary explanation for mind, language, and morality; he never "puts his model where his mouth is" and checks whether natural selection really could explain these phenomena, even in principle. There is little doubt that the explanation of mind, language, and morals, when discovered, will be consistent with natural selection, just as natural selection is consistent with quantum mechanics. But this does not show that natural selection, any more than quantum mechanics, plays an important role in the explanation of mind, language, and morals. When Dennett claims that natural selection does explain them, he is only guessing, just as Hobbes was only guessing when he claimed that they could be explained by corpuscular mechanics. Maybe natural selection can explain them, maybe it can't; we just don't know yet.

The way to show what phenomena natural selection can produce is to use natural selection to synthesize the phenomena in an emergent thought experiment. These thought experiments introduce discipline into the discussion. We can gain confidence that we understand how to explain some phenomenon when we can synthesize it in a plausible model, and when we are unable to do this we reveal our ignorance.

 

IV

The new interdisciplinary field of artificial life has many implications for philosophy. Not only will artificial life influence progress on a host of fundamental philosophical issues, but the field's emergent thought experiments will change the practice of philosophy. Artificial life and philosophy are natural partners. Both seek to understand phenomena at a level of generality that is sufficiently deep to reveal essential natures. Furthermore, the pursuit of this generality drives both to use the method of thought experiments. But while philosophers have traditionally conducted their thought experiments while meditating in their armchairs, the complexity of the situations contemplated in artificial life forces investigators to conduct their thought experiments at the computer. The same can be said with increasing frequency for philosophical thought experiments about issues like emergence, life, and mind. A new level of clarity, precision, and evidence will follow when philosophers adapt artificial life's computational methodology of emergent thought experiments. A constructive test of our understanding of a complex phenomenon is the extent to which we can "put our model where our mouth is" in an emergent thought experiment. Artificial life has been doing this, and philosophy is now starting to follow this example.

 

References

Bedau, Mark. Forthcoming-a. "Emergent Models of Supple Dynamics in Life and Mind." Brain and Cognition.

Bedau, Mark. Forthcoming-b. "Weak Emergence." In James Tomberlin, ed., Philosophical Perspectives, Vol. 11, New York: Basil Blackwell Publishers.

Bedau, Mark. 1996a. "The Nature of Life." In Boden, M., ed., The Philosophy of Artificial Life, Oxford: Oxford University Press, pp. 332-357.

Bedau, Mark. 1996b. "The Extent to which Organisms Construct their Environments." Adaptive Behavior 4: 469-475.

Bedau, Mark. 1995. "Three Illustrations of Artificial Life's Working Hypothesis." In Wolfgang Banzhaf and Frank Eeckman, eds., Evolution and Biocomputation, Berlin: Springer-Verlag, pp. 53-68.

Bedau, Mark. 1992. "Philosophical Aspects of Artificial Life." In F. Varela and P. Bourgine, eds., Towards a Practice of Autonomous Systems, Cambridge: Bradford Books/MIT Press, pp. 494-503.

Bedau, Mark, and Bahm, Alan. 1994. "Bifurcation Structure in Diversity Dynamics." In R. Brooks and P. Maes, eds., Artificial Life IV, Cambridge: Bradford Books/MIT Press, pp. 258-268.

Bedau, Mark, and Seymour, Robert. 1994. "Adaptation of Mutation Rates in a Simple Model of Evolution." In Russel J. Stonier and Xing Huo Yu, eds., Complex Systems: Mechanism of Adaptation, Amsterdam: IOS Press, pp. 37-44.

Bedau, Mark, and Packard, Norman. 1992. "Measurement of Evolutionary Activity, Teleology, and Life." In C. Langton, C. Taylor, D. Farmer, and S. Rasmussen, eds., Artificial Life II, Santa Fe Institute Studies in the Sciences of Complexity, Vol. X, Redwood City: Addison-Wesley, pp. 431-461.

Boden, Margaret, ed. 1996. The Philosophy of Artificial Life, Oxford: Oxford University Press.

Broad, C. D. 1925. The Mind and Its Place in Nature. London: Routledge and Kegan Paul.

Churchland, Patricia Smith. 1986. Neurophilosophy: Toward a Unified Science of the Mind/Brain, Cambridge: Bradford Books/MIT Press.

Churchland, Paul M. 1989. A Neurocomputational Perspective: The Nature of Mind and the Structure of Science, Cambridge: Bradford Books/MIT Press.

Crutchfield, J. P., Farmer, J. D., Packard, N. H., and Shaw, R. S. 1986. "Chaos." Scientific American 255 (December): 46-57.

Dennett, Daniel. 1995. Darwin's Dangerous Idea: Evolution and the Meanings of Life, New York: Simon and Schuster.

Dreyfus, H. 1979. What Computers Can't Do, Second edition, New York: Harper and Row.

Fodor, J. A. 1981. "Special Sciences." In his Representations, Cambridge: Bradford Books/MIT Press.

Gould, Stephen Jay. 1989. Wonderful Life: The Burgess Shale and the Nature of History, New York: Norton.

Hofstadter, D. R. 1985. "Waking Up from the Boolean Dream, or, Subcognition as Computation." In his Metamagical Themas: Questing for the Essence of Mind and Pattern, New York: Basic Books, pp. 631-665.

Holland, J. H. 1986. "Escaping Brittleness: The Possibilities of General-Purpose Learning Algorithms applied to Parallel Rule-Based Systems." In R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, eds., Machine Learning II, Los Altos: Morgan Kaufmann.

Holland, J. H. 1992. Adaptation in Natural and Artificial Systems, 2nd edition, Cambridge: Bradford Books/MIT Press.

Horgan, T., and Tienson, J., eds. 1991. Connectionism and the Philosophy of Mind, Dordrecht: Kluwer Academic.

Horgan, T., and Tienson, J. 1989. "Representation without Rules." Philosophical Topics 17: 147-174.

Horgan, T., and Tienson, J. 1990. "Soft Laws." Midwest Studies in Philosophy 15: 256-279.

Kellert, S. H. 1993. In the Wake of Chaos: Unpredictable Order in Dynamical Systems, Chicago: The University of Chicago Press.

Kitcher, Philip. 1984. "1953 and All That: A Tale of Two Sciences." Philosophical Review 93: 335-373.

Langton, C. 1989. "Artificial Life." In C. Langton, ed., Artificial Life, Redwood City: Addison-Wesley, pp. 1-47.

Langton, C. 1992. "Life at the Edge of Chaos." In C. Langton, C. Taylor, D. Farmer, and S. Rasmussen, eds., Artificial Life II, Santa Fe Institute Studies in the Sciences of Complexity, Vol. X, Redwood City: Addison-Wesley, pp. 41-91.

McShea, Daniel W. 1996. "Metazoan Complexity and Evolution: Is There a Trend?" Evolution 50: 477-492.

Nagel, E. 1961. The Structure of Science, New York: Harcourt, Brace & World.

Parisi, D., Nolfi, S., and Cecconi, F. 1992. "Learning, Behavior, and Evolution." In F. Varela and P. Bourgine, eds., Towards a Practice of Autonomous Systems, Cambridge: Bradford Books/MIT Press, pp. 207-216.

Pepper, S. 1926. "Emergence." Journal of Philosophy 23: 241-245.

Ray, T. 1992. "An Approach to the Synthesis of Life." In C. Langton, C. Taylor, D. Farmer, and S. Rasmussen, eds., Artificial Life II, Santa Fe Institute Studies in the Sciences of Complexity, Vol. X, Redwood City: Addison-Wesley, pp. 371-408.

Reynolds, C. W. 1987. "Flocks, Herds, and Schools: A Distributed Behavioral Model." Computer Graphics 21: 25-34.

Reynolds, C. W. 1992. Boids Demos. In C. Langton, ed., Artificial Life II Video Proceedings, Redwood City: Addison-Wesley, pp. 15-19.

Rumelhart, D. E., and McClelland, J. L. 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 2 Vols., Cambridge: Bradford Books/MIT Press.

Sober, E. 1994. "The Adaptive Advantage of Learning versus A Priori Prejudice." In From a Biological Point of View, Cambridge: Cambridge University Press.

Stone, M. A. 1989. "Chaos, Prediction, and Laplacean Determinism." American Philosophical Quarterly 26: 123-131.

Varela, F., Thompson, E., and Rosch, E. 1991. The Embodied Mind: Cognitive Science and Human Experience, Cambridge: Bradford Books/MIT Press.

Waters, C. Kenneth. 1990. "Why the Anti-reductionist Consensus Won't Survive the Case of Classical Mendelian Genetics." PSA 1990, Vol. 1, Proceedings of the PSA, East Lansing, pp. 125-139.

Wolfram, S. 1994. Cellular Automata and Complexity. Reading: Addison-Wesley.