Emergent Models of Supple Dynamics in Life and Mind
(4 figures missing in this version)
Mark A. Bedau
Reed College, 3203 SE Woodstock Blvd., Portland OR 97202, USA
Voice: (503) 771-1112, ext. 7337; Fax: (503) 777-7769
Email: mab@reed.edu
Web: http://www.reed.edu/~mab
Abstract
The dynamical patterns in mental phenomena have a characteristic suppleness — a looseness or softness that persistently resists precise formulation — which apparently underlies the frame problem of artificial intelligence. This suppleness also undermines contemporary philosophical functionalist attempts to define mental capacities. Living systems display an analogous form of supple dynamics. However, the supple dynamics of living systems have been captured in recent artificial life models, due to the emergent architecture of those models. This suggests that analogous emergent models might be able to explain supple dynamics of mental phenomena. These emergent models of the supple mind, if successful, would refashion the nature of contemporary functionalism in the philosophy of mind.
1. Questions about Dynamics in Mind and Life
Pattern permeates the dynamics of our mental lives. Our beliefs and desires arise, evolve, and decay, for example, in relationship with our experiences, our other mental states, and our behavior, in more or less regular ways. These dynamical patterns are real even though they are not mechanical or exceptionless, even though they hold only ceteris paribus, that is, only if everything else is equal. And we are all at least roughly familiar with the overall shape of these global mental patterns; in fact, their familiarity to ordinary folk has led them to be called a "folk theory" of the mind (Churchland, 1981).
There is no question that these patterns are an important facet of the mind. In fact, functionalism — the dominant position in contemporary philosophy of mind — uses these very patterns to define what it is to have a mind. Still, deep questions about the nature and status of these patterns remain open. I contend in this paper that these patterns are difficult to describe and explain because of a special quality, explained below, which I call "suppleness". One typical sign of this suppleness is that the patterns can be adequately described only by employing "ceteris paribus" clauses or their equivalent. Since functionalism defines the mind in terms of the patterns exhibited by mental phenomena, functionalism needs a way to describe and explain the suppleness of mental dynamics.
Patterns in the dynamics of living systems exhibit a similar sort of suppleness. Thus, suppleness in living systems might help illuminate supple mental dynamics, especially given that recent simulations in the new interdisciplinary science of artificial life provide strikingly plausible explanations of some of life's supple dynamics. Two examples are the dynamics of flocking and of evolution. Reynolds's model of flocking is one of the simplest and most widely known artificial life models, and its supple flocking dynamics are especially vivid. Packard's model of evolution is more complex because the microdynamics underlying it evolves over time, but the supple dynamics in Packard's model are more typical of work in artificial life. Both models are discussed below.
These two models illustrate how the distinctive emergent architecture of artificial life models accounts for the supple dynamics of life. The next section, section 2, introduces Reynolds's model of flocking, explains the key concepts of supple dynamics and emergent models, and then uses Reynolds's model to illustrate both concepts. Section 3 introduces Packard's model of evolving life forms, and this is used to provide a more detailed example of supple dynamics and to show again how an emergent model explains this suppleness. Section 4 shows that mental dynamics exhibits an analogous form of suppleness. The success of emergent models of supple dynamics in artificial life suggests that we might be able to explain the mind's supple dynamics if we could devise models of mental phenomena with an analogous emergent architecture. Section 4 explores what these emergent models of supple mental dynamics would look like. Finally, section 5 shows how we can use this idea of an emergent model of supple mental dynamics to devise an emergent form of functionalism which does justice to the mind's supple dynamics.
2. Reynolds's Emergent Model of Supple Flocking
Flocks of birds exhibit impressive macro-level behavior. One can easily recognize patterns or regularities in global flocking behavior. Collecting and categorizing these regularities of flocking behavior yields a folk theory of flocking, analogous to folk theories of mental dynamics. The most obvious regularity of flocking is simply that the flock exists at all. While the individual birds fly this way and that, at the global level a flock organizes and persists. The flock maintains its cohesion while moving ahead, changing direction, or negotiating obstacles. These global patterns are especially impressive since they are achieved without any global control. No individual bird is issuing flight instructions to the rest of the flock; no central authority is even aware of the global state of the flock. The global behavior is simply the aggregate effect of the microcontingencies of individual bird trajectories.
I want to call attention to one particular feature of global-level flocking regularities — their suppleness. This suppleness is a certain kind of fluidity or softness in the regularities. For example, flocks maintain their cohesion not always but only for the most part, only ceteris paribus. The fact that we need to use "ceteris paribus" clauses or their equivalent to describe these regularities is one clue that the regularities are supple. The suppleness of the flock cohesion regularity is associated with the kind of exceptions that the regularity has. Sometimes the flock cannot maintain its cohesion because the wind is too strong (or predators are too plentiful, or the birds are too hungry, etc.). Other times the flock cohesion is broken because the flock flies into an obstacle (like a tree) and splits into two subflocks. Such flock splitting especially reveals flocking's suppleness, for flock splitting is an exception that proves the rule that flocks maintain their cohesion. Under the circumstances, the best way for the birds to serve the underlying purposes of flocking is for them to split into two subflocks, each of which then preserves its own cohesion. Thus, in these circumstances, splitting actually reflects and serves the underlying goals that lead to the flock cohesion rule, while slavishly preserving the flock's cohesion would have violated those goals.
In general, supple regularities share two features. First, the regularities have exceptions; they cannot be expressed as precise and exceptionless regularities. This open texture is often reflected in formulations of supple regularities by the use of "ceteris paribus" clauses (or some similar phrase). Second, some of the regularity's exceptions prove the rule; that is, they are appropriate in the context since they achieve the system's underlying goals better than slavishly following the rule would have, and they occur because they are appropriate. These exceptions that prove the rule reflect an underlying capacity to respond appropriately to an unpredictable variety of contingencies.
That flocking consists of supple dynamical regularities is obvious enough once you look for it. It is interesting not because it is surprising but because of its implications for how to model flocking. Consider first what I call "brute force" models of flocking. In a brute force flocking model, each bird's moment-to-moment trajectory in principle is affected by the behavior of every other bird in the flock, i.e., by the global state of the flock. An illustration of this kind of model (in a slightly different context) is the computer animation method used in the Star Wars movies. In Star Wars we see computer animation not of bird flocks but of fleets of futuristic spaceships. Those sequences of computer animated interstellar warfare between different fleets of spaceships were produced by human programmers carefully scripting each frame, positioning each ship at each moment by reference to its relationship with (potentially) every other ship in the fleet. In other words, the programmer, acting like a God, is omniscient and omnipotent about the fleet's global state and uses this information to navigate each ship.
This brute force modelling approach has two important consequences. The first is that the behavior of the fleet seems a bit rigid or scripted; it does not look entirely natural to the eye. This effect is not surprising, since producing natural fleet behavior requires the programmer-as-God to properly anticipate the contingent effects of minute adjustments in individual ship trajectories. In principle, the programmer can make the fleet behavior incrementally more natural by adjusting individual trajectories; in practice, the programming time required grows prohibitively. (I have heard that the computer animation sequences in Star Wars were the most expensive minutes of film ever produced.) Second, if the size of the fleet grows, the computational expense of the brute force modelling approach again grows prohibitively. Adding even one more ship in principle can require adjusting the behavior of every other ship in the fleet. In other words, the brute force model succumbs to a combinatorial explosion, and so is not feasibly computable.
Brute force models contrast with what I will call "emergent" models, which are nicely illustrated by Reynolds's (1987, 1992) celebrated model of flocking "boids". When one views Reynolds's flocking demos, one is vividly struck by how natural the flocking behavior seems. The boids spontaneously organize into a flock that then maintains its cohesion as it moves and changes direction and negotiates obstacles, fluidly flowing through space and time. The flock is a loosely formed group, so loose that individual boids sometimes lose contact with the rest of the flock and fly off on their own, only to rejoin the flock if they come close enough to the flock's sphere of influence. The flock appropriately adjusts its spatial configuration and motion in response to internal and external circumstances. For example, the flock maintains its cohesion as it follows along a wall; also, the flock splits into two subflocks if it runs into a column, and then the two subflocks will merge back into one when they have flown past the column. These dynamical flocking regularities are supple in the sense that their precise form varies in response to contextual contingencies (the angle of the wall, the shape and distribution of the columns, etc.) so that the flock automatically adjusts its behavior in a way that is appropriate given these changing circumstances.
The boids model produces these natural, supple flocking dynamics as the emergent aggregate effect of micro-level boid activity. No entity in the boids model has any information about the global state of the flock, and no entity controls boid trajectories with global state information. No boid issues flight plans to the other boids. No programmer-as-God scripts specific trajectories for individual boids. Instead, each individual boid's behavior is determined by three simple rules that key off of a boid's neighbors: seek to maintain a certain minimum distance from nearby boids, seek to match the speed and direction of nearby boids, and seek to steer toward the center of gravity of nearby boids. (In addition, boids seek to avoid colliding with objects in the environment and are subject to the laws of physics.)
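To make the purely local character of these rules concrete, here is a minimal sketch of a single boid's velocity update, written in Python. It is not Reynolds's code: the neighborhood radius, the rule weights, and the vector bookkeeping are illustrative assumptions, and obstacle avoidance and the physics are omitted.

    import numpy as np

    def boid_step(positions, velocities, i, radius=5.0, min_dist=1.0,
                  w_sep=1.5, w_align=1.0, w_coh=1.0):
        """One velocity update for boid i, using only local information (hypothetical weights)."""
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists < radius) & (dists > 0)     # nearby boids, excluding boid i itself
        if not neighbors.any():
            return velocities[i]                       # no nearby boids: keep the current heading
        # Rule 1: maintain a minimum distance from nearby boids (steer away from the closest).
        too_close = neighbors & (dists < min_dist)
        separation = -offsets[too_close].sum(axis=0) if too_close.any() else 0.0
        # Rule 2: match the speed and direction of nearby boids.
        alignment = velocities[neighbors].mean(axis=0) - velocities[i]
        # Rule 3: steer toward the center of gravity of nearby boids.
        cohesion = positions[neighbors].mean(axis=0) - positions[i]
        return velocities[i] + w_sep * separation + w_align * alignment + w_coh * cohesion

The point to notice is that nothing in this update refers to the flock as a whole; whatever flock-level regularities appear, supple or otherwise, can show up only in the aggregate of many such local updates.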
In order to appreciate in what sense the boids model is emergent, note that it consists of a micro-level and a macro-level. I should stress that I am using "micro" and "macro" in a generalized sense. Micro-level entities need not be literally microscopic; birds are not. "Micro" and "macro" are relative terms; an entity exists at a micro-level relative to a macro-level population of similar micro-level entities. These levels can be nested. Relative to a flock, an individual bird is a micro-level entity; but an individual bird is a macro-level object relative to the micro-level genetic elements (say) that determine the bird's behavioral proclivities.
The boids model is emergent, in the sense intended here, because of the way in which it generates complex macro-level dynamics from simple micro-level mechanisms. This form of emergence arises in contexts in which there is a system, call it S, composed out of "micro-level" parts. The number and identity of these parts might change over time. S has various "macro-level" states (macrostates) and various "micro-level" states (microstates). S's microstates are the states of its parts. S's macrostates are structural properties constituted wholly out of microstates; macrostates typically are various kinds of statistical averages over microstates. Further, there is a relatively simple and implementable microdynamic, call it D, which governs the time evolution of S's microstates. In general, the microstate of a given part of the system at a given time is a result of the microstates of "nearby" parts of the system at preceding times. Given these assumptions, I will say that a macrostate P of system S with microdynamic D is emergent if and only if P (of system S) can be explained from D, given complete knowledge of external conditions, but P can be predicted (with complete certainty) from D only by simulating D, even given complete knowledge of external conditions. So, we can say that a model is emergent if and only if its macrostates are emergent in the sense just defined.
Although this is not the occasion to develop and defend this concept of emergence (see Bedau, 1996a), I should clarify three things. First, "external conditions" are conditions affecting the system's microstates that are extraneous to the system itself and its microdynamic. One kind of external condition is the system's initial condition. If the system is open, then another kind of external condition is the contingencies of the flux of parts and states into S. If the microdynamic is nondeterministic, then each nondeterministic effect is another external condition.
Second, given the system's initial condition and other external conditions, the microdynamic completely determines each successive microstate of the system. And the macrostate P is a structural property constituted out of the system's microstates. Thus, the external conditions and the microdynamic completely determine whether or not P obtains. In this specific sense, the microdynamic plus the external conditions "explain" P. One must not expect too much from these explanations. For one thing, the explanation depends on the massive contingencies in the initial conditions. It is awash with accidental information about S's parts. Furthermore, the explanation might be too detailed for anyone to "survey" or "grasp". It might even obscure a simpler, macro-level explanation that unifies systems with different external conditions and different microdynamics. Nevertheless, since the microdynamic and external conditions determine P, they explain P.
Third, in principle we can always predict S's behavior with complete certainty, for given the microdynamic and external conditions we can always simulate S as accurately as we want. Thus, the issue is not whether S's behavior is predictable — it is, trivially — but whether we can predict S's behavior only by simulating S. When trying to predict a system's emergent behavior, in general one has no choice but simulation. This notion of predictability only through simulation is not anthropocentric; nor is it a product of some specifically human cognitive limitation. Even a Laplacian supercalculator would need to observe simulations to discover a system's emergent macrostates.
In the case of the boids model, individual boids are micro-level entities, and a boid flock is a macro-level entity constituted wholly by an aggregate of micro-level boids. Aside from the programmer's direct control over a few features of the environment (placement of walls, columns, etc.), the model's explicit dynamics govern only the local behavior of the individual boids; the explicit model is solely microdynamical. Each boid acts independently in the sense that its behavior is determined solely by following the imperatives of its own internal rules. (Of course, all boids have the same internal rules, but each boid applies the rules in a way that is sensitive to the contingencies of its own immediate environment.) An individual boid's dynamical behavior affects and is affected by only certain local features of its environment — nearby boids and other nearby objects such as walls and columns. The boids model contains no explicit directions for flock dynamics. The flock behavior consists of the aggregated individual boid trajectories, and the flock's implicit macro-level dynamics are constituted out of the boids' explicit micro-level dynamics. The flock dynamic is emergent in our sense because, although it is constituted solely by the micro-level dynamics, it can be studied and understood in detail only empirically, through simulations.
3. Packard's Emergent Model of Supple Adaptation
Evolving life forms display various macro-level patterns on an evolutionary time scale. For example, advantageous traits that arise through mutations tend, ceteris paribus, to persist and spread through the population. Furthermore, organisms' traits tend, within limits and ceteris paribus, to adapt to changing environmental contingencies. Of course, these patterns are not precise and exceptionless universal generalizations; they are vague generalities that hold only for the most part. Some of this vagueness is due to context-dependent fluctuations in what is appropriate. In those cases, the macro-level evolutionary dynamics are supple, in the sense intended here. These sorts of supple dynamics of adaptation result not from any explicit macro-level control (e.g., God does not adjust allele frequencies so that creatures are well adapted to their environment); rather, they emerge statistically from the micro-level contingencies of natural selection.
Norman Packard devised a simple model of evolving sensorimotor agents which demonstrates how these sorts of supple, macro-level evolutionary dynamics can emerge implicitly from an explicit microdynamical model (Packard, 1989; Bedau and Packard, 1992; Bedau, Ronneburg, and Zwick, 1992; Bedau and Bahm, 1994; Bedau, 1994; Bedau and Seymour, 1994; Bedau, 1995). What motivates this model is the view that evolving life is typified by a population of agents whose continued existence depends on their sensorimotor functionality, i.e., their success at using local sensory information to direct their actions in such a way that they can find and process the resources they need to survive and flourish. Thus, information processing and resource processing are the two internal processes that dominate the agents' lives, and their primary goal — whether they know this or not — is to enhance their sensorimotor functionality by coordinating these internal processes. Since the requirements of sensorimotor functionality may well alter as the context of evolution changes, continued viability and vitality requires that sensorimotor functionality can adapt in an open-ended, autonomous fashion. Packard's model attempts to capture an especially simple form of this open-ended, autonomous evolutionary adaptation.
The model consists of a finite two-dimensional world with a resource field and a population of agents. An agent's survival and reproduction are determined by the extent to which it finds enough resources to stay alive and reproduce, and an agent's ability to find resources depends on its sensorimotor functionality — that is, the way in which the agent's perception of its contingent local environment affects its behavior in that environment. An agent's sensorimotor functionality is encoded in a set of genes, and these genes can mutate when an agent reproduces. Thus, on an evolutionary time scale, the process of natural selection implicitly adapts the population's sensorimotor strategies to the environment. Furthermore, the agents' actions change the environment because agents consume resources and collide with each other. This entails that the mixture of sensorimotor strategies in the population at a given moment is a significant component of the environment that affects the subsequent evolution of those strategies. Thus, the "fitness function" in Packard's model — what it takes to survive and reproduce — is constantly buffeted by the contingencies of natural selection and unpredictably changes (Packard, 1989).
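The following toy update loop indicates the kind of micro-level bookkeeping that this description implies. It is a sketch under stated assumptions, not Packard's implementation: the sensory coding, the thresholds, and the data structures are invented for illustration, and resource replenishment and agent collisions are left out.

    import random
    from dataclasses import dataclass, field

    MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]         # the four local moves
    SIZE = 32                                          # width of the toy two-dimensional world

    @dataclass
    class Agent:
        position: tuple
        resources: float
        # Sensorimotor strategy: a genetically encoded map from local sensory states to moves.
        strategy: dict = field(default_factory=lambda: {s: random.choice(MOVES) for s in range(4)})

    def step(resource_field, agents, mutation_rate=0.005, birth_level=100.0):
        """One hypothetical time step: local sensing, acting, eating, dying, and reproducing."""
        for agent in list(agents):
            x, y = agent.position
            # Sense only the local neighborhood: which adjacent cell is richest?
            sensed = max(range(4), key=lambda k: resource_field.get(
                ((x + MOVES[k][0]) % SIZE, (y + MOVES[k][1]) % SIZE), 0.0))
            dx, dy = agent.strategy[sensed]            # the genes determine the local action
            agent.position = ((x + dx) % SIZE, (y + dy) % SIZE)
            agent.resources += resource_field.pop(agent.position, 0.0) - 1.0  # eat, pay a metabolic cost
            if agent.resources <= 0:                   # starvation means death
                agents.remove(agent)
            elif agent.resources >= birth_level:       # sufficient stored resources means reproduction
                child = Agent(agent.position, birth_level / 2, dict(agent.strategy))
                agent.resources = birth_level / 2
                for s in child.strategy:               # sensorimotor genes mutate during reproduction
                    if random.random() < mutation_rate:
                        child.strategy[s] = random.choice(MOVES)
                agents.append(child)

Natural selection appears nowhere in this loop as an explicit instruction; whatever adaptation the population displays at the macro-level must emerge from the differential survival and reproduction that this local bookkeeping produces.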
All macro-level evolutionary dynamics produced by this model ultimately are the result of an explicit microdynamic acting on external conditions. The model explicitly controls only local micro-level states: resources are locally replenished, an agent's genetically encoded sensorimotor strategy determines its local behavior, an agent's behavior in its local environment determines its internal resource level, an agent's internal resource level determines whether it survives and reproduces, and genes randomly mutate during reproduction. Each agent is autonomous in the sense that its behavior is determined solely by the environmentally-sensitive dictates of its own sensorimotor strategy. On an evolutionary time scale these sensorimotor strategies are continually refashioned by the historical contingencies of natural selection. The aggregate long-term behavior of this microdynamic generates macro-level evolutionary dynamics only as the indirect product of an unpredictably shifting agglomeration of directly controlled micro-level events (individual actions, births, deaths, mutations). Many of these evolutionary dynamics are emergent; although constituted and generated solely by the micro-level dynamic, they can be derived only through simulations. I will illustrate these emergent dynamics with some recent work concerning the evolution of evolvability (Bedau and Seymour, 1994).
The ability to adapt successfully depends on the availability of viable evolutionary alternatives. An appropriate quantity of alternatives can make evolution easy; too many or too few can make evolution difficult or even impossible. For example, in Packard's model, the population can evolve better sensorimotor strategies only if it can "test" sufficiently many sufficiently novel strategies; in short, the system needs a capacity for evolutionary "innovation." At the same time, the population's sensorimotor strategies can adapt to a given environment only if strategies that prove beneficial can persist in the gene pool; in short, the system needs a capacity for evolutionary "memory."
Perhaps the simplest mechanism that simultaneously affects both memory and innovation is the mutation rate. The lower the mutation rate, the greater the number of genetic strategies "remembered" from parents. At the same time, the higher the mutation rate, the greater the number of "innovative" genetic strategies introduced with children. Successful adaptability requires that these competing demands for memory and innovation be suitably balanced. Too much mutation (not enough memory) will continually flood the population with new random strategies; too little mutation (not enough innovation) will tend to freeze the population at arbitrary strategies. Successful evolutionary adaptation requires a mutation rate suitably intermediate between these extremes. Furthermore, a suitably balanced mutation rate might not remain fixed, for the balance point could shift as the context of evolution changes.
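A back-of-the-envelope calculation makes this trade-off concrete (my gloss, not part of the model's definition): if an agent has L sensorimotor genes and each mutates independently with probability m when the agent reproduces, then on average the child

    retains ("remembers") (1 - m) * L of its parent's genes and receives m * L "innovations",

so lowering m buys memory at the price of innovation, and raising m does the reverse; the question is where between the two extremes the exchange is best struck.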
One would think, then, that any evolutionary process that could continually support evolving life must have the capacity to adapt automatically to this shifting balance of memory and innovation. So, in the context of Packard's model, it is natural to ask whether the mutation rate that governs first-order evolution could adapt appropriately by means of a second-order process of evolution. If the mutation rate can adapt in this way, then this model would yield a simple form of the evolution of evolvability and, thus, might illuminate one of life's fundamental prerequisites.
Previous work (Bedau and Bahm, 1994) with fixed mutation rates in Packard's model revealed two robust effects. The first effect was that the mutation rate governs a phase transition between genetically "ordered" and genetically "disordered" systems. When the mutation rate is too far below the phase transition, the whole gene pool tends to remain "frozen" at a given strategy; when the mutation rate is significantly above the phase transition, the gene pool tends to be a continually changing plethora of randomly related strategies. The phase transition itself occurs over a critical band in the spectrum of mutation rates, m, roughly in the range 10^-3 <= m <= 10^-2. The second effect was that evolution produces maximal population fitness when mutation rates are around values just below this transition. Apparently, evolutionary adaptation happens best when the gene pool tends to be "ordered" but just on the verge of becoming "disordered."
In the light of our earlier suppositions about balancing the demands for memory and innovation, the two fixed-mutation-rate effects suggest the balance hypothesis that the mutation rates around the critical transition between genetic "order" and "disorder" optimally balance the competing evolutionary demands for memory and innovation. We can shed some light on the balance hypothesis by modifying Packard's model so that each agent has an additional gene encoding its personal mutation rate. In this case, two kinds of mutation play a role when an agent reproduces: (i) the child inherits its parent's sensorimotor genes, which mutate at a rate controlled by the parent's personal (genetically encoded) mutation rate; and (ii) the child inherits its parent's mutation rate gene, which mutates at a rate controlled by a population-wide meta-mutation rate. Thus, first-order (sensorimotor) and second-order (mutation rate) evolution happen simultaneously. So, if the balance hypothesis is right and mutation rates at the critical transition produce optimal conditions for sensorimotor evolution because they optimally balance memory and innovation, then we would expect second-order evolution to drive mutation rates into the critical transition. It turns out that this is exactly what happens.
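The two-level mutation scheme itself can be stated compactly; the sketch below continues the illustrative style used earlier, and only the first-order/second-order structure is taken from the description above (the multiplicative perturbation of the rate gene is an assumption about representation, not part of the model's definition).

    import random

    def mutate_genomes(parent_strategy, parent_rate, meta_mutation_rate, moves):
        """Hypothetical two-level mutation at reproduction; returns the child's genes."""
        child_strategy = dict(parent_strategy)
        # (i) Sensorimotor genes mutate at the parent's own, genetically encoded rate.
        for locus in child_strategy:
            if random.random() < parent_rate:
                child_strategy[locus] = random.choice(moves)
        # (ii) The mutation-rate gene mutates at the population-wide meta-mutation rate.
        child_rate = parent_rate
        if random.random() < meta_mutation_rate:
            child_rate = min(1.0, max(1e-6, parent_rate * random.uniform(0.5, 2.0)))
        return child_strategy, child_rate

A rate gene subject only to this kind of mutation merely wanders; any systematic movement of the rates, such as the drop into the critical region described below, therefore has to come from second-order natural selection, which is why that movement bears on the balance hypothesis.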
Figure 1 shows four examples of how the distribution of mutation rates in the population changes over time under different conditions. As a control, distributions (a) and (b) show what happens when the mutation rate genes are allowed to drift randomly: the bulk of the distribution wanders aimlessly. By contrast, distributions (c) and (d) illustrate what happens when natural selection affects the mutation rate genes: the mutation rates drop dramatically. The meta-mutation rate is lower in (a) than in (b) and so, as would be expected, distribution (a) is narrower and changes more slowly. Similarly, the meta-mutation rate is lower in (c) than in (d), which explains why distribution (c) is narrower and drops more slowly.
*** Figure 1 about here ***

If we examine lots of simulations and collect suitable macrostate information, we notice the pattern predicted by the balance hypothesis: Second-order evolution tends to drive mutation rates down to the transition from genetic disorder to genetic order, increasing population fitness in the process. This pattern is illustrated in Figure 2, which shows time series data from a typical simulation. The macrostates depicted in Figure 2 are (from top to bottom): (i) the mutation rate distribution, as in Figure 1; (ii) a blow up of the mutation rate distribution that allows us to distinguish very small mutation rates (bins decrease in size by a factor of ten, e.g., the top bin shows mutation rates between 10^0 and 10^-1, the next bin down shows mutation rates between 10^-1 and 10^-2, etc.); (iii) the mean mutation rate (note the log scale); (iv) the uningested resources in the environment; (v) three aspects of the genetic diversity in the population's sensorimotor strategies; and (vi) the population level.
*** Figure 2 about here ***

The composite picture provided by Figure 2 can be crudely divided into three epochs: an initial period of (relatively) high mutation rates, during the time period 0 – 20,000; a transitional period of falling mutation rates, during the time period 20,000 – 40,000; and a final period of relatively low mutation rates, throughout the rest of the simulation. The top three time series are different perspectives on the falling mutation rates, showing that the mutation rates adapt downwards until they cluster around the critical transition region, 10^-3 <= m <= 10^-2. Since resources flow into the model at a constant rate and since survival and reproduction consume resources, the uningested resource inversely reflects the population fitness. We see that the population becomes more fit (i.e., more efficiently gathers resources) at the same time as the mutation rates drop. Although this is not the occasion to review the different ways to measure the diversity of the sensorimotor strategies in the population, we can easily recognize that there is a significant qualitative difference between the diversity dynamics in the initial and final epochs. In fact, these qualitative differences are characteristic of precisely the difference between a "disordered" gene pool of randomly related strategies and a gene pool that is at or slightly below the transition between genetic order and disorder (see Bedau and Bahm, 1994; Bedau 1995).
If the balance hypothesis is the correct explanation of this second-order evolution of mutation rates into the critical transition, then we should be able to change the mean mutation rate by dramatically changing where memory and innovation are balanced. And, in fact, the mutation rate does rise and fall along with the demands for evolutionary innovation. For example, when we randomize the values of all the sensorimotor genes in the entire population so that every agent immediately "forgets" all the genetically stored information learned by its genetic lineage over its entire evolutionary history, the population must restart its evolutionary learning job from scratch. It has no immediate need for memory (the gene pool contains no information of proven value); instead, the need for innovation is paramount. Under these conditions, we regularly observe the striking changes illustrated around timestep 333,333 in Figure 3. The initial segment (timesteps 0 – 100,000) in Figure 3 shows a mutation distribution evolving into the critical mutation region, just as in Figure 2 (but note that the time scale in Figure 3 is compressed by a factor of five). But at timestep 333,333 an external "act of God" randomly scrambles all sensorimotor genes of all living organisms. At just this point we can note the following sequence of events: (a) the residual resource in the environment sharply rises, showing that the population has become much less fit; (b) immediately after the fitness drop the mean mutation rate dramatically rises as the mutation rate distribution shifts upwards; (c) by the time that the mean mutation rate has risen to its highest point the population's fitness has substantially improved; (d) the fitness levels and mutation rates eventually return to their previous equilibrium levels.
*** Figure 3 about here ***

All of these simulations show the dynamics of the mutation rate distribution adjusting up and down as the balance hypothesis would predict. Temporarily perturbing the context for evolution can increase the need for rapid exploration of a wide variety of sensorimotor strategies and thus dramatically shift the balance towards the need for innovation. Then, subsequent sensorimotor evolution can reshape the context for evolution in such a way that the balance shifts back towards the need for memory. This all suggests that, ceteris paribus, mutation rates adapt so as to balance appropriately the competing evolutionary demands for memory and innovation, and that, ceteris paribus, this balance point is at the genetic transition from order to disorder. An indefinite variety of environmental contingencies can shift the point at which the evolutionary need for memory and innovation are balanced, and the perturbation experiments show how mutation rates can adapt up or down as appropriate. This supple flexibility in the dynamics of the evolution of evolvability is the deep reason why the principle that, on the whole, mutation rates adapt as appropriate will resist any precise and exceptionless formulation.
This sort of supple adaptability in Packard's model can be counted among the hallmarks of life in general (Maynard Smith 1975, Cairns-Smith 1985, Bedau 1996b). And, clearly, these evolutionary dynamics are emergent. The model's macro-level dynamic is wholly constituted and generated by its micro-level phenomena, but the micro-level phenomena involve such a kaleidoscopic array of non-additive interactions that the macro-level dynamics cannot be derived from micro-level information except by means of simulations, like those shown above. In a similar fashion, many other characteristic features of living systems can be captured as emergent phenomena in artificial life models; see, e.g., Farmer et al. (1986), Langton (1989b), Langton et al. (1992), Varela and Bourgine (1992), and Brooks and Maes (1994). In every case, supple macro-level dynamics emerge from, and are explained by, an explicit micro-level dynamics in which a parallel, distributed network of communicating agents make decisions about how to behave in their local environment based on selective information from their local environment. This growing empirical evidence continually reinforces the conclusion that the models' emergent architecture is responsible for the supple dynamics. An open field of empirical investigation in artificial life is to pin down more precisely exactly which features of emergent models are responsible for which aspects of supple emergent dynamics.
4. Emergent Models of the Mind's Supple Dynamics
The readily observable regularities and patterns in our mental lives have been termed "folk psychology" (e.g., Churchland, 1981). It has long been known that the global regularities of folk psychology must be qualified by "ceteris paribus" clauses. Consider two typical principles, adapted from Horgan and Tienson (1989), which, even though they are extremely simplified, can illustrate this phenomenon:
Means-ends reasoning: If X wants goal G and X believes that X can get G by performing action A, then ceteris paribus X will do A. For example, if X wants a beer and believes that there is one in the kitchen, then X will go get one — unless, as the "ceteris paribus" clause signals, X does not want to miss any of the conversation, or X does not want to offend the speaker by leaving in midsentence, or X does not want to drink beer in front of his mother-in-law, or X thinks he should, instead, flee the house since it is on fire, etc.

Belief extension (modus ponens): If X believes P and X believes P entails Q, then ceteris paribus X will come to believe Q. But people sometimes fail to infer what is implied by their antecedent beliefs, for a variety of reasons. Lack of attention or illogic is sometimes at work. But some exceptions to the psychological principle of modus ponens reflect attentive logical acumen at its best. For example, if X has antecedent reason to doubt that Q is true, X might conclude that it is more reasonable to question P or to question that P entails Q.
The "ceteris paribus" clauses signal that these patterns in our mental lives have exceptions. This open-ended range of exceptions is ubiquitous in the patterns of our mind. Indefinitely many more examples like those two above can be generated. Further, this open-ended texture seems ineliminable; it apparently cannot be captured by any long but finite list of exceptions.
The exceptions to the principles of folk psychology come from different sources. Some signal malfunctions in the underlying material and processes that implement the mental processes (e.g., Fodor, 1981). Others result from the indeterminate results of competition among a potentially open-ended range of conflicting desires (e.g., Horgan and Tienson, 1989, 1990). But certain exceptions reflect our ability to act appropriately in the face of an open-ended range of contextual contingencies. These are exceptions that prove the rule. The ability to figure out how to act appropriately in context is an important part of the power of our mind; it is the very essence of intelligence (Beer, 1990; Varela, Thompson, and Rosch, 1991; Parisi, Nolfi, and Cecconi, 1992; Cliff, Harvey, and Husbands, 1993; Steels, 1994). Since life can call for us to cope with an open-ended range of novel challenges, it should be no surprise if the dynamical patterns of mind resist precise and exceptionless formulation. Our mental dynamics, thus, exhibits a form of suppleness quite like what we observed in flocking and evolution.
Since the suppleness of mental dynamics is crucially involved in the very intelligence of mental capacities, any adequate account of the mind must include an account of its suppleness. A good account of the suppleness of a mental capacity must be precise, accurate (no false positives), complete (no false negatives), principled, and feasible. The virtues of precision, accuracy, and completeness are obvious enough. A principled account would indicate what unifies the various instances of the supple capacity. And feasibility is important so that we can test empirically whether the account is accurate and complete.
Although quite familiar, the suppleness of mental dynamics is difficult to describe and explain. The familiar formulations of the principles of mind employ "ceteris paribus" clauses, as in the two illustrations above (or they use equally vague clauses like "as appropriate"). But such vaguely formulated principles give no indication of when ceteris is not paribus or when deviation from the norm is appropriate. Since these vague principles obscure both which contexts trigger exceptions and what form the exceptions take, they are ineliminably imprecise, and thus they cannot be accurate, complete, principled, or feasible.
An alternative strategy for accounting for the suppleness of mental processes is, in effect, to predigest the circumstances that give rise to exceptions and then specify (either explicitly or through heuristics) how to cope with them in a manner that is precise enough to be expressed as an algorithm. In this spirit, so-called "expert systems" precisely encode the details of principles and their exceptions in a knowledge base generated through consultation with the relevant experts. This strategy yields models of mental capacities that are explicit and precise enough to be implemented as a computer model, so the strategy has the virtue of feasibility; the dynamic behavior of the model can be directly observed and tested for plausibility. The problem is that, although these expert systems sometimes work well in precisely circumscribed domains, they have systematically failed to produce the kind of supple behavior that is characteristic of intelligent response to an open-ended variety of circumstances. Their behavior is brittle; it lacks the context-sensitivity that is distinctive of intelligence. And this problem is not merely a limitation of present implementations; attempts to improve matters by amplifying the knowledge base only generate combinatorial explosion. The nature and central role of suppleness in our mental capacities helps explain why the so-called "frame problem" of artificial intelligence is so important and so difficult to solve. (See, e.g., Dreyfus, 1979; Hofstadter, 1985; Holland, 1986; Langton, 1989a; Horgan and Tienson, 1989, 1990; Chalmers, French, and Hofstadter, 1992.) Although precise and feasible and perhaps principled, the expert-systems accounts of supple mental dynamics have always proved to be inaccurate and incomplete.
A third strategy for accounting for supple mental dynamics is to devise emergent models analogous to the emergent artificial life models of flocking and evolution. After all, one of the hallmarks of emergent artificial life models is their strikingly good accounts of the supple dynamics found throughout living systems. An emergent model of the mind would construe supple mental dynamics as the emergent macro-level effect of an explicit local dynamic in a population of micro-level entities. The members of the micro-level population would in some way compete for influence in a context-dependent manner, and thus would create some sort of adaptive macro-level dynamic. If all went well, this macro-level dynamic would correspond well with the familiar supple dynamics of mental life.
These remarks give no detailed account of the sort of emergent model that I have in mind, of course. Let there be no false advertising! My remarks at best just begin to suggest what an emergent model of supple mental dynamics would be like. Emergent models have some similarity with some existing models, such as those of Hofstadter and his students (Hofstadter, 1985; Mitchell, 1993; French, 1995), classifier systems (Holland, 1986), and connectionist (neural network, parallel distributed processing) models (Rumelhart and McClelland, 1986; Anderson and Rosenfeld, 1988). Delineating the relevant similarities and differences must be left for another occasion. Still, briefly contrasting emergent models with the widely-known connectionist models can highlight what I consider to be the important features of emergent models.
Emergent models of mental phenomena and connectionist models have some striking similarities. First, both tend to produce fluid macro-level dynamics as the implicit emergent effect of micro-level architecture. In addition, both employ the architecture of a parallel population of autonomous agents following simple local rules. For one thing, the agents in an emergent model bear some analogy to the units in a connectionist net. Furthermore, the agents in many artificial life models are themselves controlled by internal connectionist nets (e.g., Todd and Miller, 1991; Ackley and Littman, 1992; Belew, McInerney, and Schraudolph, 1992; Cliff, Harvey, and Husbands, 1993; Parisi, Nolfi, and Cecconi, 1992; Werner and Dyer, 1992). In addition, for decades connectionism has explored recurrent architectures and unsupervised adaptive learning algorithms, both of which are echoed in a general manner in much artificial life modeling.
But there are important differences between typical artificial life models and many of the connectionist models that have attracted the most attention, such as feed-forward networks which learn by the back-propagation algorithm. First, the micro-level architecture of artificial life models is much more general, not necessarily involving multiple layers of nodes with weighted connections adjusted by learning algorithms. Second, emergent models employ forms of learning and adaptation that are more general than supervised learning algorithms like backpropagation. This allows artificial life models to side-step certain common criticisms of connectionism, such as the unnaturalness of the distinction between training and application phases and the unnatural appeal to an omniscient teacher. Third, typical connectionist models passively receive prepackaged sensory information produced by a human designer. In addition, they typically produce output representations that have meaning only when properly interpreted by the human designer. The sort of emergent models characteristic of artificial life, by contrast, remove the human from the sensorimotor loop. A micro-level agent's sensory input comes directly from the environment in which the agent lives, the agent's output causes actions in that same environment, and those actions have an intrinsic meaning for the agent (e.g., its bearing on the agent's survival) in the context of its life. Through their actions, the agents play an active role in controlling their own sensory input and reconstructing their own environment (Bedau, 1994, in press). Finally, the concern in the bulk of existing connectionist modeling is with equilibrium behavior that settles onto stable attractors. By contrast, partly because the micro-level entities are typically always reconstructing the environment to which they are adapting, the behavior of the emergent models I have in mind would be characterized by a continual, open-ended evolutionary dynamic that never settles onto an attractor in any interesting sense.
Neuroscientists sometimes claim that macro-level mental phenomena cannot be understood without seeing them as emerging from micro-level activity. Churchland and Sejnowski (1992), for example, argue that the brain's complexity forces us to study macro-level mental phenomena by means of manipulating micro-level brain activity. But on this picture manipulating the mind's underlying micro-level activity is merely a temporary practical expedient, a means for coming to grasp the mind's macro-level dynamics. Once the micro-level tool has illuminated the macro-level patterns, it has outlived its usefulness and can be abandoned. No permanent, intrinsic connection binds our understanding of micro and macro. By contrast, my point of view is that the mind's macro-level dynamics can be adequately described or explained only by making essential reference to the micro-level activity from which it emerges. The microdynamical model in a sense is a complete and compact description and explanation of the macro-level dynamics. And since these global patterns are supple, they inevitably have exceptions, and those exceptions (some of them) prove the rule in the sense that they reveal the global pattern's central and underlying nature. Thus, to get a precise and detailed description of the macro-level patterns, there is no alternative to simulating the microdynamical model. In this way, the microdynamical model is ineliminably bound to our understanding of the emergent macro-level dynamics.
Since the ultimate plausibility of the emergent approach to mental dynamics depends, as one might say, on "putting your model where your mouth is", one might in fairness demand proponents of this approach to start building models. But producing models is not easy; a host of difficult issues must be faced. First, what is the micro-level population from which mental phenomena emerge, and what explicit micro-level dynamics govern it? Second, assuming we have settled on a micro-level population and dynamics, how can we identify the macro-level dynamics of interest? Emergent models generate copious quantities of micro-level information, and this saddles us with a formidable data-reduction problem. Where should we make a thin slice in these data? Finally, assuming we have a satisfactory solution to the data reduction problem, how can we recognize and interpret any patterns that might appear? We must distinguish real patterns from mere artifacts. The patterns will not come pre-labelled; how are we to recognize any supple mental dynamics that might emerge?
The foregoing difficulties are worth confronting, though, for an emergent account of supple mental dynamics would have important virtues. The account would be precise, just as precise as the emergent model itself. Furthermore, the model would produce an increasingly complete description of a precise set of mental dynamics as it was simulated more and more. Though the model might never completely fill out all the details of the supple pattern, additional simulation could generate as much detail as desired. In addition, the model's account of the supple dynamics would be principled, since one and the same model would generate the supple pattern along with all the exceptions that prove the rule. As for accuracy, this could be discerned only through extensive empirical study (simulation). Still, the evident accuracy of the emergent models of supple flocking and supple evolution can give us some confidence that emergent models of supple mental dynamics are a promising avenue to explore.
5. Emergent Functionalism about the Mind
The mind's supple dynamics has implications for some current philosophical controversies. I will focus in particular on some implications for contemporary functionalism in the philosophy of mind (Putnam, 1975; Fodor, 1981). Much contemporary debate over functionalism focuses on a certain collection of problems, such as whether functionalism can account for the consciousness of mental beings and the intentionality of their mental states, but I will not engage those debates here; see Block (1980) and Lycan (1990) for a representative range of reading about functionalism. My concern is only with the consequences for functionalism of the suppleness of mental dynamics.
Contemporary philosophical functionalism must be sharply distinguished from the traditional functionalism in psychology advocated by James and Dewey in the 1890s, which served as a precursor of the behaviorism of Watson in the 1920s. Contemporary functionalism grew up in the 1970s as a result of the problems with its two predecessors: behaviorism and the mind-brain identity theory. The lesson from the critics of behaviorism was that mental systems have internal mental states. In particular, a mental system's action in a given environment is affected by a complex web of interactions among its internal (mental) states and its sensory stimuli. The lesson from the critics of the mind-brain identity theory was that a mental system's internal states can be instantiated in an open-ended number of different kinds of physical states. No (first-order) physical similarity unifies all the possible physical instances of a given mental state. To meet these objections, early functionalists proposed that we view the mind as analogous with software. An indefinite number of different states in different kinds of hardware could realize a given software state; by analogy, an indefinite range of physical devices or processes could embody a mind. Functionalism's slogan, then, could be "mind as software". Just as computation theory studies classes of automata independently of their hardware implementation, functionalism's guiding idea is that we can study the dynamics of mental states in abstraction from their implementation in the brain.
Functionalists view mental beings as a certain kind of input-output device and hold that having a mind is no more and no less than having a set of internal states that causally interact (or function) with respect to each other and with respect to environmental inputs and behavioral outputs in a certain characteristic way; a mental system is any system whatsoever that is governed by a set of internal states that exhibit a dynamical behavior that is functionally isomorphic with the dynamics characteristic of human mental states. If the mental system is a physical entity, its characteristic internal states will always be realized in some kind of physical entity, but it makes no difference what kind of physical entity instantiates the functionally-defined dynamical patterns. Human mental states happen to be embodied in patterns of neuronal activity, but if exactly the same dynamical patterns were found in a system composed of quite different materials — such as silicon circuitry — then, according to functionalism, that system would literally have a mind. So, functionalism's central tenet is that mind is defined by form rather than matter; to have a mind is to embody a distinctive dynamical pattern, not to be composed out of a distinctive sort of substance.
As we saw in the preceding section, the dynamical patterns of the mind are characteristically supple. If functionalism is correct, certain supple patterns define what it is to have a mind. But how can functionalism specify which mental patterns define the mind? One approach to answering this question would simply employ our common-sense understanding of characteristic mental dynamics (e.g., Lewis, 1972). On this common-sense approach to functionalism, the mind is defined by a set of patterns that themselves are characterized with "ceteris paribus" clauses (or their equivalent). The inherent imprecision of these "ceteris paribus" clauses, then, carries over to common-sense functionalism itself. In other words, common-sense functionalism asserts something like this: "The pattern of states definitive of minds has roughly such-and-such form, but there are an indefinite number of exceptions to this pattern and we must be content to remain ignorant about when and why exceptions happen and what form they can take." Even if true, this is a disappointingly imprecise assertion about what minds are.
Cognitive science and artificial intelligence do provide precise accounts of the mind. By delineating the exceptional cases precisely enough (perhaps through the use of heuristics), dynamical patterns somewhat like those in the mind can be directly represented in operating software. This strategy interprets the functionalist slogan "mind as software" literally, and attempts to express precisely the supple dynamics of mind directly in an algorithm. If such an algorithm exists, then, according to the central thesis of functionalism, any implementation of the algorithm would literally have a mind. But, as we noted above, there has been a persistent pattern of failed attempts in artificial intelligence to capture the supple adaptability characteristic of mental dynamics. This history suggests the lesson that we cannot directly represent the supple patterns characteristic of the mind as algorithms — mind is not software — and thus artificial-intelligence functionalism is unsound.
We should not conclude, however, that functionalism has no precise and true formulation. I suggested in the previous section that an emergent model can account for supple mental dynamics. If so, then this emergent model can provide an indirect, emergent description of those dynamical patterns that functionalism uses to define the mind. This emergent functionalism would then inherit the virtues possessed by the emergent model on which it was based. Unlike common-sense functionalism, emergent functionalism can be quite precise about what a mind is. For one thing, emergent functionalism is relative to a specific microdynamical model of supple mental dynamics, and any given microdynamical model is a completely precise object. But more to the point, the microdynamical model is an implicit but perfectly precise encapsulation of an exact macro-level dynamics in all its suppleness. Since the model is emergent, the full details of the description would be revealed only through observing simulations of the model, but repeated simulation could make the description as complete as desired. Emergent functionalism could adopt the slogan "mind as supple emergent dynamics".
One might worry that emergent functionalism amounts to nothing more than the trivial functionalist claim that mental phenomena must have some material embodiment. Or one might worry that emergent functionalism contravenes the functionalist's guiding principle, that the mind's definitive features abstract away from implementational details as much as possible. Both these worries are unfounded. On the one hand, emergent functionalism is not architecture independent; the central tenet of the view is that the mind's supple adaptive dynamics essentially requires a certain kind of emergent architecture. Hence, the emergent functionalist view is no mere reiteration of the mind's material embodiment. On the other hand, emergent functionalism admits the multiple realizability of the population of micro-level processes that underlie mental dynamics, and thereby admits the multiple realizability of the macro-level mental dynamics that emerge from them. So emergent functionalism views the mind maximally abstractly, as the functionalist characteristically does. But emergent functionalism is careful not to abstract away the crucial emergent architecture that apparently accounts for the mind's supple dynamics.
Acknowledgements
For valuable discussion, many thanks to Colin Allen, Hugo Bedau, Kate Elgin, Bob French, Mark Hinchliff, Cliff Hooker, Terry Horgan, Melanie Mitchell, Norman Packard, David Reeve, Dan Reisberg, Carol Voeller, two anonymous reviewers, and audiences at Dartmouth College, Oregon State University, Portland State University, Pomona College, University of British Columbia, University of California at Los Angeles, University of California at San Diego, University of Illinois at Urbana-Champaign, University of Newcastle, University of New Hampshire, and Tufts University. Thanks also to my fellow panelists at the Workshop on Artificial Life, "A Bridge Towards a New Artificial Intelligence", at the University of the Basque Country, where some of these ideas were discussed. For grant support that helped make this work possible, thanks to the Oregon Center for the Humanities and the Oregon Humanities Council.
References
Ackley, D., and Littman, M. 1992. Interactions between evolution and learning. In C. Langton, C. Taylor, D. Farmer, and S. Rasmussen (Eds.), Artificial life II. Reading, MA: Addison-Wesley.
Anderson, J. A., and Rosenfeld, E. (Eds.). 1988. Neurocomputing: Foundations of research. Cambridge, MA: Bradford Books/MIT Press.
Bedau, M. A. 1992. Philosophical aspects of artificial life. In F. Varela and P. Bourgine (Eds.), Towards a practice of autonomous systems. Cambridge, MA: Bradford Books/MIT Press.
Bedau, M. A. 1994. The evolution of sensorimotor functionality. In P. Gaussier and J. -D. Nicoud (Eds.), From perception to action. Los Alamitos, CA: IEEE Computer Society Press.
Bedau, M. A. 1995. Three illustrations of artificial life's working hypothesis. In W. Banzhaf and F. Eeckman (Eds.), Evolution and biocomputation: Computational models of evolution. Berlin: Springer.
Bedau, M. A. 1996a. Weak emergence. In J. Tomberlin (Ed.), Philosophical perspectives: Metaphysics, Vols. 10 and 11. Atascadero, CA: Ridgeview.
Bedau, M. A. 1996b. The nature of life. In M. Boden (Ed.), The Philosophy of Artificial Life. New York: Oxford University Press.
Bedau, M. A. In press. The extent to which organisms construct their environments. Adaptive Behavior.
Bedau, M. A. and Bahm, A. 1994. Bifurcation structure in diversity dynamics. In R. Brooks and P. Maes (Eds.), Artificial life IV. Cambridge, MA: Bradford Books/MIT Press.
Bedau, M. A., Giger, M., and Zwick, M. 1995. Adaptive diversity dynamics in static resource models. Advances in Systems Science and Applications, 1, 1-6.
Bedau, M. A., and Packard, N. 1992. Measurement of evolutionary activity, teleology, and life. In C. Langton, C. Taylor, D. Farmer, and S. Rasmussen (Eds.), Artificial life II. Reading, MA: Addison-Wesley.
Bedau, M. A., Ronneburg, F., and Zwick, M. 1992. Dynamics of diversity in a simple model of evolution. In R. Männer and B. Manderik (Eds.), Parallel problem solving from nature 2. Amsterdam: Elsevier.
Bedau, M. A. and Seymour, R. 1994. Adaptation of mutation rates in a simple model of evolution. In R. Stonier and X. H. Yu (Eds.), Complex systems: Mechanisms of adaptation. Amsterdam: IOS Press.
Beer, R. D. 1990. Intelligence as adaptive behavior: An experiment in computational neuroethology. Boston: Academic Press.
Belew, R. K., McInerney, J., and Schraudolph, N. N. 1992. Evolving networks: Using the genetic algorithm with connectionist learning. In C. Langton, C. Taylor, D. Farmer, and S. Rasmussen (Eds.), Artificial life II. Reading, MA: Addison-Wesley.
Block, N. (Ed.). 1980. Readings in philosophy of psychology, Vol. 1. Cambridge, MA: Harvard University Press.
Brooks, R. and Maes, P. (Eds.). 1994. Artificial life IV. Cambridge, MA: Bradford Books/MIT Press.
Cairns-Smith, A. G. 1985. Seven clues to the origin of life. Cambridge, England: Cambridge University Press.
Chalmers, D. J., French, R. M., and Hofstadter, D. R. 1992. High-level perception, representation, and analogy. Journal of Experimental and Theoretical Artificial Intelligence, 4, 185-211.
Churchland, P. M. 1981. Eliminative materialism and the propositional attitudes. Journal of Philosophy, 78, 67-90.
Churchland, P. S. and Sejnowski, T. J. 1992. The computational brain. Cambridge, MA: Bradford Books/MIT Press.
Cliff, D., Harvey, I., and Husbands, P. 1993. Explorations in evolutionary robotics. Adaptive Behavior, 2, 73-110.
Dreyfus, H. 1979. What computers can't do (2nd ed.). New York: Harper and Row.
Farmer, J. D., Lapedes, A., Packard, N., and Wendroff, B. (Eds.). 1986. Evolution, games, and learning: Models for adaptation in machines and nature. Amsterdam: North Holland.
Fodor, J. A. 1981. Special sciences. In his Representations. Cambridge, MA: Bradford Books/MIT Press.
French, R. M. 1995. The subtlety of sameness: A theory and computer model of analogy-making. Cambridge, MA: Bradford Books/MIT Press.
Hofstadter, D. R. 1985. Waking up from the Boolean dream, or, subcognition as computation. In his Metamagical themas: Questing for the essence of mind and pattern. New York: Basic Books.
Holland, J. H. 1986. Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In R. S. Michalski, J. G. Carbonell, and T. M. Mitchell (Eds.), Machine learning II. Los Altos, CA: Morgan Kaufmann.
Horgan, T. and Tienson, J. 1989. Representation without rules. Philosophical Topics, 17, 147-174.
Horgan, T. and Tienson, J. 1990. Soft laws. Midwest Studies in Philosophy, 15, 256-279.
Langton, C. 1989a. Artificial life. In C. Langton (Ed.), Artificial life. Reading, MA: Addison-Wesley.
Langton, C. (Ed.). 1989b. Artificial life. Reading, MA: Addison-Wesley.
Langton, C., Taylor, C. E., Farmer, J. D., and Rasmussen, S. (Eds.). 1992. Artificial life II. Reading, MA: Addison-Wesley.
Lewis, D. 1972. Psychophysical and theoretical identifications. Australasian Journal of Philosophy, 50, 249-258.
Lycan, W. G. (Ed.) 1990. Mind and cognition: A reader. Cambridge, MA: Basil Blackwell.
Maynard Smith, J. 1975. The theory of evolution (3rd ed.). New York: Penguin.
Mitchell, M. 1993. Analogy-making as perception. Cambridge, MA: Bradford Books/MIT Press.
Parisi, D., Nolfi, S., and Cecconi, F. 1992. Learning, behavior, and evolution. In F. Varela and P. Bourgine (Eds.), Towards a practice of autonomous systems. Cambridge, MA: Bradford Books/MIT Press.
Putnam, H. 1975. The nature of mental states. In his Mind, language, and reality. Cambridge, England: Cambridge University Press.
Reynolds, C. W. 1987. Flocks, herds, and schools: A distributed behavioral model. Computer Graphics, 21, 25-34.
Reynolds, C. W. 1992. Boids demos. In C. Langton (Ed.), Artificial life II video proceedings. Reading, MA: Addison-Wesley.
Rumelhart, D. E., and McClelland, J. L. 1986. Parallel distributed processing: Explorations in the microstructure of cognition, 2 Vols. Cambridge, MA: Bradford Books/MIT Press.
Steels, L. 1994. The artificial life roots of artificial intelligence. Artificial Life, 1, 75-110.
Todd, P. M., and Miller, G. F. 1991. Exploring adaptive agency II: Simulating the evolution of associative learning. In J. -A. Meyer and S. W. Wilson (Eds.), From animals to animats: Proceedings of the first international conference on the simulation of adaptive behavior. Cambridge, MA: Bradford Books/MIT Press.
Varela, F., and Bourgine, P. (Eds.). 1992. Towards a practice of autonomous systems. Cambridge, MA: Bradford Books/MIT Press.
Varela, F. J., Thompson, E., and Rosch, E. 1991. The embodied mind. Cambridge, MA: Bradford Books/MIT Press.
Werner, G. M., and Dyer, M. G. 1992. Evolution of communication in artificial organisms. In C. Langton, C. Taylor, D. Farmer, and S. Rasmussen (Eds.), Artificial life II. Reading, MA: Addison-Wesley.
Figure captions
Figure 1 caption:
Evolutionary dynamics in mutation rate distributions from four simulations of Packard's model of sensorimotor agents. Time is on the X-axis (100,000 timesteps) and mutation rate is on the Y-axis. The gray-scale at a given point (t,m) in this distribution shows the frequency of the mutation rate m in the population at time t. See text.
Figure 2 caption:
Time series data from a simulation of Packard's model of sensorimotor agents, showing how the population's resource-gathering efficiency increases when the mutation rates evolve downward far enough to change the qualitative character of the population's genetic diversity. From top to bottom, the data are: (i) the mutation rate distribution; (ii) a blow-up of the very small mutation rates; (iii) the mean mutation rate (note the log scale); (iv) the uningested resource in the environment; (v) three aspects of the diversity of the sensorimotor strategies in the population; (vi) the population level. See text.
Figure 3 caption:
Time series data from a simulation of Packard's model of sensorimotor agents. From top to bottom, the data are: (i) a blow-up of the very small mutation rates in the mutation rate distribution; (ii) the mean mutation rate (note the log scale); (iii) the level of uningested resources in the world; (iv) the population level. At timestep 333,333 all sensorimotor genes of all living organisms were randomly scrambled. See text.