The Essence of Mind and How to Program It
Excerpts from an unpublished manuscript by Ben Goertzel with Ted Goertzel. All rights reserved.

This essay reviews some scientific and metaphysical ideas that help us to understand the nature of the mind and how it might be simulated on a computer. It is a personal essay, written in the voice of Ben Goertzel, describing the ideas that shaped his thought on artificial intelligence. 

19th Century Philosophy, 21st Century Technology

If you take a naïve point of view on Artificial Intelligence, the relevance of pre-20th-century philosophy might seem confusing. After all, we're talking about hi-tech here, and these philosophers didn't even have electric lights or calculators. But mind is mind, and mind was still mind 200 years ago. The great philosophers were people who tried to tackle the most fundamental and difficult questions in the world, armed only with their own mental powers. Really, they had a lot in common with the best sci-fi writers: a willingness to speculate based on pure reason. Their grandiose ambitions sometimes led them off track. But at least they were trying to understand the mind and the world as a whole, instead of just one tiny little bit. To build a thinking machine, one has to understand both the whole and the parts, and build a bridge between the two.

I read and studied a lot of philosophy, Western and Eastern, modern and ancient and in between, and so on and so forth. But the two thinkers who influenced me most were the German philosopher Friedrich Nietzsche and the American philosopher Charles Sanders Peirce (pronounced "purse"). Ordinarily these two aren't mentioned in the same breath -- by those few who would ever mention either of them, I mean ... on the surface they don't have much in common. But once you read them very deeply and get below the surface, you see that they were basically getting at the same thing. And throughout the '80s and early '90s, as I read them more and more, it became clearer and clearer to me that this thing they were getting at was the same as the "basic picture of mind" that I had in the back of my head but couldn't articulate.

I'll tell you a bit about these philosophers, and then I'll move on to explain my own philosophy of mind and how it relates to what these guys said. All this may seem very abstract and disconnected from anything practical, but really, there's no way to tackle an engineering problem as big as the mind without having a good picture of the whole thing as well as a detailed understanding of the parts. What philosophy of mind is all about is getting a good picture of the whole thing.

Modern philosophy is sort of split into two pieces: there's analytical philosophy, which is British in origin and is focused on formal logic and on the clarification of word meanings. Everything is analyzed to death until it's completely clear -- at least that's the idea. Then there's continental philosophy, which is French in orientation and is completely different: rational analysis is not the point; rather, one wants to explore the interconnections between things in an almost Zen sort of way. Both of these kinds of philosophy are sort of interesting -- I guess I prefer the French kind, which I like to read in about the same way I read really weird literature like Finnegans Wake or William Burroughs, for entertainment and general mental stimulation rather than to get a concrete understanding of anything. But basically, I prefer philosophy the way it was done one or two hundred years ago, when there was more of a mixture of literary exploration of interconnections and the analytical quest to understand. It was a kind of egomania; these 18th and 19th century thinkers thought they could write a huge philosophical treatise and explain the whole universe. But this is a kind of egomania that I share, and so I've learned a lot from these guys, more so than from more recent philosophy.

The One Law of Mind

Peirce was an eccentric scientist who worked his whole adult life for the U.S. Coast and Geodetic Survey, and spent all his spare time copiously scribbling notebooks full of philosophical and mathematical speculations. He never wrote a book, but just left us with a scattered corpus of published papers and rough-draft essays. He was friends with William James, who modified some of his ideas and popularized them. James took Peirce's idea of "pragmatism" -- the idea that the reality of an object is the set of its observable properties -- and changed it around so much that Peirce renamed his own theory "pragmaticism."

Peirce did a lot of work in mathematical logic, including the foundations of quantifiers and relational logic and lots of other useful stuff. He also laid the foundation for semiotics, the study of symbolism. But the aspect of his work that interested me the most was his theory of the mind, which came out of his theory of the "fundamental categories of the world" (philosophers from that century loved to create fundamental categories for the world).

At the time Peirce was writing, toward the end of the 1800s, determinism was very popular -- most scientists believed the universe was one big machine. But Peirce didn't buy it -- he saw the weaknesses of simplistic determinism way before nearly anyone else did. Peirce's metaphysical thinking helped him to anticipate scientific findings decades before the scientists discovered them. He foresaw important aspects of chaos theory, quantum theory, cosmology, artificial intelligence and modern brain science, just by following the intuition suggested by his metaphysics.

In 1887, he wrote that:

It is sufficient to go out into the air and open one's eyes to see that the world is not governed altogether by mechanism... the endless variety in the world has not been created by law... when we gaze upon the multifariousness of nature we are looking straight into the face of a living spontaneity.

In 1891, nine years before the German physicist Max Planck first introduced the idea of the quantum, Peirce wrote:

There is serious room for doubt whether the fundamental laws of mechanics hold good for single atoms, and it seems quite likely that they are capable of motion in more than three dimensions.

And in 1898, he wrote:

If I make atoms swerve -- as I do -- I make them swerve but very very little, because I conceive they are not absolutely dead.

The random swervings of sub-microscopic particles -- way back in 1898! Today, quantum physics has proved Peirce right with rigorous and reproducible experiments. All atoms do swerve a little; and this swerving is, at least in the view of some distinguished scientists, intimately connected with consciousness. In more modern language, what quantum physics tells us is that an event does not become definite until someone observes it. An unobserved quantum system remains in an uncertain state, a superposition of many different possibilities. Observation causes "collapse" into a definite condition, which is chosen at random from among the possibilities provided. This is how some theorists have been able to associate consciousness with quantum measurement.

Just as Peirce anticipated quantum theory, he also anticipated parts of modern brain science. He observed that

the disturbance of feeling, or sense of reaction, accompanies the transmission of disturbance between nerve-cells, or from a nerve-cell to a muscle cell, or the external stimulation of a nerve-cell. General conceptions arise upon the formation of habits in the nerve-matter... The brain shows no central cell. The unity of consciousness is therefore not of physiological origin. It can only be metaphysical. So far as feelings have any continuity, it is the metaphysical nature of feeling to have a unity.

Today we know quite clearly that "the brain shows no central cell"; there is no one locus of conscious experience. Twenty years ago, however, most neuroscientists would not have agreed with this statement! Here, again, Peirce's metaphysics led him to a scientific hypothesis far in advance of the science of his time.

But how did Peirce arrive at these conclusions way back in the 1890s, before any of these scientific findings were available? Before anyone else was writing about these things? Before chaos theory made spontaneity and unpredictability household words? By assuming that the universe worked the same way his brain did! He believed that:

"Every attempt to understand anything - every research - supposes, or at least hopes, that the very objects of study themselves are subject to a logic more or less identical with that which we employ."

This is far from obvious. It even seems a bit self-centered. Why should the physical universe be governed by the same principles that govern human thinking? Maybe because, as John Lennon said, "nothing is real" -- the world's just a product of our minds? Or maybe because our brains are part of the physical universe, governed by the same fundamental laws. Others simply say that God made things that way. Or maybe both! (I'll leave that for you to puzzle on.)

Anyway, whatever his reasons, Peirce strongly believed - and I agree - that there is an underlying order and structure to the universe. Without this belief, it would be difficult to be a scientist. Why spend your life trying to discover order and structure in the universe if you believe there isn't any?

Peirce's approach to philosophy was to introspect - to see how his own mind worked, and to try to formulate his observations as precisely as possible and abstract them into general principles. He used this method to think about the physical world, and also to think about the thought process itself. Once I realized this was what he was doing, I understood his work pretty easily -- it didn't seem nearly as weird and hard to penetrate as it had at first. I quickly saw that what he saw when he looked into his own mind was a lot like what I saw when I looked into mine. He used a different language, because he was writing at a different time, and had different goals. But he was getting at the essence of the mind in an analytical way, which was exactly what I wanted.

One of the central points of Peirce's philosophy - one of the things he discovered by looking into his mind - was the idea that numbers are of primary importance in the world. Peirce believed that on the most fundamental level, the universe was organized numerically. An example of this is the fact that our universe is composed of a finite number of elements, each distinguished by the number of protons in its nucleus. The Periodic Table of the Elements is a key to the structure of the universe because the universe is organized according to integer numbers. Peirce believed that the small integers - particularly one, two and three -- had a deep basic meaning -- they weren't just arbitrary human creations, they were fundamental organizing principles, helping structure the universe.

Here's how he put it:

Three conceptions are perpetually turning up at every point in every theory of logic, and in the most rounded systems they occur in connection with one another. They are conceptions so very broad and consequently indefinite that they are hard to seize and may be easily overlooked. I call them the conceptions of First, Second, Third.

Of course, Peirce wasn't the first guy to obsess over numbers. Pythagoras way back in 500 BC said "all things are numbers," and he probably cribbed the idea from the ancient Chinese, maybe from the I Ching. Many of the great thinkers have had a "favorite number." Petrus Ramus -- a sixteenth century French logician known for his Master's thesis on the topic "Whatever Aristotle Has Said Is a Fabrication" -- had a special liking for the number two. Sir Thomas Browne -- a seventeenth century English writer who sought to reconcile science and religion -- liked five. Pythagoras and Carl Jung both preferred four. Maybe these facts will help you on Jeopardy someday! As to Peirce, he was a three fanatic. He said: "I am a determined foe of no innocent number; I respect and esteem them all in their several ways; but I am forced to confess a leaning to the number Three in philosophy." When someone says they have a favorite number, they're recognizing that numbers are more than tools for counting. Each number has its own flavor, its own identity. Each one can be seen as an archetype, as a model or underlying principle at the root of other things in the universe.

Zero is an interesting number in that people seemed to be afraid of it for a long time. The early Egyptians, Chinese, Cretans, Greeks, Hebrews and Romans all had complicated number systems but lacked the number zero, which made their mathematics pretty annoying. Try doing long division in Roman numerals. It's awkward precisely because there isn't any zero. Zero's a weird one because metaphysically it implies nothingness - and how can nothing be something? There's a certain gutsiness in making a special mark to indicate nothing at all. Every time we write down "zero" we're creating form out of nothingness; we're playing God in a way!

For Peirce, Zero corresponded to the original state of the universe, or any other system for that matter. He said that the universe originated in "the utter vagueness of completely undetermined and dimensionless potentiality." He thought that "the initial condition, before the universe existed, was not a state of pure abstract being. On the contrary it was a state of just nothing at all, not even a state of emptiness, for even emptiness is something."

This was just metaphysical weirdness when he said it, but now it sounds just like advanced cosmology! A recent physics textbook says: "according to Einstein's theory, the size of the universe would have been zero at some finite time in the past, nearly 15 billion years ago. All matter would have been compressed to a point, and the density and temperature of the universe would have been infinite."

So much for zero. One is where all the fun starts. According to Peirce, "First is the conception of being or existing independent of anything else." Firstness is "feelings, comprising all that is immediately present, such as pain, blue, cheerfulness, the feeling that arises when we contemplate a consistent theory, etc. A feeling is a state of mind having its own living quality, independent of any other state of mind ... an element of consciousness which might conceivably override everything."

We can sense Firstness when we look at a great painting. When I look at Van Gogh's The Starry Night I feel the mysteriousness and gloominess of the night sky. To experience a painting in this way, one has to feel it, not analyze it. Sadly for Van Gogh, of course, not many of the people of his time could do this with his style of painting. Actually, we can sense Firstness all the time -- it's always right there in front of us -- the raw feel of the world as it is. Pure experience, without any reaction or analysis, just feeling and being.

After one comes two. Secondness is when two Firstnesses meet. Peirce said that "Second is the conception of being relative to, the conception of reaction with, something else." Secondness is "sensations of reaction, as when a person blindfold suddenly runs against a post, when we make a muscular effort, or when any feeling gives way to a new feeling."

This is the principle of the Machine. The parts of a machine function only in reaction to each other, and they function predictably, "mechanically." With a manual typewriter -- anyone remember those? -- you push a letter and the key connected to it hits the paper. Every time you push the same key, you get the same result.

Secondness is the movement from one state to another. Chemical reactions are a perfect example. You pour two chemicals (two Firstnesses) into a test tube and get a third. Over the centuries, chemists collected a huge body of knowledge about chemical reactions before they began to elaborate useful theories of how chemicals interact, moving on to the level of Thirdness -- thirdness being all about relationship. Of course, the alchemists had all kinds of general theories before that; the mind generates abstractions even in the absence of scientific evidence for them.

Computers can be programmed to function on the level of Secondness, but it's not their strong point. A word processing program can type letters consecutively like a typewriter -- it can react to what you're doing. But the whole point of a word processor is to go beyond that - to be able to add or delete or move a few words and have the text rearrange itself - or, more powerfully, to type columns of numbers into a spreadsheet and have them recalculate themselves. This giant leap forward - from typing to word processing and spreadsheets - took place only in the 1970s. In Peircean terms, it's the leap from Secondness to Thirdness - from a system that responds mechanically to one that follows general rules.
(Historical note: you can get the first IBM version of VisiCalc free from Dan Bricklin; it runs on modern PCs.)
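The spreadsheet's leap from Secondness to Thirdness can be sketched in a few lines of code. This is a toy illustration of my own, not the code of any actual spreadsheet: cells either hold literal numbers (mere reaction) or hold general rules (formulas), and a rule cell recomputes itself whenever it is read, so edits propagate automatically.

```python
# Toy spreadsheet: cells holding numbers vs. cells holding general rules.
# Cell names and the Sheet API are invented for illustration.

class Sheet:
    def __init__(self):
        self.values = {}    # cell name -> literal number (Secondness)
        self.formulas = {}  # cell name -> rule, a function of the sheet (Thirdness)

    def set_value(self, name, number):
        self.values[name] = number

    def set_formula(self, name, func):
        self.formulas[name] = func

    def get(self, name):
        # A formula cell is recomputed on demand, so changed inputs propagate.
        if name in self.formulas:
            return self.formulas[name](self)
        return self.values[name]

sheet = Sheet()
sheet.set_value("A1", 10)
sheet.set_value("A2", 32)
sheet.set_formula("A3", lambda s: s.get("A1") + s.get("A2"))

print(sheet.get("A3"))      # 42
sheet.set_value("A1", 100)  # change an input...
print(sheet.get("A3"))      # ...and the total follows the general rule: 132
```

A typewriter would have printed "42" once and been done; the rule cell keeps re-deriving its value from whatever the inputs currently are.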
Thirdness is relationship, habit, pattern. It's not tangible or mechanical like Secondness; it's more abstract. Peirce said "Third is the conception of mediation, whereby a first and second are brought into relation." It's an inevitable product of the human mind.

Probably my favorite Peirce quote of all time has to do with Thirdness:

"When we think, we are conscious that a connection between feelings is determined by a general rule, we are aware of being governed by a habit...the one primary and fundamental law of mental action consists in a tendency to generalization. Feeling tends to spread; connections between feelings awaken feelings; neighboring feelings become assimilated; ideas are apt to reproduce themselves.
...
Logical analysis applied to mental phenomena shows that there is but one law of mind, namely, that ideas tend to spread continuously and to affect certain others which stand to them in a peculiar relation of affectability. In this spreading they lose intensity, and especially the power of affecting others, but gain generality and become welded with other ideas."

This comes up a lot in discussions of Webmind, the Artificial Intelligence program I am designing based on Peirce's philosophy of mind. I call it "Peirce's One Law of Mind." Feeling tends to spread; each feeling/idea spreads itself to the other feelings/ideas it relates to. This is basically what happens in neural nets: each neuron spreads electricity to the other neurons it relates to. But neurons are biology; Peirce was talking on the level of mind. He didn't know about neurons -- no one did, at that time. Thinking only about mind, he came up with an idea a lot like the ideas AI researchers are playing with now, but without the brain-modeling inspiration. Of course this isn't a coincidence: evolution had to keep tinkering and twiddling till it came up with a biological system that was mind-like enough to imitate some of the structure and dynamics of the mind. The brain fit the bill. The main reason the brain can give rise to intelligence is that -- unlike the liver or the spleen or the bone marrow, say -- it emulates Peirce's One Law of Mind.
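The One Law of Mind can be sketched as spreading activation over a network of ideas. The following toy code is my own illustration under invented node names, weights and decay factor; it is not Webmind's implementation. Activation leaks from each idea to its neighbors, losing intensity at each hop (the decay) while gaining generality by touching more and more ideas.

```python
# A minimal sketch of Peirce's One Law of Mind as spreading activation.
# Node names, weights and the decay factor are invented for illustration.

# Weighted links: idea -> [(related idea, strength of relation)]
links = {
    "dog":    [("animal", 0.8), ("bark", 0.6)],
    "animal": [("alive", 0.9)],
    "bark":   [("loud", 0.5)],
}

def spread(activation, decay=0.5):
    """One step of spreading: each active idea passes a weakened
    copy of its activation to the ideas it relates to."""
    new = dict(activation)
    for idea, level in activation.items():
        for neighbor, weight in links.get(idea, []):
            # "In this spreading they lose intensity": each hop is
            # attenuated by decay * weight < 1.
            new[neighbor] = new.get(neighbor, 0.0) + level * weight * decay
    return new

state = {"dog": 1.0}
for _ in range(3):
    state = spread(state)

# Activation has leaked from "dog" out to "animal", "bark", "alive" and
# "loud" -- weaker per hop, but touching ever more general ideas.
print(sorted(state.items()))
```

This is also, roughly, what happens in a neural net when a neuron passes weighted activation to its neighbors; here it is stated at the level of ideas, as Peirce did.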

Whenever we recognize a relation between two things, that's Thirdness. We may say that one thing is similar to another -- we may say that two things are of like kind -- we may say that one idea is a special case of another -- and so forth. Each of these relations is a Third -- there are always three things: the two things being related, and the relation itself.

In Webmind we have some basic Thirdnesses which we call links. We have inheritance links, representing that one node is a special case of another; similarity links, representing that one node is similar to another, and so forth.
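As a rough sketch, such nodes and typed links might look like the following. The class shapes and the `strength` field are my own assumptions for illustration, using only the link types the text names (inheritance, similarity); this is not Webmind's real data structure.

```python
# Sketch of nodes and typed links ("Thirdnesses") as described in the text.
# The classes and the strength field are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    links: list = field(default_factory=list)  # outgoing Thirdnesses

@dataclass
class Link:
    kind: str        # "inheritance" (special case of) or "similarity"
    source: Node
    target: Node
    strength: float  # how strongly the relation holds, in [0, 1]

def relate(kind, source, target, strength):
    link = Link(kind, source, target, strength)
    source.links.append(link)
    return link

cat = Node("cat")
animal = Node("animal")
dog = Node("dog")

relate("inheritance", cat, animal, 0.95)  # a cat is a special case of animal
relate("similarity", cat, dog, 0.6)       # cats and dogs are somewhat alike

for link in cat.links:
    print(f"cat --{link.kind}({link.strength})--> {link.target.name}")
```

Note that each relation really involves three things, just as Peirce said: the two nodes, plus the link object that mediates between them.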

Three was Peirce's favorite number and he tended to stop there in his numerico-philosophical investigations. He never talked about Fourthness. He had a detailed mathematical argument for this decision not to go beyond three, which was sound in some ways and unsound in others, but I won't go into that here. I don't agree with him that three is the biggest number that has archetypal value.

Carl Jung wrote a lot about four. He felt that four was the minimal number for representing a unified system: a collection of overlapping, synergetic relationships. If you think about it this way, fourthness is a pattern that emerges from a web of relationships that support and sustain each other so that the whole is greater than the sum of the parts. This happens in the brain and also in complex AI systems like Webmind, and in complex systems like ecosystems, stars, and so forth. Buckminster Fuller was another philosopher who liked four. He thought about fourthness as a tetrahedron -- a pyramid with four triangular sides, including the base. But let's not get too far off topic....

Anyway -- just as physical reality spans all the categories, so too does the mind. Each category lets you see a different aspect of the mind. First lets you see the mind as consciousness, as a feeling, experiencing being. Second lets you see the mind as a machine, as a behaviorist entity that reacts to the world around it. Third lets you see the mind as a network of relationships, patterns in the world and itself. Fourth lets you see the mind as a synergetic whole, a network of relationships that studies itself as a whole and possesses a kind of indecomposable living integrity.

In modern terminology Peirce's "law of mind" might be rephrased as follows: "The mind is an associative memory network, and its dynamic dictates that each idea stored in the memory is an active actor, continually acting on those other ideas with which the memory associates it." I think this almost gets you there. What it misses is the big picture, the emergence and synergy that's essential to the mind -- the fourthness. We'll see that the Webmind design takes Peirce's "mind as thirdness" insight, makes it mathematical, and then augments it with a focus on fourthness and synergy. In Webmind, ideas begin as First, with Nodes of their own. They interact with each other, which is Second, producing patterns of relationships, Third. In time, stable, self-sustaining ideas develop, which are Fourth. In Peirce's time, it was metaphysics; today it's computer science. Cool, huh?

Chaos and Complexity

At the same time as I was reading all this philosophy, I was also reading a lot of books from the Q section of the library -- cybernetics, systems theory, complexity science. A motley crew of scientists from various disciplines seemed to be coming up with a view of the world that wasn't all that different from Peirce's and Nietzsche's. At least, if you looked at it all from a sufficiently abstract perspective....

Chemistry explains how atoms and molecules combine together to form the structures we see around us. What branch of science explains how huge networks of re-relating relationships give rise to the mind and world? None of the ordinary ones, I rapidly realized. For a while I thought I would have to invent this science myself. Then I noticed that it was taking shape all around me, under a variety of fairly confusing and misleading names, like chaos theory and complexity science and systems theory. Very exciting!

Why did it take science so long to catch up with Peirce's and Nietzsche's intuition that the mind, brain and universe are a complex network of events and relations? A brief glance through history reveals the reason. The first signs that it was catching up were the cybernetics and General Systems Theory traditions that were active from roughly 1940 to 1960. These traditions spawned a great number of interesting scientific and conceptual advances, including McCulloch and Pitts' groundbreaking work on neural networks that I told you about in the last chapter, and other work that's lasted, like Gregory Bateson's work on family psychology and cybernetic anthropology, and the foundations of engineering control theory and robotics. But, after a decade or two, General Systems Theory seemed to collapse under the weight of its own ambitions. Like Peirce, the systems theorists offered an understanding of the universe as a whole; but unlike Peirce, they tried to provide scientific precision on the basis of this understanding, and they pretty much failed, except in some special cases. The technological and conceptual and mathematical tools weren't there to deal with these intuitions scientifically. The story is much like the story of neural net AI, and not coincidentally -- the early work on neural nets was really work on cybernetics and systems theory. AI didn't exist as a field back then.

In the '80s and '90s, right under my nose as I was growing up as a scientist, cybernetics and systems theory re-emerged under the name of "chaos and complexity." It came out a little bit better the second time around, due mainly to the presence of huge amounts of computing power.

W. Ross Ashby, one of the great systems theorists, wrote a book in 1952 called Design for a Brain. But he had no way to verify whether his design would actually work. The technology to validate or falsify his theory didn't exist. The systems theorists of the forties, fifties and sixties recognized -- in an intuitive, Peircean way -- the riches to be found in the study of complex self-organizing systems. But they lacked the tools with which to systematically compare their intuitions to real-world data. We now know quite specifically what it was they lacked: the ability to simulate complex processes numerically, and to represent the results of complex simulations pictorially.

Chaos theory is the study of deterministic, rule-based systems that nevertheless appear to be acting completely unpredictably, even at random. This is a kind of emergence of First out of Third, which was first uncovered by Poincaré at the very end of the 1800s, but was not studied systematically or really understood until the 1970s, when cheap computers made it possible to simulate the changes over time of a wide variety of mathematical systems. And complexity theory is the quantitative study of Fourth - the observation of common patterns that emerge from a variety of different complex systems.

When I was in college, no one outside of a few specialists had heard of these things. Now they're discussed in dozens of trade books available in every sizeable bookstore. This change is largely due to the advent of computers, which allow us to simulate complex networks of relationships and actually observe them building up into interesting structures. For chemistry, computers have been an immense boon, allowing us to study more complex structures in terms of their atomic and molecular composition. Modern biology today depends largely on computers, e.g., to map the human genome. For chaos and complexity, computers have been a necessity. They have allowed us to take self-organization, emergence, and networks of interrelation from the world of pure philosophy into the world of science. Peirce would be incredibly thrilled.

The concept of "chaos" seems to have a special resonance with nonscientists -- it highlights insights that are shared by literature, metaphysics, philosophy and quantitative science. If you look "chaos" up in the Oxford English Dictionary you find that it has both literary and scientific definitions. As a literary term, it refers to "a gaping void... formless primordial matter... a state of utter confusion and disorder." As a scientific term, chaos means "behavior of a system which is governed by deterministic laws but is so unpredictable as to appear random, owing to its extreme sensitivity to initial conditions." This apparent duality of meaning suggests that we might find a precise, mathematical explanation of the tumult and bedlam of everyday life. That was precisely what Peirce was seeking in his metaphysics, and exactly what I was seeking in my quest for the essence of mind.

The example that's usually used to illustrate chaos is the weather. We all know that the weather bureau cannot forecast weather precisely. But why is that? Not because of a lack of basic scientific understanding; the physical processes involved are well understood. Under laboratory conditions we can measure and predict variables such as heat, air pressure and humidity quite well, and for a long time meteorologists thought that they could make precise predictions if only they could get enough precise measurements. They collected these measurements and built very complex and sophisticated computer models of weather systems. But after a great deal of experience with these models, meteorologists have concluded that precise prediction of the weather is impossible, because very small differences in initial conditions can lead to tremendous variations in the outcome. They discovered that if they rounded their numbers off at the fifth or sixth decimal place, the results would be very different after a few days. In fact, even when we use numbers which are precise to the 100th decimal place, chaos occurs after a hundred or a few hundred iterations of a model.
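You can watch this sensitivity to initial conditions in the simplest chaotic system there is: the logistic map x -> r*x*(1-x) with r = 4. (This is my choice of toy example; real weather models are vastly more complicated, but the phenomenon is the same.) Two starting values that differ only at the sixth decimal place end up on completely unrelated trajectories.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x), r = 4.
# This is a standard toy example of chaos, not a weather model.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a = 0.123456  # one starting condition
b = 0.123457  # the "same" condition, rounded at the sixth decimal place

max_gap = 0.0
for step in range(50):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# Both trajectories are perfectly deterministic, yet within 50 steps the
# initial difference of one millionth has blown up to order one.
print(round(max_gap, 3))
```

The growth is roughly exponential: the tiny gap doubles every step or so until it is as large as the system itself allows, which is exactly what meteorologists found when they rounded their measurements off.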

The 1998 movie Sliding Doors played with an idea like this -- it tried to show what we might find if we could go back and change just one minor event in our lives. The movie juxtaposes two versions of a young woman's life, switching back and forth from one to the other. In one plot line, she just misses a subway train and takes a taxi home. In the other plot line, which alternates with the first, she catches the train. In one story, she gets home early, catches her boyfriend in bed with another woman, ends the relationship and begins a new life. In the other story, she is delayed by a problem with the subway and gets home after the other woman has left. She remains bogged down in the destructive relationship. This minor difference in what mathematicians call "initial conditions" changed her life. We cannot say it was "random," since in each story there were events which caused her to catch or miss the train. But it was close enough to random for practical purposes.

Complexity is a different concept from chaos: it deals not with the unpredictability of apparently predictable systems, but with common patterns across apparently diverse systems -- emergent patterns that have to do not with a system's microscopic details, but with the way its parts all work together and form a whole. This is exactly what cybernetics and systems theory were all about, and is exactly what Peircean philosophy was all about as well. When Peirce stated that there is but one law of mind, he wasn't making a statement about microscopic particle dynamics in brains. He was making an abstract, high-level statement about how minds were structured and how they change over time. He knew there might be many different kinds of mind - but all of them, he declared, followed this same general form. Peirce's law of mind is a statement of philosophy, but it is also a statement of "complex system science," and one of the best that we have.

When as a college senior in 1984 I wanted to write my undergraduate thesis on chaotic behavior in weather systems, I couldn't do so because of the difficulty, pre-Internet, of finding relevant research literature. But I studied the old writings of the systems theorists, seeing in them the best approximation to the general mathematical theory of digital mind that I was looking for. And then, in the late '80s and early '90s, when chaos and complexity suddenly became popular, I was poised to be part of this renaissance.

It was obvious to me that chaos and complexity were central to the brain/mind. I speculated that many brain processes are chaotic, for much the same reasons that the weather is. There are a great many neuronal groups, each of which interacts with the other neuronal groups. A slight change in one can have a major effect on the conclusions a thinker reaches. This is why thinking isn't entirely predictable, why we believe people have "free will." Human thoughts can't be precisely predicted.

But then, this doesn't mean human thoughts are completely unpredictable. There are patterns that occur again and again, always with minor variations. This leads us to what chaos theorists call attractors -- an attractor is a pattern that emerges over and over again, in computer simulations or empirical observations of a certain system. Or, to put it a little differently, an attractor is a behavior of a system that gives the impression of having "magnetic power" over other behaviors of the system. Once the system is following the attracting behavior, it's highly likely to keep on following that behavior until the end of time. And if the system is following some other behavior that's reasonably similar to the attracting behavior, it is highly likely to eventually wind up following the attracting behavior, or something very, very close to it.

Chaos theory focuses on the chaotic-ness of certain attractors: the fact that once the system is in a certain attractor, it's practically impossible to predict its path through the attractor. Complexity science focuses on the fact that in many cases very different systems have strikingly similar attractors.

The concept of attractor has revolutionized science, and for good reason: it is not merely a technical concept, it has deep philosophical relevance. In metaphysical terms, we may say it is a reconciliation of Heraclitus and Plato. Heraclitus said everything is change; Plato viewed everything in terms of abstract, ideal structures. Both of these approaches are useful, yet they seem contradictory. The concept of attractors helps us to resolve the apparent contradiction. Attractors are abstract structures of ideas that emerge from change.

Thinkers such as Peirce and Jung had struggled with the fact that orderly patterns can be seen to emerge from chaos. Peirce thought this orderliness came from First -- from the intrinsic unity of the undecomposable One. But he did not really have much to say about how the "undecomposable One" accomplished this task. What chaos theory tells us is that we need no mysterious force, no undecomposable One, to explain how orderliness emerges. We can see it emerging on our computer screens when we iterate certain kinds of complex equations. It is simply built into the logic of the universe; it is what happens when a great many factors interact with each other over a long period of time. Complexity is all about Fourth. Chaos tells us how Fourth pops out of Third, how structures synergetically arise out of simple relationships.

Mathematically, attractors are categorized into different types, of varying complexity. The simplest kind of attractor is a fixed point, which is a constant, steady state, or in physical terms, an equilibrium. It is a situation where a system keeps on doing the same thing, over and over again. For instance, once a pendulum stops swinging, and hangs limply at the bottom of its arc, it has reached a fixed point. Once a train of thought reaches an end, having arrived at a constant, unshakable conclusion, it has reached a fixed point. Many people's belief systems include fixed ideas of this type, which they defend despite any conflicting evidence.
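A fixed-point attractor is easy to demonstrate with a simple iterated map. In this sketch (a standard textbook example, not drawn from any particular physical system), the logistic map with r = 2.8 pulls almost every starting value toward the same equilibrium, just as the pendulum settles to the bottom of its arc:

```python
# A fixed-point attractor: for the logistic map x -> r*x*(1-x) with
# r = 2.8, almost every starting value in (0, 1) is pulled toward the
# same equilibrium x* = 1 - 1/r. A textbook sketch.

def step(x, r=2.8):
    return r * x * (1.0 - x)

def settle(x0, steps=200, r=2.8):
    x = x0
    for _ in range(steps):
        x = step(x, r)
    return x

x_star = 1.0 - 1.0 / 2.8         # the attracting fixed point
for x0 in (0.05, 0.3, 0.6, 0.95):
    print(round(settle(x0), 6))  # all four land on the same value
```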

The next most complex kind of attractor is a limit cycle, which is a periodic motion, oscillating around and around forever. Imagine a comet, zooming in toward the sun from outside the solar system then being captured by the sun. When it becomes entrained in orbit around the sun, it has reached a limit cycle attractor. Periodic attractors are very common in biological systems -- circadian rhythms, sleep/wake cycles, hormonal cycles, and so forth. Certain mental states, such as bipolar affective disorders (previously called manic-depression), are examples of the same kind of attractor.
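A limit cycle can likewise be demonstrated numerically. As a hedged illustration (the parameter choice is mine, from the standard chaos-theory literature), the logistic map x -> r*x*(1-x) at r = 3.2 settles into a perpetual period-2 oscillation, a discrete analogue of the comet's closed orbit:

```python
# A limit-cycle attractor: the logistic map at r = 3.2 settles into a
# period-2 oscillation, bouncing between two values forever.

def step(x, r=3.2):
    return r * x * (1.0 - x)

x = 0.1
for _ in range(1000):      # let the transient die away
    x = step(x)

cycle = [round(x, 6)]
for _ in range(5):         # sample the settled behavior
    x = step(x)
    cycle.append(round(x, 6))
print(cycle)               # two values, alternating forever
```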

Fixed points and limit cycles have been known for hundreds of years. But until computers made it possible to run simulations of complex systems over long periods of time, no one knew how many nonlinear, nonperiodic attractors there were. When scientists let their computer models run, they found that the systems often "locked into" strange but comprehensible patterns of behavior. The systems weren't repeating, they weren't oscillating -- they were wandering about, but in vaguely predictable ways. Their behaviors, graphed in appropriate diagrams, sketched out intriguing, complicated "fractal" pictures, rather than single points, circles, or random blurs.
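The Henon map is a classic, easily programmed example of such a "strange" attractor (my choice of illustration, not one from the text): its orbit wanders forever inside a bounded, fractal-shaped region without ever exactly repeating.

```python
# A strange attractor: the Henon map (a = 1.4, b = 0.3), a standard
# two-variable chaotic system. The orbit never settles to a point or
# a cycle, yet it stays confined to a bounded fractal region.

def henon(x, y, a=1.4, b=0.3):
    return 1.0 - a * x * x + y, b * x

x, y = 0.0, 0.0
points = []
for i in range(10000):
    x, y = henon(x, y)
    if i >= 100:               # skip the initial transient
        points.append((x, y))

xs = [p[0] for p in points]
ys = [p[1] for p in points]
print(min(xs), max(xs))        # bounded, roughly -1.3 .. 1.3
print(min(ys), max(ys))        # bounded, roughly -0.4 .. 0.4
print(len(set(points)))        # essentially no point ever recurs
```

Plotting these points would sketch out the familiar crescent-shaped fractal of the Henon attractor -- wandering, but in a vaguely predictable region, exactly as described above.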

Fractals are images in which the parts repeat the features of the whole. When you look at them through a microscope, they look much as they do to the naked eye. Or, on a larger scale, an ocean coastline has a fractal nature. If you view it from an airplane, you don't see a straight line, but a roughly shaped line of inlets, beaches and peninsulas. If you look at a short stretch of the coastline, you will see a similar set of features. This makes it difficult to measure the actual length of a coastline, because the length keeps increasing as you measure every nook and cranny. Of course, it is much simpler to just measure the distance a ship sailing along the coast would travel, but this does not capture the true nature of a coastline.
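The coastline paradox can be put in numbers using the Koch curve, an idealized mathematical coastline (my illustration, not the author's): each time you shrink the ruler to a third of its length, you discover four segments where you previously saw three, so the measured length multiplies by 4/3 and never converges.

```python
# The coastline paradox, schematically, on a Koch curve: measuring
# with ever-finer rulers reveals ever more wiggles, so the measured
# length grows without bound instead of converging.

ruler = 1.0
length = 1.0
for level in range(6):
    print(f"ruler = {ruler:.5f}  measured length = {length:.4f}")
    ruler /= 3.0         # measure with a ruler one-third as long...
    length *= 4.0 / 3.0  # ...and find 4 segments where there were 3
```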

The behavior of a system, after it has locked into a "strange attractor," is chaotic, in the sense that you can't predict exactly what it's going to do. You can tell what it's going to do in some approximate sense -- it's going to stay within the strange attractor. Maybe there's a general statistical structure to how it moves within the attractor. But you can't make exact predictions -- just as, though you can predict that spring will follow winter and will be followed by summer, you can't predict the weather on any particular day.

Once the concept of strange attractor was available, scientists found that it applied to many phenomena that they had previously thought were periodic with a certain amount of random error. For instance, the human heartbeat isn't actually periodic; it follows a strange attractor. People with heart disease have overly orderly, periodic heartbeats; healthy people's heartbeats are more chaotic. The behavior of individual ants in an ant colony is chaotic, rather than random or easily predictable. The behavior of the whole colony, however, is highly structured and much more predictable. Examples in physics and chemistry abound: weather systems, lasers, organic and inorganic chemical reactions, pendulums, etc. One system after another, which had previously been understood as basically periodic or basically random, turned out to display interesting "strange attractor" behaviors.

Strange attractors of all sorts have been found in computer simulations of the brain; and Walter Freeman has found good evidence for strange attractors in the olfactory cortex of the rabbit (the part of the rabbit brain that deals with the sense of smell). Strange attractors and chaos have been reported in human mood fluctuations. Many theorists have suggested a fundamental role for strange attractors in mind/brain dynamics, although this is difficult to prove mathematically because psychological data is much more limited than the data one can collect in the physical sciences. Precise quantitative measurement is often not possible, so we have to rely on the clinical observations of brilliant observers such as Carl Jung.

Mathematically speaking, only deterministic systems display attractors; real systems always have a random aspect and thus display only "probabilistic attractors" which have a small but definite chance of being escaped. But the basic idea is the same, and probabilistic attractors can be found in human affairs as well as in mathematics. Heroin addiction is one example, falling in love is another. Once you're in either of these states, it's really tough to get out; and once you're almost in one, you're in great danger of slipping in for real. But there's nothing absolute or definite about this -- there's always a probability of escaping the attractor, or of not being attracted in the first place. Real-world attractors are always probabilistic, chancy, not 100% definite. We cannot predict perfectly.
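One can sketch a probabilistic attractor numerically. The toy model below is my own illustration, with arbitrary parameter values: a particle sits in a double-well landscape with stable resting points at -1 and +1, and random kicks give it a small but definite chance of escaping from one well to the other -- like quitting an addiction, or falling out of love.

```python
import random

# A toy "probabilistic attractor": two attracting wells (x = -1 and
# x = +1) plus noise. The particle hovers near one well for long
# stretches, but occasionally the kicks push it over to the other.
# All parameter values are invented for illustration.

def simulate(steps=20000, dt=0.05, noise=0.55, seed=1):
    rng = random.Random(seed)
    x = 1.0                    # start settled in the right-hand well
    side = 1
    switches = 0
    for _ in range(steps):
        # drift toward the nearest well, plus a random kick
        x += dt * (x - x ** 3) + noise * (dt ** 0.5) * rng.gauss(0, 1)
        if x * side < 0:       # crossed over to the other well's side
            switches += 1
            side = -side
    return switches

print(simulate())              # escapes happen, but only now and then
```

With the noise turned down, escapes become vanishingly rare and the wells behave almost like deterministic attractors; with it turned up, the particle hops freely -- the "small but definite chance of being escaped" is tunable.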

Nor do we need to. In the real world, exactitude is almost never necessary. Intelligent systems are generally concerned with observing emergent patterns in the world around them, and predicting the emergent patterns that will arise in the future. This is precisely the kind of task Webmind is designed to perform. Webmind, like all truly intelligent systems, has the ability to operate in environments that are unpredictable on the level of numerical details, yet moderately predictable on the level of emergent patterns. It is designed to find the attractors which emerge over time in complex systems.

The Evolving Mind

Another piece of the philosophical puzzle is evolution. Evolution is really an amazing example of complexity science at work. If I were challenged to give an example of a scientific theory that applies to all kinds of complex systems, from the immune system to the human brain to the global economy, I would point to the theory of evolution through natural selection. Evolutionary adaptation is ubiquitous because the basic logic is so simple. Only two things are required for evolution to occur. First, there must be a large number of competing entities, which can survive or not survive depending on circumstances in their environment. Second, these entities must have the ability to mutate or combine to form new entities. Over time, this process leads to the production of a population of entities that are adapted to each other and to their environment. This basic evolutionary dynamic plays a role in the human mind, and in Webmind, in many, many ways.
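These two requirements -- differential survival plus mutation -- are enough to produce adaptation, as a toy simulation shows. In the sketch below the "organisms" are just numbers, and the environment favors those near an arbitrary target value; every detail is an invented illustration.

```python
import random

# Evolution needs only (1) entities that survive or die depending on
# their environment and (2) the ability to mutate. Here the organisms
# are plain numbers; the environment favors values near a target.

rng = random.Random(0)
TARGET = 3.14159

def fitness(x):
    return -abs(x - TARGET)              # closer to target = fitter

population = [rng.uniform(-10.0, 10.0) for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]          # differential survival
    children = [x + rng.gauss(0, 0.1) for x in survivors]  # mutation
    population = survivors + children

best = max(population, key=fitness)
print(round(best, 3))                    # has adapted toward the target
```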

The basic logic of natural selection applies way beyond the context in which it's normally thought about -- the evolution of organisms and species. It pops up again and again and again, in one place after another, all throughout the living world. For instance, natural selection is the essence of Burnet's clonal selection theory of the immune system, the foundation of all modern immunology. Because of AIDS, immunology is constantly in the newspapers these days, but still, very few people know that the immune system actually evolves by natural selection -- it evolves new antibody types every time it is presented with a novel antigen. Antibodies mutate and reproduce in the spleen and maybe other places, and then the ones that are best at attacking the bad guy, the antigen -- the germ or virus or whatever -- survive and get to mutate and reproduce some more. The immune system is remarkably intelligent, although its intelligence is limited to one task - protecting the body against infection. It solves a very hard mathematics problem -- optimizing new antibodies to defeat new infectious agents -- and it remembers its answers for the lifetime of the organism.

There are lots of other examples of evolution theory. You can look at societies as evolving. Way back in the mid-1800s, Herbert Spencer applied the theory of natural selection to human societies -- at just about the same time that Charles Darwin applied it to the evolution of biological species. Of course, Spencer's work had some problems -- he concluded that rich people are better than poor people, so that we should just let poor people die because they've been proven evolutionarily unfit, in order to encourage the further evolution of the human race. He didn't come to grips with the subtle dynamics of society and culture, which are an example of the interaction of evolution and ecology. Edward O. Wilson's modern work on "sociobiology" is basically in the same vein as Spencer's, except that Wilson is a vastly more careful scientist, and so he's avoided Spencerian "Social Darwinist" stupidity.

On more of a micro scale, some chemists are now studying how life evolved in primordial soups of chemicals. Some physicists are using evolutionary theories to explain how space-time and physical law developed at the beginning of the universe, beginning with a multitude of spacetimes and laws in mutual flux, out of which only the strongest, stablest -- ours -- survived. As I already mentioned, Gerald Edelman and other biologists have proposed that natural selection governs the organization of neural pathways in the brain.

Evolution has also invaded computer science. Of course, normally computer programs are created, not evolved, but there's something called "genetic programming," in which computer programs are evolved by natural selection. In this approach, instead of writing a program yourself, all you do is design objectives that you want the program to fulfill. The "genetic programming engine" will then do your work for you -- it'll evolve a population of programs coming closer and closer to fulfilling the criteria you laid out. This approach has proved fairly effective for solving practical problems in engineering and computer science. For example, suppose you want to write a program to schedule trains. You need to define what constitutes a good scheduling algorithm. You need to write a little program that will evaluate, for any scheduling algorithm you throw at it, how good it is at scheduling. Then the genetic programming engine will evolve a scheduling algorithm for you, by inventing some at random, selecting the best, and letting the best mate with each other and mutate. The mating and mutation of algorithms can be done in various different ways, but ultimately it's no more complicated than the mating and mutation of DNA strands.
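The workflow can be sketched in miniature. In the toy below, bit strings stand in for real programs and a fixed target stands in for the scheduling objective -- everything is an invented illustration. The user supplies only the fitness function; the engine invents random candidates, keeps the best, and lets them mate (crossover) and mutate:

```python
import random

# A miniature genetic-algorithm "engine". Candidates are bit strings;
# the user writes only the fitness function. Selection, crossover and
# mutation do the rest. All names and parameters are illustrative.

rng = random.Random(42)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(candidate):
    """User-supplied objective: how many bits match the target?"""
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b):
    cut = rng.randrange(1, len(a))        # mate two parents
    return a[:cut] + b[cut:]

def mutate(c, rate=0.05):
    return [1 - bit if rng.random() < rate else bit for bit in c]

population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(30)]
for gen in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]             # select the best
    population = parents + [
        mutate(crossover(rng.choice(parents), rng.choice(parents)))
        for _ in range(20)
    ]

print(max(map(fitness, population)))      # close to the maximum of 16
```

Swapping the bit strings for program trees and the bit-matching objective for a scheduling evaluator gives the genetic-programming setup described above; the selection/crossover/mutation loop is unchanged.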

We use genetic algorithms in Webmind, along with many other techniques. The real subtlety here is in having the system figure out what goals to evolve programs towards -- once you've got the technique mastered, the evolution of programs to satisfy stated goals is a fairly mechanical procedure, though sometimes a really slow one. Webmind is also evolutionary in some less explicit ways, aside from its use of genetic programming techniques for some purposes.

With all these cool applications, though, the theory of natural selection still has some problems, conceptual problems right at its core. A lot of people have noticed that the basic logic of natural selection seems circular. We say that the "fittest" survived, but how do we know that they were the fittest? Because they survived!

This question leads you into the interesting interactions between evolution and ecology. When you evolve a program using genetic programming, you set up a fitness function, which objectively evaluates each program in the population, independently of all the others. This is similar to thinking of human evolution as being driven by "intelligence" as a fitness function. You can envision evolution as a kind of judge who looks at each human, decides how intelligent it is, and kills it if it's too stupid, or lets it mate and produce offspring if it's smart enough. In reality, though, who decides that "intelligence" is the quality humans are selected on? Humans survive to reproduce if they survive -- they may survive due to intelligence or muscular strength or sneakiness or whatever. It's we, analyzing the situation, who determine that intelligence is an indicator of survival.

When programs evolve in Webmind, they sometimes use an artificial, human-constructed fitness function. For instance, we evolve programs to use the past of one numerical time series to predict the future of another, as part of Webmind's causal inference module. This is highly structured, artificial evolution. On the other hand, we also evolve programs to govern the system's actions, in which case the fitness is just how happy the programs make the system. The mechanics of crossover and mutation are the same in both cases, but the fitness evaluation procedure is very different -- in the one case it's structured and mechanical; in the other it's fluid and environmentally dependent, as in real-world natural selection.

One criterion I proposed in The Evolving Mind is: fit entities tend to have a lot of emergent pattern between themselves and their environment. This is one way of looking at natural selection. Organisms that generate a lot of emergent pattern tend to survive. Of course, this isn't the whole story, but it's a major component of fitness -- both in real ecosystems and in the immune system and in the human mind and in Webmind. Emergent pattern is there between an antibody and an antigen, because the successful antibodies are the ones that have shaped themselves to match the antigens' shapes. Emergent pattern is there in the mind/brain: a part of the mind is successful not because of what it does on its own, but because of what it does together with other parts of the mind. Emergent pattern is there in organisms: an animal survives not because it's "good" in some abstract sense, but because its qualities match the qualities of other creatures in its environment. A fast predator survives only in an environment with slow prey -- the opposition pattern fast/slow is an emergent pattern. And so on, and so on, and so on.

The subtlety of fitness is observed all the time in the evolution of products and companies. Often it happens that the winning product isn't the "best" one, and this pisses people off. What's happening is that people are applying idealized fitness functions that don't actually correspond to real-world fitness. Look at the software industry, for example. "Software quality" is a rough indicator of fitness, but not a great one. Windows has poor software quality but is a hell of a survivor, whereas Unix has greater quality but has been less successful at propagating itself. But Windows generates a great amount of emergent pattern together with other things in its environment -- this is responsible for a lot of its success. The fact that there are so many Windows software products is a simple example of emergent pattern: put a software product together with Windows and you get a lot of emergence, in the form of working software! This is a great illustration of the idea that fitness has more to do with fitting into the environment and generating emergent pattern with the environment than with meeting idealized, isolated fitness criteria.

Evolution and Autopoiesis

We've seen that evolution really only has meaning when you consider it in terms of ecology. But ecology is a pretty vague concept -- it refers to the interconnection of different elements in a system, the adaptation of these elements to each other, and so forth. One system-theoretic concept that captures part of what's special about ecology is autopoiesis.

Autopoiesis is a strange-sounding word. Basically it just means "self-developing" or "self-producing." The word was coined by the biologists Francisco Varela and Humberto Maturana to describe how systems maintain and sometimes develop or even transform themselves. Autopoietic systems share several characteristics, which the following examples illustrate.

To think about the self-maintaining aspect of systems, consider the "qwerty" arrangement of letters on almost all computer keyboards. It was designed to place the most frequently used keys at different ends of the keyboard so that the keys of old-fashioned manual typewriters would be less likely to jam. It's not the best system around: with electronic typewriters and computers, experts agree that typing is easier and faster with a system such as the Dvorak keyboard, which groups the most frequently used keys in the center. With Dvorak, your fingers don't move as much, so you go faster and make fewer mistakes. Computers can easily be switched to Dvorak, yet almost everyone continues to use qwerty. Why?

The answer is the strength of the larger system of which each of our keyboards is a part. It is best for most of us to use qwerty simply because most of the keyboards in the world are already using it. If you learn to use Dvorak on your computer at home, you will be unable to touch type on computers at your workplace, school or library. Qwerty survives despite the fact that no one makes a profit off of it. There is no financial conspiracy supporting it; in fact, no one really likes it. There is, on the other hand, a dedicated band of enthusiasts promoting Dvorak (you can easily find them by searching the Web).

Qwerty survives, not because of intrinsic merit, but because it's part of a self-sustaining system. This is a common economic phenomenon, which economists refer to as a "network externality." Qwerty has not, however, developed or improved itself in any way. The letters are still where they were when it was created. It does not, therefore, exhibit all of the characteristics of a true autopoietic system.
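The lock-in dynamic behind qwerty's survival can be caricatured in a few lines of code. In this sketch (the majority-amplifying choice rule is my own simplification, not an economic model from the text), each person periodically re-chooses a layout, preferring whichever one most people around them already use -- so an initial majority snowballs:

```python
import random

# Lock-in through network effects, caricatured. Each round, everyone
# re-chooses a keyboard layout; compatibility with other users makes
# the majority layout disproportionately attractive.

rng = random.Random(7)
N = 1000
users = ["qwerty"] * 600 + ["dvorak"] * 400     # qwerty starts at 60%

for _ in range(20):
    share = users.count("qwerty") / N
    # probability of picking qwerty rises faster than its share
    p = share ** 2 / (share ** 2 + (1.0 - share) ** 2)
    users = ["qwerty" if rng.random() < p else "dvorak" for _ in range(N)]

print(users.count("qwerty"))    # the majority snowballs toward lock-in
```

Notice that nothing in the rule refers to the intrinsic merit of either layout -- only to what everyone else is doing, which is exactly the self-sustaining dynamic described above.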

Computer programs like Windows, Word or Excel also benefit from market externalities and may exhibit more of the characteristics of autopoiesis. Many people use them, even though other programs may actually be better, simply because so many other people do. At the same time, the programs do develop and improve to keep up with the competition. They allow for the development of macros which extend their range and, as they grow, they incorporate additional features. Thus, they could be said to be more autopoietic, though this is engineered into them by their producers rather than developing spontaneously.

All complex systems exhibit some degree of autopoiesis, though the ways in which they accomplish this vary. The human body consists of a collection of interconnected parts precisely designed so as to be able to support each other. Unlike starfish, though, we don't have the ability to grow a replacement arm or leg if we lose one (though we may wear an artificial one). I believe the liver is the only human organ with the capacity for massive tissue regeneration. Although we can't replace lost brain tissue, surviving parts of the brain can sometimes take over the functions of parts which have been lost. Some theorists think of the brain as equivalent in its organization to a "hologram," a three-dimensional picture each part of which contains an image of the whole. If part of a hologram is lost, it can be partially reproduced from the other parts. The brain doesn't function exactly like a holographic picture, but it does have this characteristic of redundancy, accomplished by storing information about the whole in many of its parts.

The modern market economy may be a better example of an autopoietic system, actually. There is no central planning authority, traditional obligations are minimal, yet all the goods which anyone is willing to pay for are generally available, because someone chooses to make a profit by meeting the demand. New products are continually emerging, often first as accidental byproducts of older processes. The ability of the market economy to adapt and develop has proved to be vastly greater than Karl Marx and his supporters anticipated.

Another example -- which I wrote about in my book Chaotic Logic and my father discussed in his book Turncoats and True Believers -- is the belief system. If you've ever argued with a religious fundamentalist or a conspiracy theorist, you've felt the autopoietic nature of belief systems. Every point in the fundamentalist's argument is backed up by at least fourteen other points in his or her argument. The belief system is often so firmly defended that it is completely impervious to contrary evidence. And if some gap in the argument should emerge, new evidence is quickly found or invented. Belief systems do evolve and change, but in ways which strengthen the core principles. There are always new ways to say "God exists because it says so in the Bible; and the Bible is believable because God wrote it!"

Although autopoiesis is usually a force for stability, it also plays an important role in change. This is a fairly subtle point, but it's a crucial one for understanding mind. The most exciting thing about evolution isn't the struggle for survival among existing forms, but the creation of new forms of existence. Religious people believe that the awesome beauty, complexity and orderliness of the universe proves that it must have been created by a Supreme Being. Darwin challenged their belief with the theory that the natural order evolved over a long period of time. But his theory of natural selection didn't seem quite adequate to explain this. It explained how some organisms replaced others, but where did these new organisms come from in the first place? Mutation supplies part of the answer, but how does mutation generate new forms that are stable and functional enough to grow and reproduce? How does a cat come about through mutation of non-cats, or mating of non-cats? Evolution, in itself, clearly explains how you get new breeds inside the same species, but doesn't quite explain the origin of new species, which is what Darwin really wanted to explain.

Autopoiesis fills the gap: it explains how even the smallest innovation or change, which may arise purely at random, can mushroom into a self-sustaining, self-reproducing system with remarkable persistence. This is clearly the case in today's high-tech economy, where a small innovation like the Web browser can catalyze the development of a multibillion-dollar industry. The personal computer was originally a hobbyist's toy; now, a couple of decades later, it's one of the dominant factors in the world economy. In biology, a small innovation, like the ability to gulp air for a short time or crawl around on a mutated fin, can lead to a whole panoply of air-breathing land animals. One small thing changes, but in a system where all the parts interlock and create each other, this little change leads to the re-creation of the whole system. That's autopoiesis acting in synch with evolution.

If you think about it philosophically, you can see autopoiesis and evolution as two fundamental forces in the universe. Autopoiesis is Being; evolution is Becoming. Autopoiesis is a force of conservation and maturation. It is the means by which structures preserve and refine themselves. An autopoietic system is not a perpetual motion machine in the ordinary physics sense, but in another sense it acts like one. It dissipates energy, but it conserves pattern and structure, making only changes which strengthen the existing structure. On the other hand, evolutionary selection is a more destructive and revolutionary force. It destroys patterns which may have survived for eons, replacing them with new forms. Evolution and autopoiesis are opposed to each other, but they also cooperate with each other. You need both Being and Becoming to make a mind or a world.

Autopoiesis is central to the human mind. The mental entities psychologists refer to as "self," "personality," "complex," "belief systems," "neurosis," "psychosis," and so on -- all these develop autopoietically. They're self-developing, self-reinforcing, self-maintaining, and often remarkably persistent. Changing them can be really hard even when they cause considerable mental distress and dysfunction -- because they penetrate through the whole mind, created by the rest of the mind and creating it in turn.

Webmind was designed to achieve autopoiesis and evolution together -- a prerequisite for the emergence of mind. As Webmind nodes interact with each other, some ideas are reinforced, while others are discarded. The ideas that are most consistently reinforced become self-sustaining, and Webmind seeks out additional information and knowledge to sustain them. This is in many ways a chaotic and unpredictable process, which is what makes Webmind entirely different from any software which simply follows rules programmed in by its creator.
 

The Psynet Model of Mind

So let's cut to the chase. Prior to the formation of Webmind Inc., inspired by Peirce, Nietzsche, Leibniz and other philosophers of mind, I spent many years of my career creating my own ambitious, integrative philosophy of mind. After years of searching for a good name, I settled on "the psynet model" -- psy for mind, net for network.

According to the psynet model of mind:
   1. A mind is a system of agents or "actors" (our currently preferred term) which are able to transform, create and destroy other actors.
   2. Many of these actors act by recognizing patterns in the world, or in other actors; others operate directly upon aspects of their environment.
   3. Actors pass attention ("active force") to other actors to which they are related.
   4. Thoughts, feelings and other mental entities are self-reinforcing, self-producing systems of actors, which are to some extent useful for the goals of the system.
   5. These self-producing mental subsystems build up into a complex network of attractors, meta-attractors, etc.
   6. This network of subsystems and associated attractors is "dual network" in structure, i.e. it is structured according to at least two principles: associativity (similarity and generic association) and hierarchy (categorization and category-based control).
   7. Because of finite memory capacity, mind must contain actors able to deal with "ungrounded" patterns, i.e. actors which were formed from now-forgotten actors, or which were learned from other minds rather than at first hand -- this is called "reasoning". (Of course, forgetting is just one reason for abstract, or "ungrounded," concepts to arise. The other is generalization: even if the grounding materials are still around, abstract concepts ignore the historical relations to them.)
   8. A mind possesses actors whose goal is to recognize the mind as a whole as a pattern -- these are "self".

A system of actors has relationships with one another and performs interactions on one another. A relationship is a piece of data stored in the system's memory, recording some kind of relationship between entities...  An interaction is an event...

According to the psynet model, at bottom the mind is a system of actors interacting with each other, transforming each other, recognizing patterns in each other, creating new actors embodying relations between each other. Individual actors may have some intelligence, but most of their intelligence lies in the way they create and use their relationships with other actors, and in the patterns that ensue from multi-actor interactions. We need actors that recognize and embody similarity relations between other actors, and inheritance relations between other actors (inheritance meaning that one actor can in some sense be used as another one, in terms of its properties or the things it denotes). We need actors that recognize and embody more complex relationships, among more than two actors. We need actors that embody relations about the whole system, such as "the dynamics of the whole actor system tends to interrelate A and B." This swarm of interacting, intercreating actors leads to an emergent hierarchical ontology, consisting of actors generalizing other actors in a tree; it also leads to a sprawling network of interrelatedness, a "web of pattern" in which each actor relates to some others. The balance between the hierarchical and heterarchical aspects of the emergent network of actor interrelations is crucial to the mind.
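As a very rough sketch of this kind of actor system -- emphatically not Webmind's actual implementation, with all names and numbers invented for illustration -- we can build a few actors that pass attention ("active force") along weighted relationship links, and watch attention pool in the densely interrelated cluster:

```python
# A minimal psynet-style sketch: actors pass a fraction of their
# "attention" along weighted relationship links each cycle, so
# attention accumulates around densely interconnected actors.
# Purely illustrative; not Webmind's actual implementation.

class Actor:
    def __init__(self, name):
        self.name = name
        self.attention = 1.0
        self.links = {}            # other Actor -> relation strength

    def relate(self, other, strength):
        self.links[other] = strength

def spread(actors, rate=0.2):
    """One dynamical cycle: each actor passes attention to relatives."""
    outflow = {a: a.attention * rate for a in actors}
    for a in actors:
        total = sum(a.links.values())
        for other, w in a.links.items():
            other.attention += outflow[a] * w / total
        a.attention -= outflow[a]

cat, dog, pet, rock = (Actor(n) for n in ("cat", "dog", "pet", "rock"))
cat.relate(pet, 1.0); dog.relate(pet, 1.0)   # inheritance-like links
cat.relate(dog, 0.5); dog.relate(cat, 0.5)   # similarity links
pet.relate(cat, 0.5); pet.relate(dog, 0.5)
rock.relate(pet, 0.1)                        # weakly connected outsider

for _ in range(50):
    spread([cat, dog, pet, rock])

for a in (cat, dog, pet, rock):
    print(a.name, round(a.attention, 3))     # attention pools in the cluster
```

Total attention is conserved, but it drains away from the weakly connected "rock" and accumulates around the tightly interrelated cat/dog/pet cluster -- a cartoon of how self-reinforcing actor subsystems could become prominent in the network.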


 

A self-organizing actors operating system, relying on perceptual data coming through short-term memory, translated into system patterns. In turn, these patterns are the subjects of meaning sharing and reasoning, driven by system dynamics toward the emergence of self.

All of us involved in the project believe that the Webmind AI Engine, once fully implemented and tested, will lead to a computer program that manifests intelligence, according to the criterion of being able to carry out conversations with humans that will be subjectively perceived as intelligent. It will demonstrate an understanding of the contexts in which it is operating, an understanding of who it is and why it is doing what it is doing, an ability to creatively solve problems in domains that are new to it, and so forth. And of course it will supersede human intelligence in some respects, by combining an initially (probably modest) general intelligence with capabilities unique to digital computers, like accurate arithmetic and financial forecasting.

All the bases are covered in the design given here: every major aspect of the mind studied in psychology and brain science. They're all accomplished together, in a unified framework. It's a big system, and it's going to demand a lot of computational resources, but that's really to be expected; the human brain, our only incontrovertible example of human-level intelligence, is itself a complex and powerful information-processing device.

Not all aspects of the system are original in conception, and indeed, this is much of the beauty of the thing. The essence of the system is an adaptable, self-reconstructing platform for integrating insights from a huge number of different disciplines and subdisciplines. In Webmind, aspects of mind that have previously seemed disparate are drawn together into a coherent self-organizing whole. The clichéd Newton quote, "If I've seen further than others, it's because I've stood on the shoulders of giants," inevitably comes to mind here. (As does the modification I read somewhere: "If others have seen further than me, it's because giants were standing on my shoulders."...) The human race has been pushing toward AI for a long time; the Webmind AI Engine, if it is what I think it is, just puts on the finishing touches.

While constructing an ambitious system like this naturally takes a long time, we were making steady and rapid progress until Webmind Inc.'s dissolution in early 2001. It seems Arthur C. Clarke was off by a bit -- Webmind won't be talking like HAL in the film 2001 until a bit later in the millennium. But if the project can be rapidly refunded, before the group of people with AI Engine expertise dissipates, we can expect Baby Webmind's first moderately intelligent conversations sometime in the year 2002, and that's going to be pretty bloody cool!