This is an excerpt from an unpublished manuscript by Ben and Ted Goertzel - all rights reserved.

Webmind's Digital Environment

When I first started thinking about building Webmind, I thought specialized new equipment would be necessary -- possibly some kind of biological computer: a cellular or crystalline machine that grew its own memory and processing adaptively as it thought. But when I looked into it, I found that this technology was not likely to be ready in less than a couple of decades. So I looked into the possibility of using massively parallel supercomputers -- computers that, like the brain, do many, many things at once. These had the great merit of already existing, although there were few of them and it was difficult to get access to them.

Fortunately for me, the University of Nevada got a Cray Y-MP while I was teaching mathematics there, as part of Nevada's payoff from the federal government for agreeing to accept nuclear waste no one else would take. The Cray looked impressive -- a bunch of shiny obelisks in a room by themselves. It was very fast, of course, and surprisingly easy to use. But its usefulness for me was limited by its design. It had thousands of processors, but they all had to do the same thing at the same time. This was great for doing math, but not so great for doing mind.

A few years later, when I was at the University of Western Australia, I was able to get access to the Australian National University's Connection Machine. This supercomputer had a better architecture, because each processor could do its own thing, independently of the others. But it was very awkward to program and, all in all, it wasn't really that much faster than a network of Unix workstations, or top-end PC's. It had its uses: a colleague of mine used it to make a neat movie of the 4-dimensional Mandelbrot set, a beautiful fractal. But these expensive parallel processing supercomputers were often underutilized because it was so difficult to write programs that really took advantage of their advanced capabilities. I didn't have the time to program Webmind myself on this machine, nor the resources to hire professional programmers to do it. So I contented myself with writing books about how it should be done, while trying to get research grants to fund the necessary programming.

By the early and mid 1990's, however, I was amazed to see that ordinary PC's were rapidly catching up with supercomputers in computing power. Networks of PC's were outrunning multimillion dollar behemoths. I began to sour on supercomputers as I had soured on biological computers. Both were great ideas in principle, but had fallen prey to economic realities. Engineering something efficiently takes money, and money follows the market. The market was there for PC's, and so PC's had attracted engineering effort and had become fabulously efficient and flexible and usable. The first brain for my digital mind, I realized, would probably be a network of lowly PC's, with perhaps some high-end servers thrown in. I would have to innovate by making ordinary hardware do extraordinary things.

You might wonder why these hardware details matter. After all, Webmind is founded on an abstract theory of mind, a theory that has nothing to do with the physical medium that is used. In principle, it doesn't matter if the psynet model runs on a brain, a network of PC's, or a supercomputer. But I was tired of writing books that were read by a small band of fellow enthusiasts. I wanted to prove that my system could really work, and that meant I had to adapt my theories to computer networks as they existed in the late 1990s. If I had been able to implement Webmind on different hardware – say, a humongous mainframe or a brainlike supercomputer – it would have been a very different system, similar to Webmind on the abstract level but different in almost every detail.

Of course, this is true for humans as well: our minds can function only when our brains are healthy and intact. The human mind is a product of nerve cells, neurotransmitters, blood, air, and water. Our human concept of intelligence is body-centric, physical-world-centric. Webmind is a product of computer hardware, an operating system, and a very important software program: the Java virtual machine. Webmind's world is the Net. It doesn't have eyes, it has datafeeds. It doesn't have cells, it has RAM. Just as we exist in a world of biological organisms, Webmind exists in a world of computer software.

In this chapter I will discuss the hardware and software environment that Webmind lives in, with a particular emphasis on the Java language and what makes it especially appropriate for creating Webmind, and creating the global brain. This gives the background that is necessary to understand the Webmind architecture itself.

The History of Hardware and the History of AI

In Chapter Two, we saw how two sharply divergent approaches to artificial intelligence emerged over the last half century. The first emulated the brain with neural nets, the second emulated the mind with logical rules. The neural net approach hoped to reproduce the brain's complex processes of self-organization. The rule-based approach sought to reproduce the logical and linguistic operations of the conscious mind. Both were important advances, but neither was able to achieve true intelligence. This was partly due to theoretical limitations, but also to the limitations of the computer hardware which was available at the time.

Back in the 1940's and 50's, each of these approaches to AI was tied to a different computer architecture. The neural net researchers used a design in which a large number of tubes, transistors and circuits interacted with each other at the same time. This was analogous to the brain, where millions of neurons interact with each other simultaneously. In computer language, we call this a parallel, distributed, analog computer architecture. Parallel means that many things go on at the same time. Distributed means that processes are spread out among a number of different units. Analog means that quantities are measured and manipulated physically instead of being converted into digits or numbers -- an old-fashioned phonograph record is analog, with a needle moving up and down physically in a groove, while a music CD is digital, with a laser beam counting microscopic dots.

The rule-based AI researchers used a serial, digital computer architecture in which the computer does one thing at a time, very quickly, then goes on to do something else. This seemed analogous to the logic of the higher functions of the mind, when a thinker focuses on one logical task, completes it, and then goes on to another. This may not have been the way the brain worked, but it turned out to be a practical way to organize a computer for many tasks. As computer technology developed, the serial, digital design prevailed; computer manufacturers and designers concluded that it was the best way to build inexpensive, powerful computers to do the tasks people wanted done.

This development in computer manufacturing might have meant the death of neural net AI, because the machines it used were not keeping up with the researchers' programming needs. But it didn't work out that way. Instead, neural net programmers adapted to the trends in computer design. Neural nets, genetic algorithms and Webmind thrive because they have learned how to use serial computers to carry out programs which logically would run better on a parallel design. For the larger, more complex programs this has been possible because serial computers can be clustered together in networks. Each of them still does only one thing at a time, but any number of them can be running at once. They are able to work independently and still be linked together, which makes them more like the architecture of the brain. The brain has groups of neurons that work independently, but are able to exchange information with other groups of neurons at any time.

In the future, we may reconsider the decision to rely on serial, digital computer design. Artificial intelligence might be better programmed on parallel, distributed, analog computers which allow many things to be done at each point in time, at many different points in space. Few such systems are in use today, but back in the 40's and 50's and 60's, they could be found in industry and academia, and some engineers thought they were the wave of the future. Conceptually, these systems have a lot to be said for them. Memory is dynamic and is not fundamentally distinct from input/output and processing. Furthermore, the basic units of information are not all-or-nothing switches, but rather continuous signals, with values that range over some interval of real numbers. This is how the brain works: the brain is an analog system, with billions of things occurring simultaneously, and with all its different processes occurring in an intricately interconnected way.

In the serial, digital computer, by contrast, there is a central processor that does one thing at each time, and there is a separate, inert memory to which the central processor refers. This architecture was adopted for practical engineering reasons, not on theoretical or philosophical grounds. John von Neumann, the pioneer computer scientist who invented the serial architecture, was actually a champion of neural-net-like models of the mind/brain. He was an interdisciplinary super-genius who wrote a book explaining how self-reproduction could emerge from a fairly simple self-organizing system (a precursor of modern "cellular automaton" models), thus using complexity science to debunk the notion of a "vital force" contained in living beings, endowing them with the power of reproduction. He had drunk the complexity Kool-Aid before almost anyone else. But he was also a practical man, spending much of his time on government military contract work, and he wanted to design a machine that could be easily built and tested. He succeeded, and now his design is ingrained in the hardware and software industries, just as thoroughly as, say, the internal combustion engine is ingrained in the automobile industry. Most likely, every computer you have ever seen or heard of has been made according to the von Neumann architecture.

Neural net and other emergent systems approaches to artificial intelligence have survived the dominance of the von Neumann architecture by relying primarily on serial digital simulations of parallel distributed analog systems. In effect, programmers get the von Neumann machines to pretend that they are parallel, analog machines. The fact that neural net and emergent systems AI has flourished in this fundamentally hostile computing hardware environment is a tribute to the fundamental soundness of its underlying ideas. I doubt very much, on the other hand, that rule-based AI would have ever become dominant or even significant in a computing environment dominated by parallel processing hardware. In such an environment, there would have been strong pressure to ground logical rules in underlying network dynamics. The whole project of computing with logic, symbolism and language in a formal, disembodied way, might never have gotten started.

It's possible that the choice of the von Neumann architecture may have been a mistake -- that computing would be far better off if we had settled on a more brain-like, cybernetics-inspired hardware model back in the 1940's. The initial engineering problems might have been greater, but they could have been overcome with moderate effort. The billions of dollars spent on computer R&D in the past decades would have been spent on brainlike computers rather than on the relatively unmindlike, digital serial machines we have today. In practice, however, no alternate approach to computer hardware has yet come close to the success of the von Neumann design. All attempts to break the von Neumann hegemony have met with embarrassing defeat.

And not for lack of trying! Plenty of different parallel-processing digital computers have been constructed. I've already mentioned the two that I've briefly worked with: the restricted and not very brainlike "vector processors" inside Cray supercomputers, and the more flexible and AI-oriented "massively parallel" Connection Machines manufactured by Thinking Machines, Inc. The Cray machines can do many things at each point in time, but they all must be of the same nature. This approach is called SIMD, "single-instruction, multiple dataset": it is efficient for scientific computation, and some simple neural network models, but not for sophisticated AI applications. The Thinking Machines computers, on the other hand, consist of truly independent processors, each of which can do its own thing at each time, using its own memory and exchanging information with other processors at its leisure. This is MIMD, "multiple instruction, multiple dataset"; it is much closer to the structure of the brain. The brain, at each time, has billions of "instructions" and billions of "data sets"!

These parallel digital machines are exciting, but, for a combination of technical and economic reasons, they have not proved as cost-effective as networks of von Neumann computers. They are used almost exclusively for academic, military and financial research, and even their value in these domains has been questioned. Thinking Machines Inc. has gone bankrupt, and is trying to re-invent itself as a software company; their flagship product, GlobalWorks, is a piece of low-level software that allows networks of Sun workstations to behave as if they were Connection Machines. (Sun workstations are high-end engineering computers, running the Unix operating system and implementing, like all other standard contemporary machines, the serial von Neumann model.)

With GlobalWorks, all the software tools developed for use with the Connection Machines can now be used in a network computing environment instead. There is a serious loss of efficiency in doing so: instead of a network of processors hard-wired together inside a single machine, one is dealing with a network of processors wired together by long cables, communicating through complex software protocols. However, the economies of scale involved in manufacturing engineering workstations mean that it is actually more cost-effective to use the network approach rather than the parallel-machine approach, even though the latter is better from a pure engineering point of view.

And there are still companies producing analog, neural net based hardware -- radical, non-binary computing machinery that is parallel and distributed in nature, mixing up multiple streams of memory, input/output and processing at every step of time. For instance, the Australian company Formulab Neuronetics, founded in the mid-80's by industrial psychologist Tony Richter, manufactures analog neural network hardware modeled fairly closely on brain structure. The Neuronetics design makes the Connection Machine seem positively conservative. Eschewing traditional computer engineering altogether, it is a hexagonal lattice of "neuronal cells," each one exchanging information with its neighbors. There are perceptual neurons, action neurons, and cognitive neurons, each with their own particular properties, and with a connection structure loosely modeled on brain structure. This technology has proved itself in a variety of process control applications, such as voice mail systems and internal automotive computers, but it has not yet made a splash in the mainstream computer industry. By relying on process control applications for their bread and butter, Neuronetics hopes to avoid the fate of Thinking Machines. But the ultimate ambition of the company is the same: to build an ultra-high-end supercomputer that, by virtue of its size and its brainlike structure, will achieve unprecedented feats of intelligence.

As of now, this kind of neural net hardware is merely a specialty product. But it's possible that, as PC's fade into history, these analog machines will come to play a larger and larger role in the world. In the short run, we might see special-purpose analog hardware used in the central servers of computer networks, to help deal with the task of distributing information amongst various elements of a network computing environment. In the long run, one might see neurocomputers joining digital computers in the worldwide computer network, each contributing their own particular talents to the overall knowledge and processing pool.

The history of AI and computer hardware up until today, then, has been a somewhat sad one, with an ironic and optimistic twist at the end. The dominant von Neumann architecture is patently ill-suited for artificial intelligence. Whether it is truly superior from the point of view of practical engineering is difficult to say, because of the vast amount of intelligence and resources that has been devoted to it, as compared to the competitors. This is autopoiesis in action. The von Neumann architecture has incredible momentum -- it has economies of scale on its side, and it has whole industries, with massive collective brainpower, devoted to making it work better and better. The result of this momentum is that alternate, more sophisticated and AI-friendly visions of computing are systematically squelched. The Connection Machine was abandoned, and the Neuronetics hardware is being forced to earn its keep in process control. This is the sad part. As usual in engineering, science, politics, and other human endeavors, once a certain point of view has achieved dominance, it is terribly difficult for anything else to gain a foothold.

The ironic and possibly optimistic part, however, comes now and in the near future. Until now, brainlike parallel architectures have been squelched by serial von Neumann machines -- but the trend toward network computing is an unexpected and unintentional reversal of this pattern. Network computing is brainlike computer architecture emerging out of von Neumann computer architecture. It embodies a basic principle of Oriental martial arts: when your enemy swings at you, don't block him, but rather position yourself in such a way that his own force causes him to flip over.

In the long run, it may be inevitable that computer architecture evolves along lines resembling the structure of the brain, because the structure of the brain itself evolved along lines dictated by the fundamental metaphysics of intelligence. We took a turn away from brain-like computer architecture back in the 1940's, rightly or wrongly, but now we are returning to it in a subtle and unforeseen way. The way to do artificial intelligence and other sophisticated computing tasks is with self-organizing networks of intercommunicating processes -- and so, having settled on computer hardware solutions that do not embody self-organization and intercommunication, we are impelled to link our computers together into networks that do.

Programs and Operating Systems

A von Neumann computer's working memory is called its RAM, Random Access Memory, which means that the computer can access any part of the memory with roughly equal ease, at any time. Reading and writing to the memory and to peripheral devices (screen, keyboard, mouse, printer, etc.) is controlled by the CPU, the Central Processing Unit, which does basic arithmetic and logic operations on numbers stored in a few special "registers," and swaps numbers between the registers, the memory, and peripherals. All this is not at all how the human brain works -- in the brain, memory and processing are all mixed up, and each little bit of memory is always doing its own little bit of processing, at the same time as all the other little bits of memory. In the computer, on the other hand, a vast RAM is serviced by a CPU that acts on a tiny bit of it at each point in time -- but the CPU acts very fast -- much, much faster than any part of the brain.

But most programs that run on your computer don't need to know much about the RAM or the CPU, any more than you need to as a software user. What allows this ignorance is the OS, the operating system, the program that controls a computer's hardware. Windows 98, Windows 3.1, DOS, Mac OS, Linux, and Solaris are all operating systems. In early computers, operating systems were written in machine language, a language of 0's and 1's that deals directly with the numbers in the registers acted on by the CPU. In fact, in the very earliest computers, all programs were written in machine language, in binary codes. For example, "00" often meant STOP the machine, "01" meant READ in a number from the keyboard, "02" meant ADD two numbers together, "03" meant DISPLAY a number on the screen. And so on. Computer programs were lines of zeroes and ones like this:

01 00 15

02 00 19 00 18

03 00 19

00.

The first programmers actually wrote their codes in these binary digits, but they jotted down letters to remind themselves of the code, e.g., A means ADD, which is 02 in a particular machine's code. Then one fine day, in 1948, a programmer named Maurice Wilkes, in Cambridge, England, got the bright idea that the computer could be taught to remember these letters and convert them to digits. He had an assistant type up some code to do this, and from then on programmers could use words, and let the machine convert them into binary digits. Thus assembly language was born, and programming became much quicker and easier, although the programmer still had to know exactly where each bit of information was going in the memory of the machine. (This history is recounted and explained helpfully in Daniel Kohanski's book, The Philosophical Programmer: Reflections on the Moth in the Machine.)

The next step was to develop "high-level" languages, which were translated into machine code by programs known as compilers. These languages allowed the programmer to give the computer precise instructions about what to do, while letting the computer figure out many of the details about how to do it. The first major high-level language was FORTRAN (for "formula translator"), invented by John Backus at IBM in 1953. In FORTRAN one can write mathematical formulas using words and numbers, such as: PROFIT = REVENUE - EXPENSES. You can then put these formulas into "DO loops," which cause the computer to do the same computation over and over again but with different data each time. This meant that researchers who knew little about computers could learn how to punch their instructions onto cards, turn them in at the computer center, and get results back on a printout. This is how my father, a sociologist who knew very little about computer architecture, was working when he first took me to visit the university computer center at Rutgers University in Camden, N.J.
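To make the flavor of this concrete, here is the same formula-in-a-loop idea sketched in Java rather than FORTRAN (Java being the language the rest of this chapter dwells on). The class name and data values are invented for illustration; the point is only that the formula is written once and applied to each item of data in turn.

```java
// A hedged sketch of the FORTRAN-style "formula inside a loop" idea,
// written in Java. The numbers below are invented example data.
public class ProfitLoop {
    public static double[] profits(double[] revenue, double[] expenses) {
        double[] profit = new double[revenue.length];
        for (int i = 0; i < revenue.length; i++) {
            // The FORTRAN formula PROFIT = REVENUE - EXPENSES,
            // applied to different data on each pass through the loop.
            profit[i] = revenue[i] - expenses[i];
        }
        return profit;
    }

    public static void main(String[] args) {
        double[] p = profits(new double[]{100, 200}, new double[]{40, 150});
        System.out.println(p[0] + " " + p[1]); // prints: 60.0 50.0
    }
}
```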

The early personal computers came with a program called BASIC which was a simplified version of FORTRAN, adapted for use with a keyboard and a screen. Users could write their own programs and see the results immediately on the screen without having to bother with taking cards to a computer center. Soon punched cards became obsolete, and all computers worked with keyboards and screens.

But long after most programs were written in higher-level languages, operating systems were still coded directly in assembly language. This enabled the programmers to make the most efficient use of the very limited memory in the computers, relying on tricks that sometimes come back to haunt us today. For example, many of them abbreviated dates into two digits to save a little computer memory, never worrying about what would happen when the year 2000 came around. Today, however, computer memory is cheap, so there is no need to squeeze things in so tightly. Operating systems like Windows have become tremendously complex, too complex to be written entirely in assembly language. Parts of them are still written in assembly language; the rest is written in a high-level programming language, usually C or its variant C++, and talks to the machine through the assembly-language parts.

Still, as complex as an OS for a von Neumann machine is, the problem of writing an OS for a brainlike massively parallel computer is much, much harder. One of the reasons for the dominance of the von Neumann machine is the comparative ease of writing operating systems and programs for it. Each generation of computer scientists seems to learn anew that writing programs for massively parallel machines is really tough. In a way, one might say, von Neumann machines are made for programming, whereas brainlike machines are made for evolving and learning. Programming a brain is just as hard as getting an ordinary digital computer to evolve and learn. In building Webmind we have walked a fine line between these two difficult tasks, programming just as much as we need to and leaving the rest to evolution and learning. This "middle way" suits the hardware platform that Webmind runs on, which is the computer network, a kind of half-serial, half-parallel entity.

Webmind, like Windows, could not practically be programmed in assembly language; it is sufficiently complicated that it has to be programmed in a high-level programming language like C or Java. High level languages use words and concepts like variables, functions, if-then, repeated loops, and so forth, which may not be familiar to the average person in the street, but are common knowledge among mathematicians and technical types. High-level languages allow us humans to program computers to do very complicated things that would take decades or centuries to map directly into the 0’s and 1’s of machine language.

Some high-level languages are specialized for particular types of programs – for instance, FORTRAN is great for math and engineering calculations; whereas LISP and PROLOG are made for rule-based, logic-based AI. C was written originally for systems programming, for writing OS’s and related things, but has since been widely adopted for nearly everything. C places few restrictions on the programmer – it’s closer to machine language than the other high-level languages – and so C programs tend to run faster than programs in other high-level languages; this is part of the explanation for C’s popularity.

The Java Virtual Machine

And now we get to the heart of the matter as far as Webmind is concerned: Java. Just as the OS provides a layer of abstraction on top of the hardware, so Java provides a layer of abstraction on top of the OS. A Java program doesn't talk to the operating system at all. Instead, it talks to a special program called the Java Virtual Machine (JVM). The JVM is a kind of virtual reality for computer programs. Java programs don't have to consider the hardware or OS of the computers they will be run on. They are written as if they were going to run on a Java machine -- a machine that exists only as a software program. This software program, the Java Virtual Machine, talks to the OS, and translates the things the program has asked it to do (get information from a file, write a line on the screen, whatever) into the language of the OS, which in turn translates these requests into the digital language of the machine. It is very complicated programming, but once it is working it makes the lives of programmers much easier. As the slogan goes, it allows them to "write once, run anywhere." The same program can run on any machine, any OS, so long as that machine and OS have a working Java Virtual Machine.

In practice, "write once, run anywhere" runs into some limitations – a few obscure but crucial technical aspects of the JVM differ on different OS’s, and user interfaces need to be tested and tweaked on different platforms, even though the same program code works on all platforms. But still, Java is revolutionary. Just as high-level languages were a huge step beyond machine language, so Java is a huge step beyond platform-dependent computing. For the Internet, it is a thing of pure beauty. The same bit of Java code, serving as a basic global brain cell, can run on any machine, anywhere, using the same platform-independent program code to talk to other global brain cells. The global brain could be built without Java, by hodge-podging together C code for different OS’s and platforms. But it would be a lot messier, and a lot harder, and it would break down a lot more.

Object-Oriented Programming

In languages such as FORTRAN and BASIC, there are standardized procedural commands. These commands are then used to do things to data, which are kept in a separate place. For example, if we want to sort a list, we call upon the SORT command and tell it which list needs to be sorted. This procedural approach works well enough as long as you have only a few procedures, and most of your data is in a limited number of formats. But one person usually has to understand the whole program and how everything fits together. A good programmer adds a lot of "comments" to the program so another programmer can figure it out in the future. When this is not done well, it is hard to figure out what a program is doing, as programmers found when they had to modify many old programs to solve the "Y2K" problem.

As programs got more and more complex, and more people worked on them at the same time, programmers began using a new approach known as "object-oriented" programming. Instead of keeping the procedures in one place and the data in another, they are bundled together into entities called "objects." If we have some data that might need to be sorted, we bundle it together with a sorting algorithm which is tailored for that kind of data. When another programmer needs that data to be sorted, he lets it sort itself using its own sorting algorithm. In a project involving many programmers, each programmer writes his own objects, and defines ways for his objects to interact with other programmers' objects. This makes it more difficult for programmers to sink into the style affectionately termed "spaghetti code" -- every bit of the code referencing every other bit of the code in a completely willy-nilly way.
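A minimal sketch of this idea in Java, with invented names: the list's data and the procedure that sorts it live together in one object, so callers can ask the object to sort itself without ever knowing which algorithm it uses.

```java
import java.util.Arrays;

// A hedged illustration of bundling data with its own sorting procedure.
// The class name and data are invented; only the object-oriented idea
// from the text is being demonstrated.
public class ScoreList {
    private final int[] scores; // the data, hidden inside the object

    public ScoreList(int[] scores) {
        this.scores = scores.clone();
    }

    // The object sorts itself; which algorithm is used (here, the
    // standard library's Arrays.sort) is its own private business.
    public int[] sorted() {
        int[] copy = scores.clone();
        Arrays.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        ScoreList list = new ScoreList(new int[]{3, 1, 2});
        System.out.println(Arrays.toString(list.sorted())); // prints: [1, 2, 3]
    }
}
```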

Procedural programming worked well enough as long as programs were simple enough to be controlled by a single set of rules that could be understood by a single programmer. Procedural languages worked well for rule-based artificial intelligence programs, and they had the same limitations: they could not deal with situations where the rules were constantly changing and evolving. Object-oriented programming languages, such as C++ and Java, are the most widely used programming languages today because they work for projects which are too complex to be handled by a fixed list of procedures. For the same reason, object-oriented programming is ideal for programming the psynet model of mind.

But how does object-oriented programming make sense of all this complexity? How can the programs produce the tremendous number of objects that are needed for a complex program such as Webmind? If each object had to be created from scratch, without any guidelines, the task would be impossible, just as human thought would be impossible if we began each day with a blank slate, following no general patterns. In Chapter Three, we saw that human thinking often follows archetypal patterns that recur again and again. In an object-oriented program, one follows the same model. One has to create archetypes, general patterns that are used as models for objects. In the Java and C++ languages, these archetypes are called "classes." The classes don't just emerge, as the archetypes seem to in the human brain. Programmers write them. When a programmer writes the definition of a "class," he or she specifies the general traits that apply to all members of that class.

A very simple example may help. If one were to define a class called "box," one would specify that each box has a length, a width and a height. Furthermore, one could specify that the volume of the box should be computed by multiplying the three dimensions together. This is an archetype of a box; to have a real box you need only to specify the values for the length, width and height of a specific box. The computer will then give you the volume, even if you've forgotten the formula. The "class" is the archetypal box, the "objects" are all the specific boxes you choose to create. This is called object-oriented programming because the description of a box and the formula for computing the volume of the box are stored in the same place in the computer -- the computer can treat them as one thing or "object."
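The box archetype described above can be sketched in Java roughly as follows. The code is illustrative, not anything from Webmind itself: the class is the archetype, and each Box object created from it is one specific box with its own dimensions.

```java
// A hedged sketch of the "box" class from the text: the class is the
// archetype, each Box object a specific box.
public class Box {
    private final double length, width, height;

    // To get a real box from the archetype, you specify the dimensions.
    public Box(double length, double width, double height) {
        this.length = length;
        this.width = width;
        this.height = height;
    }

    // The formula travels with the data: every Box can compute
    // its own volume, even if you've forgotten the formula.
    public double volume() {
        return length * width * height;
    }

    public static void main(String[] args) {
        Box b = new Box(2, 3, 4);       // one specific box
        System.out.println(b.volume()); // prints: 24.0
    }
}
```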

Of course, this example is very simple. We could easily extend the box class by adding the fact that each box has a color, is made of cardboard, wood or some other material, and has walls of varying thickness. And we could add formulas to compute the strength of the box as a function of its thickness and the material of which it is made. We could add handles, labels, and so on. Before you know it, we might have a program of some use to a box manufacturer. In real programs, the classes are often very complex, and they may include a number of sophisticated computations. It is often better, however, to keep some of the objects as simple as possible and simply create a lot of them. A large program such as Webmind consists of many hundreds of classes, and, depending on the size of the Webmind, millions or hundreds of millions of objects in these classes.

Each object takes up a certain amount of RAM, and each object occupies the CPU's attention when its methods are called -- that is, when whatever code currently controls the CPU asks the object to use its methods to act on its data. Generally there are many more objects than CPU's: a normal computer has only one CPU, and the computers we generally run Webmind on today have only four. So not every object can do its thing at the same time. Java does offer something called "multithreading," a way of telling the CPU how to switch between one object's methods and another's. But right now Java's multithreading doesn't work as well as one would like, and writing a Java program involving hundreds of millions of objects that all constantly want to do things is exceedingly difficult, though not impossible. We've done it, and so have maybe a few dozen other groups of programmers, working on projects of various sorts.
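A minimal sketch of what multithreading looks like in Java: two objects that both want the CPU, with the JVM deciding when each one's methods actually run. The Worker class here is purely illustrative, not a piece of Webmind.

```java
// Two node-like objects competing for the CPU via Java threads.
public class ThreadDemo {
    // A hypothetical object that repeatedly does a small unit of work.
    static class Worker implements Runnable {
        private final String name;
        private int stepsDone = 0;

        Worker(String name) { this.name = name; }

        public void run() {
            for (int i = 0; i < 5; i++) {
                stepsDone++;    // one small unit of "thinking"
                Thread.yield(); // offer the CPU to other threads
            }
            System.out.println(name + " finished " + stepsDone + " steps");
        }

        int steps() { return stepsDone; }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(new Worker("worker-a"));
        Thread b = new Thread(new Worker("worker-b"));
        a.start(); // both threads now compete for the CPU
        b.start();
        a.join();  // wait for both to finish
        b.join();
    }
}
```

With two workers this is manageable; the difficulty the text describes comes from scaling this switching scheme up to millions of objects that all constantly want a turn.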

When an object isn't used anymore, it has to be thrown out -- the computer has to be told to free up the memory the object uses, to make room for other objects. This is called "garbage collection." One of the biggest differences between Java and C++ is that in C++ each program has to do its own garbage collection. When an object a program creates isn't useful anymore, the program has to explicitly tell the computer to free up the block of RAM it was using. In Java, on the other hand, this is done automatically: the Java Virtual Machine spots objects that are no longer used by anything and zoinks them. This results in programs that run a bit slower, because sometimes the JVM takes longer than necessary to spot useless objects. On the other hand, it results in much less buggy code, and it reduces coding time significantly. It makes Java much easier and more efficient to work with.
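The difference can be seen in a few lines of Java. This is a sketch: the JVM decides for itself when to actually collect, so the timing here is illustrative, not guaranteed.

```java
import java.lang.ref.WeakReference;

// Java's automatic garbage collection: once every reference to an
// object is dropped, the JVM may reclaim its RAM on its own schedule.
public class GcDemo {
    public static void main(String[] args) {
        Object scratch = new Object(); // some temporary object
        // A WeakReference lets us watch the object without keeping it alive.
        WeakReference<Object> watcher = new WeakReference<>(scratch);

        System.out.println(watcher.get() != null); // prints true: still reachable

        scratch = null; // drop the only strong reference -- eligible for GC
        System.gc();    // a hint to the JVM; collection is not guaranteed now

        // In C++ the programmer would have had to free the memory explicitly
        // (roughly, "delete scratch;"), and forgetting to would leak the RAM.
        System.out.println("collected yet? " + (watcher.get() == null));
    }
}
```

The Java program never frees anything itself; it just stops pointing at the object and lets the JVM notice.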

Webmind Objects

Webmind involves many different kinds of Java objects, but the most important kind of object in Webmind is the Node. These Nodes are, very roughly speaking, analogous to neuronal groups in the brain, which interact with each other to create thoughts. Each Webmind node contains some data, some methods that act on this data, and some methods that tell it how to interact with other nodes. All this is broadly analogous to how the brain functions: in the brain, neuronal groups interact with each other, causing changes in each other. Webmind's intelligence emerges from the interaction of the nodes, just as human intelligence emerges from the interaction of specialized groups of neurons. Nodes contain links, which point to other nodes and tell the node which other objects its methods should act on. On this very abstract level, Webmind's nodes are like Nietzsche's "dynamic quanta," or Peirce's "habits." They're just bundles of mind-stuff, and each one acts on the other ones to which it stands in "a peculiar relation of affectability," i.e., to which it is linked.
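To give the flavor of this in code -- and only the flavor, since Webmind's actual Node classes are far more elaborate -- here is a toy node in Java. The class, its fields, and its update rule are all invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// A highly simplified, hypothetical node: some data, links to other
// nodes, and a method that acts across those links.
public class Node {
    private double activation;  // the node's "data"
    private final List<Node> links = new ArrayList<>(); // nodes it can affect

    public Node(double activation) { this.activation = activation; }

    public void linkTo(Node other) { links.add(other); }

    // When this node "gets the CPU", it nudges the nodes it is linked to.
    // (The 0.1 weighting is arbitrary, purely for demonstration.)
    public void act() {
        for (Node other : links) {
            other.activation += 0.1 * this.activation;
        }
    }

    public double activation() { return activation; }

    public static void main(String[] args) {
        Node a = new Node(1.0);
        Node b = new Node(0.0);
        a.linkTo(b); // a stands in "a peculiar relation of affectability" to b
        a.act();     // a acts on the node it is linked to
        System.out.println(b.activation());
    }
}
```

The point is structural: each node carries its own data, its own behavior, and its own list of which other nodes that behavior touches.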

There are several dozen types of nodes, each one of which is a different Java class file; and there are dozens to millions of each kind of node in the RAM of a computer network running Webmind. In practice, as a Webmind is running, each node is a certain pattern of 0’s and 1's embodied as a pattern of electrical switches in a computer’s random access memory. This pattern of 0’s and 1’s sometimes gets hold of the CPU – when other nodes give it control – and then it carries out its methods (also encoded as patterns of 0’s and 1’s) that transform it, and transform other nodes (other patterns of 0’s and 1’s).

All computer programs are dynamical systems of 0’s and 1’s in RAM, which use the CPU to evolve themselves. Webmind is different in that it is a self-organizing system with a special structure to it, designed to lead to the emergence of useful, intelligent patterns of 0’s and 1’s. It divides RAM into regions corresponding to nodes, and has an organized scheme by which CPU control is passed among nodes, and by which nodes can affect other related nodes when they get CPU.

The basic logic is the same whether there is one CPU or twenty, and whether all the CPU's live on the same machine or are spread across different machines. CPU's on the same machine can service nodes in the same memory; CPU's on different machines service nodes in different physical memory units. But a node in one machine's memory can still have a link to a node in another machine's memory. Acting on the remote node is slower, because a message has to be sent from one machine's Java Virtual Machine to the other's, but this is handled by Java and the Webmind infrastructure; it's transparent to the node itself. This would be doable in C++, but it would be nasty; in Java it's relatively simple.

This is the beauty of Java: it allows the program to operate according to its own logic, across multiple machines and networks, without worrying too much about the specific hardware infrastructure underlying it. Java is not necessary for Webmind as an intelligent system -- one could program a Webmind in C or assembly language or whatever. But something like Java is necessary for Webmind or any other system to truly grow into a global brain. Programs must be freed from the constraints of hardware, and allowed to extend themselves freely through global computing space. This is what Java permits.

Having a programming language that allows programs to run on any machine gets you halfway to the needed infrastructure for the global brain. The other half is simple, fast, secure "network communications" so that different machines can talk to each other easily. Java supplies this capability as well. Webmind lobes can live on different machines and communicate with each other almost as if they lived on the same machine. Many machines, one mind. There are lots of nasty technical details involved, but they are surmountable. The network is the mind.
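A bare-bones sketch of that kind of communication in Java: two "lobes" on the same machine talking over a socket, exactly as they would across the Internet. The lobe names and message format are invented for illustration; real Webmind lobes exchange far richer messages.

```java
import java.io.*;
import java.net.*;

// Two "lobes" exchanging a message over Java's network plumbing.
public class LobeDemo {
    public static String exchange(String message)
            throws IOException, InterruptedException {
        ServerSocket server = new ServerSocket(0); // grab any free port

        // "Lobe one": waits for a message and echoes an acknowledgement.
        Thread lobeOne = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("ack: " + in.readLine());
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        lobeOne.start();

        // "Lobe two": connects, sends its message, reads the reply.
        String reply;
        try (Socket s = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println(message);
            reply = in.readLine();
        }
        lobeOne.join();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(exchange("hello from lobe two"));
    }
}
```

Replace "localhost" with a hostname across the world and the logic is unchanged -- which is the whole point: many machines, one mind.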

The Network is the Computer is the Mind.

The emergence of Java coincides with the rise of networks as the fundamental structure in computing. "The Network is the Computer," a recent Sun Microsystems marketing slogan, is surprisingly accurate as slogans go: the network should be, and increasingly really is, the computer. What is now inside the PC -- memory, processing, information -- will in the network computing environment be stored all over the place. The overall computation process is distributed rather than centralized, even for basic operations like word processing or spreadsheeting. Java, Sun correctly envisions, is a perfect tool for enabling this evolution.

The emergence of networks will change the face of commercial and personal computing. It's a familiar, lovely tale. First there were mainframes and terminals, then PC's and local-area networks ... and now large-scale, integrated network computing environments, providing the benefits of both mainframes and PC's, and new benefits besides. Applications can live on the PC, or they can live on the server, where the PC or NC ("Network Computer") terminal can access them as needed. Five years from now, most software will probably sit on Internet Service Providers' machines, and home and corporate users' machines will just run small programs accessing this software.

And what all this truly represents is a return to the origins of computer science, when the emphasis was on neural nets and parallel architecture. It goes a long way toward correcting the fundamental error committed in the 1940's and 1950's, when the world decided to go with a serial, von-Neumann style computer architecture, to the almost total exclusion of more parallel, distributed, brain-like architectures.

Mainframes and PC's mesh naturally with the symbolic, logic-based approach to intelligence. Network computing environments, on the other hand, mesh with a view of the mind as intercommunicating, intercreating processes. The important point is that the latter view of intelligence is the correct one. From computing frameworks supporting simplistic and fundamentally inadequate models of intelligence, one is suddenly moving to a computing framework supporting the real structures and dynamics of mind.

The mind is far more like a network computing system than like a mainframe-based or PC-based system. It is not based on a central system that services dumb peripheral client systems, nor is it based on a huge host of small, independent, barely communicating systems. Instead it is a large, heterogeneous collection of systems, some of which service smart peripheral systems, all of which are intensely involved in inter-communication. In short, by moving to a network computing framework, we are automatically supplying our computer systems with many elements of the structure and dynamics of mind.

This does not mean that network computer systems will necessarily be intelligent. But it suggests that they will inherently be more intelligent than their mainframe-based or PC counterparts. And it suggests that researchers and developers concerned with implementing AI systems will do far better if they work with the network computing environment in mind. The network computing environment, supplied with an appropriate operating system, can do half their job for them -- allowing them to focus on the other half, which is inter-networking intelligent agents in such a way as to give rise to the large-scale emergent structures of mind.

If our goal is to develop true intelligence, the von Neumann architecture of non-networked mainframe and PC computers is simply too limiting. It is too far from the basic logic of how mind works. The network computing approach, although still simplistic compared to the structure of the human brain, is fundamentally correct according to principles of cybernetics and cognitive science. As we move toward a worldwide network computing environment, we are automatically moving toward computational intelligence, merely by virtue of the structure of our computer systems, of the logic by which our computer systems exchange and process information. In other words, not only is the network the computer, it is the mind as well. This leads to a new, improved slogan: The network is the computer is the mind.