While the recent inventions of Web 2.0 and User Generated Content (UGC) seem to be radical departures from the computing culture we grew up in, their organic social metaphors are in fact rooted in the beginnings of computer science. In the 1940s and '50s work of Alan Turing, John von Neumann, and Norbert Wiener, most discussions of the future of computing evolve into a study of the brain. The natural automata of human thought, the way in which our ideas express our independence: this is the machine intelligence that technologists tried to design into early computers.
Alan Turing was fascinated by automata and their relationship to natural human thought. In his 1950 paper "Computing Machinery and Intelligence," Turing outlined an experiment that could determine whether a computing machine has the capacity to think. The Turing test functions as follows: human "X" and respondent "Y" take part in a teletype conversation, but X cannot know whether Y is human or a machine. If, after a specified amount of time, X believes that Y has responded like a human, and Y is in fact a machine, then Y can be said to have that human capacity of thought.
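The imitation game described above is essentially a protocol, and it can be sketched in a few lines of Python. Everything below is illustrative: the canned answers, the naive_interrogator judge, and the question list are invented for this example and are not part of Turing's paper.

```python
def make_machine_respondent():
    """A toy machine respondent: canned answers keyed by question (hypothetical)."""
    canned = {
        "What is 2 + 2?": "4",
        "Do you write poetry?": "Count me out on this one. I never could write poetry.",
    }
    return lambda question: canned.get(question, "I would rather not say.")

def turing_test(interrogator, respondent, questions):
    """Run the imitation game: the interrogator sees only the teletype
    transcript, never the respondent itself, and returns True if it
    judges the respondent to be human."""
    transcript = [(q, respondent(q)) for q in questions]
    return interrogator(transcript)

def naive_interrogator(transcript):
    """A placeholder judge: deems Y human if every answer is non-empty."""
    return all(answer.strip() for _, answer in transcript)

verdict = turing_test(naive_interrogator, make_machine_respondent(),
                      ["What is 2 + 2?", "Do you write poetry?"])
# If the verdict is "human" and the respondent is a machine, the machine
# has passed -- Turing's criterion for attributing thought.
```

The judge here is deliberately trivial; Turing's point was precisely that the interrogator's standard is behavioral, based only on the information entering and exiting the channel.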
Writing about Turing's work, the historian of computing William Aspray observes that this:
“was among the earliest investigations of the use of electronic computers for artificial-intelligence research...He attempted to break down the distinctions between human and machine intelligence and to provide a single standard of intelligence, in terms of mental behavior, upon which both machines and biological organisms could be judged. In providing his standards, he considered only the information that entered and exited the automata…Turing was moving toward a unified theory of information and information processing applicable to both the machine and the biological worlds.”
The fusion of machine and biology is promoted as a core architectural principle in the Interim Progress Report on the Physical Realization of an Electronic Computing Instrument (Julian H. Bigelow, James H. Pomerene, Ralph J. Slutz, and Willis H. Ware; Princeton: The Institute for Advanced Study; 1 January 1947). The report was prepared for John von Neumann and the other IAS authorities to document progress on a machine based entirely on mathematical equations.
Left to right: James Pomerene, Julian Bigelow, von Neumann, and Herman Goldstine
Von Neumann had joined Princeton's Institute for Advanced Study as a mathematician in 1933. About ten years later he started concentrating on something less theoretical and more practical (which alienated many of his colleagues): building an electronic computing machine. The project was a deep meditation on the act of creation. Some of the greatest minds across a variety of disciplines (mathematics, biology, engineering, physics) converged in Princeton to help von Neumann "physically realize" his ideas.
IAS Report, 1947
According to the report, organs are "portions or sub-assemblies of the machine which constitute the means of accomplishing some inclusive operation or function; as 'arithmetic organ.'" Note how the processor in this case extends its influence over other parts in an "inclusive operation." The organ of social media was anticipated as early as 1947, even without an Internet to enable it at scale.
Von Neumann continued to extend his computer research toward an understanding of the human brain, a trajectory he described in the introduction to his posthumously published 1958 work The Computer and the Brain.
In 1948, Norbert Wiener, the founder of cybernetics, published Cybernetics: Or Control and Communication in the Animal and the Machine. His use of the word "animal" differs from Turing's logic or von Neumann's brain, but he is similarly concerned with the organs of information and their ability to relay information between systems:
“It is a noteworthy fact that the human and animal nervous systems, which are known to be capable of the work of a computation system, contain elements which are ideally suited to act as relays. These elements are the so-called neurons or nerve cells... The mechanical brain does not secrete thought ‘as the liver does bile,’ as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity. Information is information, not matter or energy.”
Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine, 1948
In late 2004, Joshua Schachter, the creator of del.icio.us, described tags to me as simply crystallized attention. Both terms interested me: while attention has become my chief investigation, the transparent materialism expressed by "crystallized" has also been a key focus. Put the two together and you get, in Wiener's words, a "secretion" of passive behavioral data.
Seth Goldstein, April 2006
Just because a tag is a form of information doesn't mean that it lacks physicality. Without being matter or energy, can a tag be made of something else, something that comes closer in nature to mirror neurons? Call them attentrons. Remember that mirror neurons are a form of biological material: they fire when the subject performs an action, but also when the subject observes somebody else performing that action. In the latter case, the firing of a mirror neuron rests entirely on its ability to passively mimic the behavior of somebody else. In this quiet absence of a human impulse, attention is full.
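The firing rule just described can be captured in a toy sketch: a unit that responds to a preferred action whether the agent performs it or merely observes it. The class, action names, and attributes here are invented for illustration; this is a metaphor in code, not a neuroscience model.

```python
class MirrorNeuron:
    """Toy model of a mirror neuron: fires for its preferred action
    whether the action is self-performed or only observed."""

    def __init__(self, preferred_action):
        self.preferred_action = preferred_action
        self.firings = 0            # total times this unit fired
        self.observed_firings = 0   # firings triggered by passive observation

    def on_event(self, action, performed_by_self):
        """Fires on self-performed and observed instances alike -- the
        passive mimicry described above. The source of the action is
        recorded but does not gate the firing."""
        if action != self.preferred_action:
            return False
        self.firings += 1
        if not performed_by_self:
            self.observed_firings += 1
        return True

neuron = MirrorNeuron("grasp")
neuron.on_event("grasp", performed_by_self=True)   # own action: fires
neuron.on_event("grasp", performed_by_self=False)  # observed action: fires too
neuron.on_event("wave", performed_by_self=False)   # unrelated action: silent
# neuron.firings == 2; the second firing was pure passive mimicry
```

The point of the sketch is the symmetry: the unit makes no distinction between acting and watching, which is the "quiet absence of a human impulse" in which attention is full.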
Seth, I'm not sure that tags really serve this function, but I would guess that some form of mirror-neuron-like activity may be key to making computers both more human and more capable of replacing human attention. If, as the research on autism suggests, mirror neurons are vital to relating our personal experience to the world we see, that would address the biggest gap in machines' ability to usefully take on tasks that would otherwise need our attention.
I wonder if you saw the blurb in last week's Business Week about the "Kaburobos", robot fund managers being offered to investors by Trade Science in Japan. It's here but behind a registration wall. They look like an example of machines doing increasingly sophisticated work without our attention. I'll be interested to see how that goes.
Back to the mirror neurons briefly, it may be that the very mechanism that allows us, and potentially machines, to understand the world we observe, is also the mechanism that requires us to relate to it emotionally, perhaps addressing the great fear of the pitiless, powerful automaton.
Posted by: Gordon Jackson | Friday, August 11, 2006 at 12:02 PM