Living Computation
Manifesto
Managing distributed computations across large networks of separately
administered resources is in important ways more akin to managing a
human society than to marshalling the closely-held resources of a
single digital computer. The Von Neumann architecture simply will not
extend to an effective basis for large-scale distributed computing.
Viewing living systems, especially living ecosystems, as computational
systems provides many insights into what the successful architecture
will look like. In many ways the object-oriented programming approach
is an important step in the right direction, but at the same time,
fundamental issues---most quite obvious in the context of living
systems---remain largely unrecognized.
Sense of self
Consider, for example: Relatively ``complex'' living organisms such
as, say, mammals, are always ``designed, built, and tested''
on a whole-system basis---there are never any ``plug-ins,''
``patches,'' or ``upgrades'' to an individual's genetic code (at
least, not until very recently). The mechanisms for storing
and transmitting the germ line code defend against external
alterations in many ways---physical, chemical, biochemical,
developmental, immunological, instinctive, and cultural. Such
elaborate and expensive defenses may be explained by the relatively
high cost of producing a system. If the results could be easily
``hijacked,'' the capital investment would be unwarranted.
On the other hand, relatively ``simple'' living organisms such as
bacteria are capable of incorporating ``stray'' bits of external code
from their environment into their ``operating systems''---as when a
gene coding for drug resistance is observed to ``jump species''---to
the dismay, at least, of the complex organisms that produce the drug.
Such promiscuity may be explained by the relatively low cost of
producing a system combined with the potential gains to be had by
``stealing code.''
Is a computer system more like a collie or an E. coli? On
the one hand, even a personal computer is an expensive investment, and
if it is used productively its value rises much higher than its
capital cost. On the other hand, personal computers today are a
motley patchwork of code from dozens of sources, with essentially no
``sense of self'' (for an exception, see, e.g., Forrest et al., 1996), and only the most rudimentary
and shallow of defenses.
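
The ``sense of self'' cited above can be illustrated with a minimal sketch in the spirit of Forrest et al. (1996): record the short system-call subsequences a program emits during normal operation, then flag any later trace containing windows never seen before. The call names, traces, and window size below are illustrative assumptions, not the paper's actual data.

```python
# A minimal sketch of a behavioral "sense of self": a database of
# length-`window` system-call subsequences observed during normal runs.
# Traces whose windows all appear in the database look like "self";
# unseen windows are counted as anomalies.

def build_self(traces, window=3):
    """Collect every length-`window` slice seen in normal traces."""
    normal = set()
    for trace in traces:
        for i in range(len(trace) - window + 1):
            normal.add(tuple(trace[i:i + window]))
    return normal

def anomalies(trace, normal, window=3):
    """Count windows in `trace` the self-database has never seen."""
    return sum(
        tuple(trace[i:i + window]) not in normal
        for i in range(len(trace) - window + 1)
    )

# Hypothetical traces: two "normal" runs of a program, then a familiar
# run and a deviant one (e.g., after an injected payload).
normal_runs = [["open", "read", "write", "close"],
               ["open", "read", "read", "write", "close"]]
self_db = build_self(normal_runs)
print(anomalies(["open", "read", "write", "close"], self_db))   # 0: self
print(anomalies(["open", "exec", "socket", "write"], self_db))  # 2: foreign
```

Even this toy version captures the key asymmetry: the defense needs no catalog of attacks, only a compact model of the system's own ordinary behavior.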
In the long run, we can be quite confident that this embarrassing
combination of traits will not persist---in the language of
evolutionary biology, it is not an ``evolutionarily stable
strategy'' (Maynard Smith, 1982). The essentially
immediate explosion of computer viruses following the rise of personal
computers is testament to this. Viewed in evolutionary terms, it
makes no more sense to rail at the immorality of the virus writers
that exploited the situation than it does to blame the system
developers that created it. In both cases the protagonists were
simply exploiting an opportunity that existed in the environment of
the time. Personal computer manufacturers could cut costs by
eliminating even the most basic immunologic mechanisms such as
protected kernel mode and rationalize the decision by observing that
computer viruses were not currently much of a problem. At the time,
it was plausible to claim that these were ``personal'' computers that
would not be networked like more expensive systems. On the flip side,
virus writers and other attackers immediately exploited the results,
rationalizing their behavior, where they felt the need to, by
observing that they were just revealing obvious design problems that
could easily be exploited for gain, and which the cheapskate
manufacturers were papering over rather than fixing.
Today, the appellation ``personal computer'' is in important ways a
misnomer. Although individually owned, the personal computer does not
``know'' its owner in any significant way, and that's just as well
because it is fundamentally unable to distinguish between what is
``inside itself'' and to be trusted with sensitive information (merely
beginning with passwords) and what is not. If future personal
computers do not make a strong and credible case for loyalty
to their owners, all the graphical interface and ease-of-use
improvements in the world will not get people to use them for serious
work. On the other hand, if a system demonstrates that it is manifestly
watching out for its owner's best interests, first, last, and always,
from hardware to software to data and communications, people would be
willing to clap rocks together in Morse code to interact with it.
References
Ackley, D. H. (1996, to appear). ccr: A network of worlds for
research. In Artificial Life V. MIT Press.

Forrest, S., Hofmeyr, S. A., Somayaji, A., and Longstaff, T. A.
(1996, in press). A sense of self for Unix processes. In the 1996
IEEE Symposium on Security and Privacy.

Maynard Smith, J. (1982). Evolution and the Theory of Games.
Cambridge: Cambridge University Press.