Open World Artificial Life Research
ccr breaks with most artificial life systems in a major way by
presuming an open world, both in that humans are expected to
interact with the system while it is running, and in that the
underlying physical implementation---the computers and communications
links---is expected to change dynamically. Three consequences of this
approach are: Research strategies change because global repeatability
is sacrificed, humans must be enticed to use the system, and the
tradeoffs between security, efficiency, and power must be addressed,
not only early and fundamentally, but continually and at many levels.
Repeatability is unscalable
A great virtue of closed-world alife models is that (at least in
principle) every detail is determined by the researcher, so
every observation can be reproduced, and utterly controlled variations
can be tested. The price is that comparatively small worlds must be
studied. With ingenuity, much has been and remains to be learned from
such worldlets, and the possible richness of high-isolation worlds
grows with improvements in affordable individual computers, but of
course single-owner systems cannot compete with large collaborative
networks of systems. The irony is that as computer systems become
more like living systems---more complex and articulated, more robust
and interconnected---they become less suited to closed-world
artificial life research. Ray's NetTierra proposal
offers one compromise position, in which repeatability is explicitly
sacrificed but isolation is mostly preserved.
Humans are evolutionary forces
ccr approaches the dilemma another way, trading the isolated
clarity of lab work for the symbiotic relevance of field work.
ccr proposes to be a strictly non-proprietary
experimental platform, controlled by and for its research users,
providing a venue within which software for computation and
communication tasks---``agents,'' ``brokers,'' ``robots,'' etc.---can
be created, copied, hybridized, allowed to cooperate and compete for
``market share'' and perhaps win through into the base ccr
standards and protocols.
As in typical alife systems, the overall ``fitness functions'' are
supplied by humans; on the other hand, in ccr humans are also a
source of novelty and change, conflicting with the Darwinian principle
of blind variation. Given the nature of complex adaptive systems (Holland, 1995), and the gap between the manifest
intentionality of individual human actions and the
unintentional effects that often ensue, it is an empirical question
just how un-Darwinian the evolution of a substantial ccr
universe would actually be.
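The distinction between blind and intentional variation can be made concrete with a toy selection loop (purely illustrative; the genomes, fitness function, and operator names below are hypothetical stand-ins, not part of ccr):

```python
import random

def fitness(genome):
    # Toy fitness: count of 1-bits. In ccr, "fitness" would instead
    # emerge from human usage and market share.
    return sum(genome)

def blind_mutation(genome):
    # Darwinian variation: flip a random bit, uncorrelated with fitness.
    g = list(genome)
    i = random.randrange(len(g))
    g[i] ^= 1
    return g

def human_edit(genome):
    # Stand-in for intentional human modification: deliberately set the
    # first 0 bit, i.e., variation that is NOT blind.
    g = list(genome)
    g[g.index(0) if 0 in g else 0] = 1
    return g

def evolve(pop, steps, editor, seed=0):
    random.seed(seed)
    for _ in range(steps):
        parent = max(pop, key=fitness)                        # selection
        child = editor(parent)                                # variation
        worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
        if fitness(child) >= fitness(pop[worst]):
            pop[worst] = child                                # replacement
    return max(fitness(g) for g in pop)

pop = [[0] * 8 for _ in range(4)]
print(evolve([list(g) for g in pop], 50, human_edit))    # reaches 8
print(evolve([list(g) for g in pop], 50, blind_mutation))
```

The empirical question raised above is, in these terms, where a real ccr population would fall between the two editors: human changes are intentional locally, but their population-level effects may still look blind.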
Security is biology
Communication entails risk (Ackley & Littman,
1994): A message sender necessarily reveals information in the
act, and a message receiver is necessarily impacted in some way by the
act, or else no communication has occurred. The dual tasks---of
revealing some information while hiding some, and of allowing selected
influences while rejecting others---are fundamental to the structure
and function of living systems, from cell walls to immune systems,
crucial to the very notions of self and independent existence.
A price exacted by the oft-touted mobility of a ``software agent'' is
that it doesn't control the physical hardware that embodies it, so to
protect its integrity it must either hide from or ally with the
machine owner. Computer viruses take the former route; ccr
takes the latter, committing to reveal to the owner the tradeoffs
between safety and power as obviously and intuitively as possible, and
placing its source code on the table as bona fides. In turn, in
a general release of ccr, the hardware owners would commit to
playing within the system (which includes researching attacks
to devise defenses; in that spirit ccr 0.1 provides an
``award'' system to honor and memorialize the publicizers and fixers
of holes), and would place their digital signatures on the table
co-signed by existing ccr users (the bottom-up ``web of trust''
approach) and/or an appropriate external certification
authority. Various flavors of anonymity can be
created within the system, but only built upon a base of identified
owners, shifting risk from loss of integrity to loss of anonymity. It
is an open question whether special-purpose protected hardware could
in principle be ``owned'' by the system itself, which could allow the
construction of robust ``public spaces'' with known and reliable rules
of behavior, inside of independently-owned computers---and if
so, under what circumstances wise owners would choose to incorporate
such hardware.
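The bottom-up ``web of trust'' idea can be sketched in a few lines (illustrative only: the names are hypothetical, and a real deployment would verify actual digital signatures rather than the bare endorsement records used here). A new owner's key is accepted when it is co-signed, directly or transitively, by some already-trusted key:

```python
def is_trusted(key, endorsements, roots):
    # endorsements: dict mapping a key to the set of keys that have
    # co-signed it. A key is trusted if it is a root, or is reachable
    # from a root by following chains of endorsements.
    seen = set()
    frontier = [key]
    while frontier:
        k = frontier.pop()
        if k in roots:
            return True
        if k in seen:
            continue
        seen.add(k)
        frontier.extend(endorsements.get(k, ()))
    return False

# Hypothetical example: carol is co-signed by bob, who is co-signed
# by alice, an existing user we already trust.
endorsements = {"carol": {"bob"}, "bob": {"alice"}}
print(is_trusted("carol", endorsements, roots={"alice"}))    # True
print(is_trusted("mallory", endorsements, roots={"alice"}))  # False
```

Note that trust here is only as strong as the weakest endorsement chain, which is why the text also allows anchoring in an external certification authority.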
While technologies such as digital signatures are
necessary, security is much more than a purely technical issue, and so
the purely technical power of the system must have corresponding
limitations. In ccr, as in most high-isolation alife
systems (e.g., Ray, 1991; Dewdney, 1984) as well as emerging commercial
systems such as Java, the basic approach to limiting
communication risk via limiting power is to control the semantics of
the language in which communications are expressed. Though
Java significantly improves security from the ``language on down'', as
a general-purpose programming language, its approach to trust is at
the level of the program and is largely boolean---you either grant a
disturbing amount of power to an incoming Java `applet' or you don't
run it at all. As a research system, ccr sacrifices some speed
and generality to gain fine-grained access control at multiple points
including each function invocation, directly supporting intuitive and
ccr-specific degrees and modes of trust and risk.
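The contrast between program-level and invocation-level trust can be sketched as follows (a minimal illustration, not ccr's actual interfaces; the class, primitive, and policy names are invented for the example). Instead of an all-or-nothing grant, every call from untrusted code is routed through an owner-supplied policy check:

```python
class AccessDenied(Exception):
    pass

class Guarded:
    """Wraps a set of primitives so that each invocation is
    policy-checked individually, rather than granting or denying
    power to the whole program at once."""
    def __init__(self, primitives, policy):
        self._primitives = primitives
        self._policy = policy  # callable: (name, args) -> bool

    def call(self, name, *args):
        if not self._policy(name, args):
            raise AccessDenied(name)
        return self._primitives[name](*args)

# Hypothetical primitives an incoming agent might request.
primitives = {
    "add": lambda a, b: a + b,
    "delete_file": lambda path: f"deleted {path}",
}

# Owner policy: allow pure computation, refuse file deletion.
world = Guarded(primitives, lambda name, args: name != "delete_file")

print(world.call("add", 2, 3))         # allowed: prints 5
try:
    world.call("delete_file", "/tmp/x")
except AccessDenied as e:
    print("denied:", e)                # refused per-invocation
```

The per-call check is exactly the speed/generality trade mentioned above: each invocation pays a policy test, in exchange for degrees of trust finer than Java's run-it-or-don't choice.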
Ackley, D.H., & Littman, M.L. (1994)
- Altruism in the evolution
of communication. In Artificial Life IV: Proceedings of the
Fourth International Workshop on the Synthesis and Simulation of
Living Systems. Edited by R. A. Brooks & P. Maes. A Bradford
Book, The MIT Press: Cambridge, MA.
Ackley, D. H. (1996, to appear).
- ccr: A Network of Worlds for
Research. In Artificial Life V, MIT Press.
Dewdney, A.K. (1984, May)
- In the
game called Core War hostile programs engage in the battle of
bits. Scientific American.
Gosling, J. and McGilton, H. (1995, May).
- The Java Language Environment:
A White Paper. Sun Microsystems Computer Company.
Holland, J. H. (1995)
- Hidden Order: How Adaptation Builds
Complexity. Addison-Wesley: Reading, MA.
Ray, T. S. (1995)
- A proposal to create a network-wide biodiversity
reserve for digital organisms. ATR Technical Report
TR-H-133. Was available online
but now (July 4, 1996) seems to have been withdrawn.
Ray, T. S. (1991)
- An approach to the synthesis of life. In
Artificial Life II, SFI Studies in the Sciences of Complexity, vol.
X, edited by C. G. Langton, C. Taylor, J. D. Farmer, & S. Rasmussen.
Addison-Wesley: Redwood City, CA.