Towards the Development of an Einstein/Turing Machine:
I. The Turing Machine:
The Turing Machine is a fundamental conceptual model for the
Computational Process. In Principle, it is capable of Simulating any
process that can be described in terms of a Computational Model.
Basically, the Turing Machine comprises two main components:
- A Memory (environment) which is processed
- A (three part) Process which modifies the contents of the memory/environment:
- Read Rule (determining the position of the "Read" Pointer)
- Substitution Rule (determining the nature of the modification)
- Write Rule (determining the position of the "Write" Pointer)
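The three-part process above can be sketched as a (minimal, purely illustrative) program. The rule table, tape contents and halting condition are hypothetical choices made for illustration, and the Read and Write pointers are here collapsed into one:

```python
# A minimal sketch of the Read / Substitute / Write cycle.
def run_turing_machine(tape, rules, state="start", pos=0, max_steps=100):
    """Repeatedly Read, Substitute and Write until a halt state is reached."""
    tape = dict(enumerate(tape))           # the Memory (environment)
    for _ in range(max_steps):
        if state == "halt":                # termination condition
            break
        symbol = tape.get(pos, "_")        # Read Rule: inspect the current cell
        state, new_symbol, move = rules[(state, symbol)]
        tape[pos] = new_symbol             # Substitution Rule: modify the memory
        pos += move                        # reposition the (shared) pointer
    return "".join(tape[i] for i in sorted(tape))

# Hypothetical example rules: walk right, replacing '0' with '1',
# and halt on reaching a blank ('_') cell.
rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "_"): ("halt",  "_",  0),
}
print(run_turing_machine("000_", rules))   # -> 111_
```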
In principle, the "Process" continues until it is terminated (for
some external or internal reason). Originally, the Turing machine was
seen as a model for the process of Problem Solving -and so the
"Computability Problem" (deciding if a specific "Problem" could be
solved within a finite number of steps) was an important issue. Later,
as Computational Models were increasingly applied to non-terminal
processes (such as controlling a continuously operating electricity
generating plant) the Computability Problem became a more specialised
concern.
II. Understanding the Turing Machine:
1.0 Simulation and Understanding:
A. Including the Infinite:
Clearly, there is an element of
tautology in the description of the Turing Machine as a "Universal
Simulation Machine" -simply because it is assumed to be capable of
"Simulating" any processes that can be "Described" in terms of a Turing
Machine (which is assumed to be a synonym for "Computational Model").
To a certain extent, this tautology could be resolved through the
"Computability Problem" -which eliminated processes that could not be
terminated (because the "problem" was never solved). However, the price
for this was the (conceptual) elimination of all non-terminal processes.
So are there other (more suitable) criteria which can specify the limits (and value) of the simulation?
B. Simulation and Marketing:
Postmodernism has apparently
fallen in love with "Simulacra" -which (conforming with its consumerist
bias) it loves to equate with that which is being simulated.
Apparently, postmodernists have never really understood that although
one might gain nourishment from eating an apple -one cannot remain
healthy on a diet of "Virtual Reality" apples.
Consumerism requires that "Simulacra" can be marketed as if they were
real -and postmodernism seems to be the philosophy which is principally
designed to market this somewhat dangerous concept. Within this
perspective, objects are alienated from their functional context -and
have no function other than to be marketed as (basically
meaningless) status symbols intended to improve the (presumably low)
self-esteem of the purchaser.
C. Simulation and Description:
Outside the marketing
perspective, Simulacra are less useful as objects in themselves -but
gain their meaning through a (usually pedagogical) context.
Within the context of the
Turing machine one can perhaps see that (with the Computability Problem
eliminated) the question of "Describability" becomes the key factor
determining if a process can be simulated or not.
D. The Value of Simulations (Models)
i. Clarifying the Concept:
One can perhaps argue that the value of a Computational Model
lies less in its practical application than in the insights gained in
its creation -simply because of the need to clarify these concepts in
terms of an explicit (and testable) description.
ii. Experiencing the External:
In many cases it is
difficult to imagine situations from a perspective other than our own.
Traditionally "Art" has been a way of simulating (through visual
imagery, play acting and storytelling) the human condition from various
viewpoints which are not usually available to us.
iii. Experiencing the Internal (Self Reflection):
As the Bible remarks -it is often easier to see the splinter in
someone else's eye than it is to see the plank in one's own. Indeed,
in many cases our own (internal) functioning is at least as mysterious
(and sometimes even more so) to ourselves as it is to others. Sometimes we
need a form of externalising "mirror" in order to see ourselves more
clearly.
iv. Simulating the Impossible:
Some forms of knowledge involve situations that are difficult or
even impossible for us to experience directly. These situations may be
too big or too small, too distant, too expensive or too dangerous.
However, they can still be studied if they can be simulated in some
form or other (not necessarily in hi-fi "sensory" terms). In many
cases, a mathematical formula, Classical Greek drama -or Indonesian
shadow puppets can simulate a situation just as effectively as modern
cinema or expensive digital "virtual reality" systems can.
2.0 Einstein and Turing:
In the "Newtonian" view of space, there is no (theoretical) interaction
between the (environmental) space and the objects that "move" through
it ("Friction" can be seen as the influence of the "environment" on the
moving object -but this is not a "reciprocal" action which involves
bi-directional interaction between the object and its environment).
In the Einsteinian view of space, objects moving through space are not
only influenced by the environmental space through which they move -in
turn, they also modify it.
Clearly, the Turing machine modifies the environmental memory space
through which the "Read" and "Write" pointers move. The main difference
between a Turing Machine and an Einsteinian Time/Space Machine would
therefore appear to be concerned with the number of Dimensions of space
involved. A Turing machine is generally assumed to be operating in a
one-dimensional space -while the dimensions of Time/Space are
unspecified (but presumably generally assumed to be larger than one).
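One concrete (and well-known) example of such a machine operating in a two-dimensional space is "Langton's Ant" -in which the moving pointer both reads the space and rewrites it, so that the space shapes the path and the path reshapes the space. A minimal sketch (the grid and turning conventions follow the standard formulation):

```python
# Langton's Ant: a two-dimensional Turing machine in which the pointer
# and the environmental space modify each other on every step.
def langtons_ant(steps):
    black = set()                  # cells currently flipped to "black"
    x, y = 0, 0                    # position of the "ant" (the pointer)
    dx, dy = 0, 1                  # heading (initially facing "up")
    for _ in range(steps):
        if (x, y) in black:        # Read: the cell is black
            black.discard((x, y))  # Write: flip it back to white
            dx, dy = -dy, dx       # turn left (counter-clockwise)
        else:                      # Read: the cell is white
            black.add((x, y))      # Write: flip it to black
            dx, dy = dy, -dx       # turn right (clockwise)
        x, y = x + dx, y + dy      # move one cell forward
    return black

print(len(langtons_ant(100)))      # the space after 100 interactions
```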
3.0 Organic and Inorganic
A. The Inorganic Machine:
There seems to be a common tendency to consider "Einsteinian
Time/Space" as something extraordinary which does not exist in the more
mundane universe of daily (earth-bound) experience.
However, if we consider the difference between "Newtonian" and
"Einsteinian" space to be based on the absence or presence of
interaction between the objects involved and the environmental space
through which they travel -then it becomes easier to consider Newtonian
space as being "Mechanical" and Einsteinian/Turing space as being
"Organic".
Initially, this seems counter-intuitive -because our experience of
artificiality (in particular as manifest by the machines produced in
the industrial revolution) has, up until now, always been limited to
experience with Newtonian machines -and so we assume that the concept
"Machine" is always synonymous with the concept "Inorganic".
On the other hand, we do seem to associate "organic" intuitively with
"Interaction" (or feedback): the interaction with the environment
exhibited by, for example, an (inorganic) fence seems to be limited to
decay -yet all the organisms living in the area of the fence will
respond to it in a variety of ways. Contrary to the behaviour of the
fence, a row of (organic) trees will be in constant interaction with
the environment (both influencing it and being influenced by it).
B. Homogeneity and Diversity:
The second law of thermodynamics claims that in a system
exhibiting different energy levels, the energy will gradually dissipate
throughout the system until it becomes homogeneous. This is known as
"Entropy".
"Decay" is a natural condition of all inorganic systems. But not all
systems are "inorganic" -so we should be careful in applying the
laws of physics (which essentially deal with inorganic systems) to
"organic" systems.
In practice, it seems that "organic" systems have an anti-entropic
tendency -which involves taking (diffuse) energy and dissipated
nutrients and converting them into differentiated organisms. It also
seems that "organic" systems are not stable -that in general they
exhibit both an "organic" (living, anti-entropic) phase and an
"inorganic" (dead, entropic) phase.
C. Chaos Theory:
Information Theory also uses the concept of entropy -and defines
it (contrary to physics) as the degree of "uncertainty" resolved by
receiving the message. So in Information Theory -"diversity"
(unpredictability) increases entropy while in physics "diversity"
(differentiation) reduces it.
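The information-theoretic sense of "entropy" can be sketched as follows -the more uncertain (diverse, unpredictable) the next symbol of a message, the higher the entropy of its source. The messages below are hypothetical examples:

```python
# Shannon entropy: average uncertainty, in bits per symbol, of a message.
from math import log2
from collections import Counter

def shannon_entropy(message):
    """H = sum over symbols of p * log2(1/p), estimated from symbol counts."""
    counts = Counter(message)
    total = len(message)
    return sum((n / total) * log2(total / n) for n in counts.values())

print(shannon_entropy("aaaa"))   # a fully predictable source: 0.0 bits
print(shannon_entropy("abab"))   # two equally likely symbols: 1.0 bit
print(shannon_entropy("abcd"))   # four equally likely symbols: 2.0 bits
```

Note how "diversity" (more distinct, equally likely symbols) increases this entropy -exactly the opposite of the physical usage described above.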
In practice, it is generally the case that the longer a (repetitive)
"mechanical" system operates -the less uncertainty there is as to which
transformation will come next. The parallel with physics seems to hold
good. However, the behaviour of a living organism is generally less
predictable -so perhaps different rules apply to "organic" systems here
too.
Chaos Theory claims that some systems do not degenerate into simple
repetition -but continue to generate "uncertainty" as to their next
state. Is "Chaos Theory" a parallel "Information Theory" for
"organic" (Einsteinian Time/Space) systems?
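The standard illustration of such a system is the "logistic map" -a completely deterministic rule which (for suitable parameter values) never settles into simple repetition. A minimal sketch (the starting values below are arbitrary choices):

```python
# The logistic map: a deterministic rule that keeps generating
# "uncertainty" as to its next state.
def logistic_orbit(x, r=4.0, steps=10):
    """Iterate x -> r * x * (1 - x) and collect the successive states."""
    orbit = []
    for _ in range(steps):
        x = r * x * (1 - x)       # a simple, deterministic, non-linear rule
        orbit.append(round(x, 4))
    return orbit

# At r = 4.0 the orbit does not settle into simple repetition, and
# nearby starting points gradually drift apart (sensitive dependence).
print(logistic_orbit(0.200000))
print(logistic_orbit(0.200001))
```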
D. Decay and Abundance:
The distinction between
"organic" and "inorganic" might also have implications for economics.
It seems that politicians generally refer to economic systems as
if they were "inorganic" systems (subject to decay) -as if income was a
static "pie" that can only be divided and distributed once. The
"economics" of scarcity often operates as the "stick" used to "beat"
the public into cooperating in a system of economic exploitation.
However, in practice, it appears that (as the economist Keynes
pointed out) organic systems can increase (as well as decrease) and so
the income is not fixed but dependent on how it is re-invested in the
system. If one has a couple of cockroaches hiding under one's fridge
-then one soon discovers that sometimes the problem with organic systems
is not their tendency to decay and become scarce -but their tendency to
reproduce and multiply themselves into abundance. Perhaps, in an
economy based on "organic" systems -an economy of abundance would be
more appropriate.
4.0 Linear and Non-Linear Systems:
In Euclidian/Newtonian space the
characteristics of the space involved do not change over time -whereas
in time/space the state of the space itself changes over time and this
in turn can affect the way objects behave as they pass through the
space. The time/space becomes both modified and modifier.
If the main difference between Euclidian/Newtonian space and
Einsteinian/Turing space involves an interactive feedback between the
space and the process or object that "navigates" that space -then a
change in the conceptual nature of space must have some implications
for our traditional (mechanistic) view of cause and effect which
generally sees a rigid conceptual distinction between "that which is
moved" and "that which does the moving".
If cause and effect become intertwined (and constantly fed back into
each other) then the outcome is not so easily predicted. Just as with an
interaction between observer and observed -an interaction between cause
and effect does not deny the importance of the principle of "causality"
-but it does make its outcome less obvious.
A simple "linear" extrapolation is not so reliable in time/space as it
is in Euclidean space -although even there, there may be an infinite
number of curves that connect any set of specified points.
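This underdetermination can be illustrated with two (hypothetical) functions which agree on a given set of points yet extrapolate quite differently:

```python
# Two curves through the same points: extrapolation is not unique.
def straight_line(x):
    return x

def wiggly_cubic(x):
    return x + x * (x - 1) * (x - 2)   # the extra term vanishes at 0, 1, 2

for x in (0, 1, 2):
    assert straight_line(x) == wiggly_cubic(x)   # identical on the data

print(straight_line(3), wiggly_cubic(3))   # 3 vs 9: divergent predictions
```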
5.0 Computational Space:
(Multi-Dimensional) Space Thinking:
(The periodic Table of Elements)
The "computability" problem perhaps lies at the heart of the difference
between "artistic" and "scientific" problem solving..... because
"scientific" questions are supposed to be decidable in terms of "true"
or "false" -while artistic problems are generally more ambiguous as to
both their interpretation and their truth value -and may
not be interested at all in the question of "proof"....
6.0 Mapping Spaces:
A separation of the "read" and "write" procedures allows the consideration of "mapping"
between different spaces by "reading" in one (conceptual) space and "writing" in another.
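A minimal sketch of such a mapping -reading each cell in one space and writing a transformed value into another (the spaces and the mapping rule below are hypothetical examples):

```python
# Separated Read and Write pointers: read in space A, write in space B.
def map_spaces(source, rule):
    """Read each cell of one space, write a transformed value into another."""
    target = []                      # the second (conceptual) space
    for cell in source:              # the Read pointer moves through space A
        target.append(rule(cell))    # the Write pointer moves through space B
    return target

# Example: mapping from a numeric space into a symbolic one.
print(map_spaces([1, 2, 3], rule=lambda n: "*" * n))   # -> ['*', '**', '***']
```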
7.0 Conscious Intelligence:
Presumably, the process of mapping
between different conceptual spaces and the use of "computational
models" for prediction and control brings us into the realm of
intelligence.
If "Consciousness" is a (perhaps primitive) form of feedback based
"self-awareness" which is linked to "intelligence" -then the time/space
machine may prove to be a useful model for exploring consciousness and
intelligence.
Indeed, practical experience with the human "thinking" process suggests
that perhaps the human mind operates more in terms of complex
interactions between various (emotional and intellectual)
"force-fields" -and less on the basis of "rational (binary) logic" than
our (Classical Greek) cultural heritage seems to assume.
III. Constructing the Turing Machine (Defining the Space):
Basic Implementation Process:
-Space Definition (Creative Process -Epistemology -Taxonomy)
-Space Processing (Transformational Process -Projection, Calculus and Prediction -Rules and Patterns)
-Space Mapping (Interpretation Process -Converting data to Information -Ontology)
1.0 Space Definition(s):
Defining the Universe (Taxonomy):
(number of dimensions/components)
(domain/viewpoint)
2.0 Space Processing:
Transformations within Space
3.0 Space Mapping:
i. Defining Domains
- Contexts & Contents
- Self & Others
- Organism & Environment
- Inside & Outside
- Figure and Background
- You and Me
- Perception
- Cognition
- Active, Passive
- past, present, future
- natural, artificial
- known, unknown
- real, imaginary
- physical, mental
ii. Transformations between Spaces
-Changing Dimensions
-Changing Metric and Scale
-etc...
iii. The Interpretation Process (Transformations between Domains)
- Data
- Information
- knowledge
- Understanding
- Wisdom
("Spirituality", "Consciousness" and "Intelligence" -including the
relationships between them)
IV. Implementing the Turing Machine (Articulating Space)
Different Models of Articulation and Application:
The Sonological Process:
-Data Acquisition (Problem Definition)
-Data Reduction (Theory Forming)
-Theory Testing (Mapping into "subjective" Space) (interpretation,
testing, creative feedback)
Other (Potential) Parameter Systems:
- Ends (Dreams) and Means (Realisations)
- Pragmatic Striving for:
-the Existing
-the Potential
-the Desirable
-the Undesirable (for others)
- Media, Methods, Meanings
- Tool, Medium, Metaphor
- Alphabet, Grammar, Interpretation
- Stomach, Mind, Language
- Space, Articulation, Mapping
- Actors, Goals, Environments
- Aims, Strategies, Logistics
Towards a Generalised Model:
- Specifications (Taxonomy of Alphabets, Topology of Spaces and sub-Spaces, Connectivity of Concepts, etc.)
- Articulations (Process, Grammar, Calculation and Divination, etc.)
- Interactions (Control systems, Interpretational systems, Ecological and Social systems, etc.)
- Equilibria (Aesthetics, Entropy, Sustainability, etc.)
Trevor Batten
<trevor at tebatt.net>
July 2003
September 2005/
Feb/April 2006