Lecture 6 Quotes

Representation, Universality and Causality

Turing's theorem and Goedel's theorem combine into a funny picture. Although we normally think that a symbol is related to its referent by a kind of causal connection, sufficiently complex computations enable a kind of self-reference that makes causality in an abstract, higher-level system fundamentally different from causality in the lower-level system. This intuition is important to cultivate in thinking about the interplay of biology, psychology and phenomenology.

Changing the rules

A commonplace but counterintuitive picture that we want to think clearly about:

When we humans think, we certainly do change our own mental rules, and we change the rules that change the rules, and on and on - but these are, so to speak, "software rules". However, the rules at bottom do not change. Neurons run in the same simple way the whole time. You can't "think" your neurons into running some nonneural way, although you can make your mind change style or subject of thought.

Getting to the bottom of it:

If it were possible to schematize this whole image, there would be a gigantic forest of symbols linked to each other by tangly lines like vines in a tropical jungle - this would be the top level, the Tangled Hierarchy where the thoughts really flow back and forth. This is the elusive level of mind: the analogue to LH and RH [the left and right hands that draw each other in Escher's famous lithograph]. Far below in the schematic picture, analogous to the invisible "prime mover" Escher, there would be a representation of the myriad neurons - the "inviolate substrate" which lets the tangle above it come into being. Interestingly, this other level is itself a tangle in a literal sense - billions of cells and hundreds of billions of axons, joining them all together....

Getting back to the symbol tangle, if we look only at it, and forget the neural tangle, then we seem to see a self-programmed object - in just the same way as we seem to see a self-drawn picture if we look at Drawing Hands and somehow fall for the illusion, by forgetting the existence of Escher. For the picture, this is unlikely - but for humans and the way they look at their minds, this is usually what happens. We feel self-programmed… Our thoughts seem to run about in their own space, creating new thoughts and modifying old ones…

An analogous double-entendre can happen with LISP programs that are designed to reach in and change their own structure. If you look at them on the LISP level, you will say that they change themselves; but if you shift levels, and think of LISP programs as data to the LISP interpreter…, then in fact the sole program that is running is the interpreter, and the changes being made are merely changes in pieces of data.
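
To make the level-shift concrete, here is a toy sketch of the same double-entendre - my own construction in Python, not Hofstadter's LISP example. Viewed from its own level, the program rewrites itself; viewed from the interpreter's level, the interpreter is simply mutating a list it holds as data.

    def interpret(program):
        """The only code that actually runs; `program` is merely a list of data."""
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "print":
                print(args[0])
            elif op == "rewrite":
                # From the program's level: "the program changes itself".
                # From this level: the interpreter changes a piece of its data.
                index, new_instruction = args
                program[index] = new_instruction
            pc += 1

    interpret([
        ("print", "before"),
        ("rewrite", 3, ("print", "rewritten!")),  # overwrite a later instruction
        ("print", "middle"),
        ("print", "this never runs"),             # replaced before it executes
    ])
    # prints: before / middle / rewritten!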

Limitology

Turing's theorem (like Goedel's) says that there are precisely posed problems - whether an arbitrary program halts, for instance - for which it is in principle impossible to give a general solution procedure. This has consequences for our view of ourselves, or for cognitive science, or for both.
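
A minimal sketch of the diagonal argument behind Turing's theorem - a hypothetical construction, since the whole point is that the decider `halts` cannot exist:

    def halts(program, data):
        """Hypothetically decides whether program(data) halts.
        Turing's theorem: no correct implementation is possible."""
        raise NotImplementedError

    def diagonal(program):
        # Do the opposite of whatever `halts` predicts about running
        # `program` on its own source.
        if halts(program, program):
            while True:   # predicted to halt -> loop forever
                pass
        # predicted to loop forever -> halt immediately

    # If `halts` were correct, diagonal(diagonal) would halt exactly
    # when it does not halt - a contradiction, so no such `halts` exists.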

I think that the process of coming to understand Goedel's proof, with its construction involving arbitrary codes, complex isomorphisms, high and low levels of interpretation, and the capacity for self-mirroring, may inject some rich undercurrents and flavors into one's set of images about symbols and symbol-processing, which may deepen one's intuition for the relationship between mental structures on different levels.

In one sense,

To seek self-knowledge is to embark on a journey which will always be incomplete, cannot be charted on any map, will never halt, cannot be described.

But this is more of a personal meaning than a scientific one:

I see no Goedelian obstacle in the way of the eventual understanding of our minds. For instance, it seems to me quite reasonable to desire to understand the working principles of brains in general, much the same way as we understand the working principles of car engines in general. It is quite different from trying to understand any single brain in every last detail - let alone trying to do this for one's own brain! I don't see how Goedel's Theorem, even if construed in the sloppiest way, has anything to say about the feasibility of this prospect.

And it's a personal meaning the robots will share:

Artificial Intelligence, when it reaches the level of human intelligence - or even if it surpasses it - will still be plagued by the problems of art, beauty, and simplicity, and will run up against these things constantly in its own search for knowledge and understanding.

Visualizing the relationship between low level and high level in the brain

Perception and phenomenology:

There is a famous breach between two languages of discourse: the subjective language and the objective language. For instance, the "subjective" sensation of redness, and the "objective" wavelength of red light. To many people, these seem to be forever irreconcilable. I don't think so. No more than the two views of Escher's Drawing Hands are irreconcilable - from "in the system", where the hands draw each other, and from outside, where Escher draws it all. The subjective feeling of redness comes from the vortex of self-perception in the brain; the objective wavelength is how you see things when you step back, outside of the system. Though no one of us will ever be able to step back far enough to see the "big picture", we shouldn't forget that it exists. We should remember that physical law is what makes it all happen - way, way down in the neural nooks and crannies which are too remote for us to reach with our high-level introspective probes.

Action and phenomenology:

So now we make a modification in our [decision-making] robot: we allow its symbols - including its self-symbol - to affect the decision that is taken…

Now if some outside agent suggests 'L' as the next choice to the robot, the suggestion will be picked up and channeled into the swirling mass of interacting symbols. There, it will be sucked inexorably into interaction with the self-symbol, like a rowboat being pulled into a whirlpool. That is the vortex of the system, where all levels cross. Here, the 'L' encounters a Tangled Hierarchy of symbols and is passed up and down the levels. The self-symbol is incapable of monitoring all its internal processes, and so when the actual decision emerges - 'L' or 'R' or something outside the system - the system will not be able to say where it came from. Unlike a standard chess program, which does not monitor itself and consequently has no ideas about where its moves come from, this program does monitor itself and does have ideas about its ideas - but it cannot monitor its own processes in complete detail, and therefore has a sort of intuitive sense of its workings, without full understanding. From this balance between self-knowledge and self-ignorance comes the feeling of free will.

Some theoretical perspective:

[This] implies that a reductionistic explanation of a mind, in order to be comprehensible, must bring in "soft" concepts such as levels, mappings, and meanings… This act of translation from low-level physical hardware to high-level psychological software is analogous to the translation of number-theoretical statements into metamathematical statements. Recall that the level-crossing which takes place at this exact translation point is what creates Goedel's incompleteness and the self-proving character of Henkin's sentence. I postulate that a similar level-crossing is what creates our nearly unanalyzable feelings of self.
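
The "translation point" in this quote can be made concrete with a toy Goedel numbering - my own illustrative encoding, not the one in the book. Once formulas are numbers, statements about the formal system become statements about numbers, and the levels can cross:

    SYMBOLS = "0S+*=()<>~&|EAabc"   # a toy alphabet of formal symbols

    def godel_number(formula: str) -> int:
        # Encode a formula as an integer: one base-(len+1) digit per symbol.
        base = len(SYMBOLS) + 1
        n = 0
        for ch in formula:
            n = n * base + SYMBOLS.index(ch) + 1
        return n

    def decode(n: int) -> str:
        base = len(SYMBOLS) + 1
        chars = []
        while n:
            n, digit = divmod(n, base)
            chars.append(SYMBOLS[digit - 1])
        return "".join(reversed(chars))

    code = godel_number("S0+S0=SS0")   # "1+1=2" in successor notation
    assert decode(code) == "S0+S0=SS0"
    # Metamathematical claims ("this string is a theorem") are now
    # number-theoretical claims about `code` - the level-crossing above.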

In order to deal with the full richness of the brain/mind system, we will have to be able to slip between levels comfortably. Moreover, we will have to admit various types of "causality": ways in which an event at one level of description can "cause" events at other levels to happen. Sometimes event A will be said to "cause" event B simply for the reason that the one is a translation, on another level of description, of the other. Sometimes "cause" will have its usual meaning: physical causality. Both types of causality - and perhaps some more - will have to be admitted in any explanation of mind, for we will have to admit causes that propagate both upwards and downwards in the Tangled Hierarchy of mentality…

Human uniqueness

What's wrong with the argument that there is a regress in the idea of machines reasoning by applying rules?

It is obviously the assumption that a machine cannot do anything without having a rule telling it to do so. In fact, machines get around the Tortoise's silly objections as easily as people do, and moreover for exactly the same reason: both machines and people are made of hardware that runs all by itself, according to the laws of physics. There is no need to rely on "rules that permit you to apply the rules", because the lowest-level rules - those without any "meta"'s in front - are embedded in the hardware, and they run without permission.

On to Samuel's argument. Samuel's point, if I may caricature it, is this: No computer ever "wants" to do anything, because it was programmed by someone else. Only if it could program itself from zero on up - an absurdity - would it have its own sense of desire. In his argument, Samuel reconstructs the Tortoise's position, replacing "to reason" by "to want". He implies that behind any mechanization of desire, there has to be either an infinite regress or, worse, a closed loop.

[But] you aren't a "self-programmed object" (whatever that would be), but you still do have a sense of desires, and it springs from the physical substrate of your mentality. Likewise, machines may someday have wills despite the fact that no magic program spontaneously appears in memory from out of nowhere (a "self-programmed program"). They will have wills for much the same reason as you do - by reason of organization and structure on many levels of hardware and software.

An aside: the strange loops of communication and the pragmatics of coordinated agency

Art and representation depend on conventions. But those conventions can be "flouted" to achieve effects. This is well known in linguistics, but it is just as much a perceptual phenomenon - think of art.

Magritte's series of pipe paintings is fascinating and perplexing. Consider The Two Mysteries. Focusing on the inner painting, you get the message that symbols and pipes are different. Then your glance moves upward to the "real" pipe floating in the air - you perceive that it is real, while the other one is just a symbol. But that is of course totally wrong: both of them are on the same flat surface before your eyes. The idea that one pipe is in a twice-nested painting, and therefore somehow "less real" than the other pipe, is a complete fallacy. Once you are willing to "enter the room", you have already been tricked: you've fallen for an image as reality. To be consistent in your gullibility, you should happily go one level further down, and confuse image-within-image with reality. The only way not to be sucked in is to see both pipes merely as colored smudges on a surface a few inches in front of your nose. Then, and only then, do you appreciate the full meaning of the written message "Ceci n'est pas une pipe" - but ironically, at the very instant everything turns to smudges, the writing too turns to smudges, thereby losing its meaning! In other words, at that instant, the verbal message of the painting self-destructs in a most Goedelian way.

This flouting of the rules to achieve effects is particularly associated with conceptual art, where again Hofstadter is especially interested in works whose message self-destructs: works that break the rules to communicate that there are no rules.

Any time an object is exhibited in a gallery or dubbed a "work", it acquires an aura of deep inner significance - no matter how much the viewer has been warned not to look for meaning. In fact, there is a backfiring effect whereby the more that viewers are told to look at these objects without mystification, the more mystified the viewers get. After all, if a wooden crate on a museum floor is just a wooden crate on a museum floor, then why doesn't the janitor haul it out back and throw it in the garbage? Why is the name of an artist attached to it?