Clark on Philosophy and AI

Clark's tour of philosophy and AI

The problem

The radical success of commonsense psychology (Fodor 1987):

If you want to know where my physical body will be next Thursday, mechanics - our best science of middle-sized objects after all, and reputed to be pretty good in its field - is no use to you at all. Far the best way to find out (usually, in practice, the only way to find out) is: ask me!

Clark's gloss:

All this makes commonsense psychology look like a theory about the invisible, but causally potent, roots of intelligent behavior. What, then, can be making the theory true (assuming that it is)? What is a belief (or a hope, or a fear) such that it can cause a human being (or perhaps a cat, dog, etc.) to act in an appropriate way? …The goal is a fully materialistic story in which mindware emerges as nothing but the playing out of ordinary physical states and processes in the familiar physical world.

There are critics:

Churchland holds [commonsense psychology] to be superficial, distortive, and false both in spirit and in detail… He believes, like Fodor, that folk psychology requires a very specific kind of "scientific vindication" - one that effectively requires the discovery of inner items that share the contents and structures of the folk psychological apparatus. But Churchland, influenced by neuroscience and alternative forms of computational models, thinks such an outcome unlikely in the extreme.

Much of this debate can only be resolved by understanding more about the brain, about theories of the brain, and about the relationship between brain, theory, and everyday talk. Clark's chapters do a good job of explaining why the philosophical distinctions need to engage with the computer and the mind - and how that engagement will help us to understand the systems we build (and maybe build better ones).

Computation and its relevance

We thus confront a quite marvelous confluence of ideas. Turing's work clearly suggested the notion of a physical machine whose syntax-following properties would enable it to solve any well-specified problem. Set alongside the earlier work on logics and formal systems, this amounted to nothing less than "the emergence of a new level of analysis, independent of physics yet mechanistic in spirit … a science of structure and function divorced from material substance" (Pylyshyn 1986).
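To make "syntax-following" concrete, here is a minimal sketch (my illustration, not Clark's or Turing's own notation) of a Turing-style machine in Python. The machine has no grip on what its symbols mean; it just follows a transition table, and that alone is enough to compute a well-specified function:

```python
# A minimal Turing-style machine: pure syntax-following.
# The transition table is the whole "mind" of the device; it never
# knows what the symbols mean. (Illustrative sketch only.)

def run_machine(tape, table, state="start", blank="_"):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, blank)
        state, tape[pos], move = table[(state, symbol)]
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A table that flips every bit, then halts at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_machine("10110", flip_bits))  # -> 01001
```

Swap in a different table and the very same mechanism computes something else; the "structure and function" really are divorced from any particular material substance.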

Why treat thought as computation? The principal reason (apart from the fact that it seems to work!) is that thinkers are physical devices whose behavior patterns are reason respecting. Thinkers act in ways that are usefully understood as sensitively guided by reasons, ideas, and beliefs. Electronic computing devices show us one way in which this strange "dual profile" (of physical substance and reason-respecting behavior) can actually come about.
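One toy way to see the dual profile (a sketch of my own, with invented proposition names): a program that blindly matches strings can nonetheless behave in a way that respects the logical relations among the contents those strings express:

```python
# Reason-respecting behavior from pure symbol shuffling.
# Forward chaining with modus ponens: the program only compares
# strings, yet its transitions mirror the logic of the contents.
# (Illustrative only; the proposition names are invented.)

facts = {"it_is_raining"}
rules = [("it_is_raining", "streets_are_wet"),
         ("streets_are_wet", "take_umbrella")]

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)   # modus ponens, done syntactically
            changed = True

print(sorted(facts))
# ['it_is_raining', 'streets_are_wet', 'take_umbrella']
```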

Concrete examples from AI help flesh this out

Intelligence resides at, or close to, the level of deliberative thought. It consists in the retrieval of symbolically stored information and its use in processes of search. Such processes involve the generation, composition and transformation of symbolic structures until the specified conditions for a solution are met. And it works, kind of.
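That classical picture translates almost directly into code. Here is a sketch of my own (not an example from Clark) of search in that style: symbolic states are generated and transformed by named operators until the specified conditions for a solution are met:

```python
# The classical picture: generate and transform symbolic structures
# until the conditions for a solution are met. Breadth-first search
# over states reachable by named operators. (Illustrative sketch.)

from collections import deque

operators = {"double": lambda n: n * 2,
             "add_three": lambda n: n + 3}

def search(start, is_goal):
    frontier = deque([(start, [])])       # (state, plan so far)
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan                   # solution conditions are met
        for name, op in operators.items():
            nxt = op(state)
            if nxt not in seen and nxt <= 1000:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))

print(search(1, lambda n: n == 11))
# -> ['add_three', 'double', 'add_three']  (1 + 3 = 4, * 2 = 8, + 3 = 11)
```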

They are also a foil for other possibilities

Like the intentional stance:

Whenever we understand, predict, or explain the behavior of some object by talking about it as believing x, desiring y, and so on, we are, in Dennett's phrase, adopting an "intentional stance". We are treating the system as if it were making intelligent choices in line with its beliefs, desires, and needs. What the intentional stance adds to an ordinary design-oriented perspective is the idea that the target system is not just well designed but rational - in receipt of information and capable of directing its actions in ways likely to yield successful behaviors and the satisfaction of its needs.

Mentalistic discourse, as Dennett repeatedly insists, picks out real threads in the fabric of causation. We need not, however, think that such threads must show up as neat items in an inner neural economy. Instead, we may treat the mentalistic attributions as names for scattered causes that operate via a complex web of states distributed throughout the brain (and perhaps, the body and the world).

The need to use science to sharpen our account of natural computation and thought

The bare explanatory schema, in which semantic patterns emerge from an underlying syntactic, computational organization, covers a staggeringly wide range of cases. The range includes, for example, standard AI approaches involving symbols and rules, "connectionist" approaches that mimic something of the behavior of neural assemblies, and even Heath Robinsonesque devices involving liquids, pulleys and analog computations.

The problem of subroutines:

To make matters worse, a variety of different computational stories may be told about one and the same physical device. Depending on the grain of analysis used, a single device may be depicted as carrying out a complex parallel search or as serially transforming an input x into an output y. Clearly, what grain we choose will be determined by what questions we hope to answer.
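A toy illustration of the grain problem (my example, not one from the book): one and the same function can be truly described as a serial input-to-output transformation or as a competition among candidates, depending on how closely we look:

```python
# Two computational stories about one device. Coarse grain: the
# function serially transforms an input x into an output y. Fine
# grain: it runs a competition in which every candidate is scored
# and the scores race against each other. Both descriptions are
# true; which we use depends on the questions we want answered.
# (Illustrative sketch; names and numbers are invented.)

def classify(x, prototypes):
    # Fine-grained story: a parallel-style competition among candidates.
    scores = {label: -abs(x - proto) for label, proto in prototypes.items()}
    # Coarse-grained story: input x is transformed into output y.
    return max(scores, key=scores.get)

print(classify(7, {"small": 1, "medium": 5, "large": 20}))  # -> 'medium'
```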

The bag of tricks:

It is probably true that at least some psychological states will be multiply realizable. That is to say, several different hardware and software organizations will be capable of supporting the same mental states… On the negative side, however, it is equally unlikely that we will discover a good formal model of human thought if we proceed in a neurophysiological vacuum. For example, human memory seems to involve multiple psychologically and neurophysiologically distinct systems. Much of the relevant evidence comes not from normal, daily behavior but from studies of brain damage and brain abnormalities. The point about multiple memory systems may be carried a step further by considering the more general idea of multiple cognitive systems. No single, central, logical representation of the world need link perception and action - the representation of the world is the pattern of relationships between all its partial representations.
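Multiple realizability is easy to make concrete in code. In this sketch (the class names and details are invented for illustration), two very different internal organizations support exactly the same coarse-grained "remembering" profile:

```python
# Multiple realizability: two different organizations realizing the
# same coarse-grained memory profile. From the outside (store, then
# recall) the two are indistinguishable; inside, one is a keyed
# table, the other an associative, match-based store.
# (Hypothetical illustration; class names are mine, not Clark's.)

class TableMemory:
    def __init__(self):
        self._items = {}
    def store(self, key, value):
        self._items[key] = value          # direct keyed lookup
    def recall(self, key):
        return self._items[key]

class AssociativeMemory:
    def __init__(self):
        self._traces = []                 # unordered list of traces
    def store(self, key, value):
        self._traces.append((key, value))
    def recall(self, key):
        # retrieve by best match rather than direct lookup
        return max(self._traces, key=lambda t: t[0] == key)[1]

for memory in (TableMemory(), AssociativeMemory()):
    memory.store("capital_of_france", "Paris")
    memory.store("capital_of_peru", "Lima")
    print(memory.recall("capital_of_peru"))  # 'Lima' from both
```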

The connectionist soup:

It is one thing to insist that my belief that it is raining must be a genuine cause, and quite another to insist, as Fodor seems to do, that there be a neat, well-individuated inner item that corresponds to it. Scattered causation occurs when a number of physically distinct influences are usefully grouped together (as in the notion of an economic depression) and are treated as a unified force for some explanatory purpose. The image fits nicely with recent work on connectionism, collective effects, emergence and dynamic systems.
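A toy sketch of a scattered cause (the vectors and labels here are invented): the "belief" is a pattern spread across many units, nameable and measurable as one cause even though no single component carries it:

```python
# A "scattered cause": the belief that it is raining shows up not as
# one dedicated inner unit but as a pattern spread across many units.
# We can still name it and treat it as a unified cause.
# (Toy illustration; the numbers are invented.)

activations = [0.9, 0.1, 0.8, 0.2, 0.7]     # current state of five units
raining_pattern = [1.0, 0.0, 1.0, 0.0, 1.0]  # the distributed "belief"

# Degree to which the scattered pattern is present (dot product).
strength = sum(a * p for a, p in zip(activations, raining_pattern))
print(f"'it is raining' is active to degree {strength:.1f}")  # 2.4

# No single unit is "the belief": zeroing any one unit merely weakens it.
damaged = activations[:]
damaged[0] = 0.0
print(sum(a * p for a, p in zip(damaged, raining_pattern)))   # 1.5
```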

The flexibility of computation points out ways in which our intuitions are underdetermined

…we must be careful to distinguish the question of whether such and such a program constitutes a good model of human intelligence from the question of whether the program (when up and running) displays some kind of real, but potentially nonhuman form of intelligence and understanding. …we have only a few options. We could insist that all real thinkers must solve problems using exactly the same kinds of computational strategies as human brains. We could hope for some future scientific understanding of the fundamentals of cognition that would allow us to recognize the shape of alternative, but genuine, ways in which various computational organizations might support cognition. Or we could look to the gross behavior of the systems in question, insisting, for example, on a broad and flexible range of responses to a multiplicity of environmental demands and situations.

The Chinese room and what it lacks. For Clark:

The original thought experiment strikes a nerve… It is plausible to suppose that if we seek to genuinely instantiate (not just roughly simulate) mental states in a computer, we will need to do more than just run a program that manipulates relatively high-level (semantically transparent) symbolic structures… The functionalist identifies being in a mental state with being in an abstract functional [computational] state, where a functional state is just some pattern of inputs, outputs and internal state transitions taken to be characteristic of being in the mental state in question. Imagine instead a much finer grained formal description, a kind of "microfunctionalism" that fixes the fine detail of the internal state-transitions as, for example, a web of complex mathematical relations between simple processing units. Once we imagine such a finer grained formal specification, intuitions begin to shift.
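Here is a hedged sketch of what a microfunctional specification might look like (a toy network of my own; the weights and sizes are invented): the description is fixed not by coarse input-output patterns but by the fine-grained web of mathematical relations among simple units:

```python
# "Microfunctionalism" as code: the functional specification is the
# whole web of mathematical relations among simple units - here, a
# weight matrix plus an update rule, not a coarse input/output table.
# (Toy network; weights and sizes are invented for illustration.)

import math

weights = [[0.0, 1.2, -0.7],
           [0.5, 0.0, 0.9],
           [-0.3, 0.8, 0.0]]   # unit-to-unit connection strengths

def step(state):
    """One fine-grained state transition of the whole web."""
    return [1 / (1 + math.exp(-sum(w * s for w, s in zip(row, state))))
            for row in weights]

state = [0.2, 0.9, 0.4]
for _ in range(3):               # three fine-grained transitions
    state = step(state)
print([round(s, 3) for s in state])
```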

Symbols, grounding and everyday life:

Our everyday skills, which amount to a kind of expert engagement with the practical world, are said to depend on a foundation of "holistic similarity recognition" and bodily lived experience… The product of this experience is not a set of symbolic strings squirreled away in the brain but a kind of knowing how - a knowing how that cannot be reduced to any set, however extensive, of knowing-thats.