Lecture 1 Quotes

What computational thinking is

From external link: Jeannette Wing's article on Computational Thinking:

Computational thinking involves solving problems, designing systems, and understanding human behavior, by drawing on the concepts fundamental to computer science… Computer science is the study of computation - what can be computed and how to compute it. Computational thinking thus has the following characteristics:

Conceptualizing not programming. Computer science is not computer programming. Thinking like a computer scientist means more than being able to program a computer. It requires thinking at multiple levels of abstraction…

A way that humans, not computers, think. Computational thinking is a way that humans solve problems; it is not trying to get humans to think like computers. Computers are dull and boring; humans are clever and imaginative. We humans make computers exciting. Equipped with computing devices, we use our cleverness to tackle problems we would not dare take on before the age of computing and build systems with functionality limited only by our imaginations…

Ideas, not artifacts. It's not just the software and hardware artifacts we produce that will be physically present everywhere and touch our lives all the time, it will be the computational concepts we use to approach and solve problems, manage our daily lives, and communicate and interact with other people.

How to approach computational problem solving

Like any other problem-solving, with common sense.

From Polya's How to Solve It:

Solving problems is a practical skill like, let us say, swimming. We acquire any practical skill by imitation and practice. Trying to swim, you imitate what other people do with their hands and feet to keep their heads above water, and, finally, you learn to swim by practicing swimming. Trying to solve problems, you have to observe and to imitate what other people do when solving problems and, finally, you learn to do problems by doing them.

Trying to find the solution, we may repeatedly change our point of view, our way of looking at the problem… Our conception of the problem is likely to be rather incomplete when we start the work; our outlook is different when we have made some progress; it is again different when we have almost obtained the solution…

First we have to understand the problem; we have to see clearly what is required. Second, we have to see how the various items are connected, how the unknown is linked to the data, in order to obtain the idea of the solution, to make a plan. Third, we carry out our plan. Fourth, we look back at the completed solution, we review and discuss it.

How to approach computational problem solving in teams

Teamwork involves effective communication, diverse expertise, and a technical framework that facilitates the collaboration.

From Herbert Simon's description of his initial collaboration on AI with Al Newell and Cliff Shaw in Models of My Life:

Al probably talked more than I; that is certainly the case now, and I think it always has been so. But we ran those conversations with the explicit rule that one could talk nonsensically and vaguely, but without criticism unless you intended to talk accurately and sensibly. We could try out ideas that were half-baked or quarter-baked or not baked at all, and just talk and listen and try them again…

We agreed to meet each Saturday, roaming on those occasions over a wide range of topics - particularly problem solving and the chess language Al was trying to devise. Al tended to supply ideas starting from the language and computer end, I starting from human problem solving and what we knew of the heuristics there. This is one of the role specializations that, subject to strong qualifications, we mildly adhered to for some years. In the course of these discussions, we considered illustrative problems from areas other than chess, including Euclidean geometry, Katona-type matchstick problems, and symbolic logic…

Considerable attention was meanwhile being given to the programming language required for the project. On the basis of their previous experience, Cliff and Al knew that it would be difficult to write our programs directly in the machine language of the computer. In artificial intelligence programs, you cannot predict what data structures the system will need to build and store, or how these structures will interact and be modified in the course of the computation. The utmost flexibility is required, and information stored in memory must be indexed in ways that will make it accessible whenever needed…

A memo written by Al on April 2, 1956, marks a major breakthrough - the use of an association memory in the form of "list structures" to make search dimensionless. The idea has a dual source in machine technology and in the idea of human association nets, and extended to both networks of lists and lists of descriptions. Al and Cliff solved the implementation problems soon thereafter.
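The "list structures" Newell and Shaw arrived at were, in essence, linked cells that can grow, share structure, and carry description lists, with no size fixed in advance; the idea later underlay IPL and Lisp. A minimal sketch in modern Python (illustrative only, not their actual implementation; the names `Cell`, `push`, and `to_list` are invented for this example):

```python
# Illustrative sketch of "list structures": linked cells that can grow,
# share tails, and carry attribute-value descriptions. Not Newell and
# Shaw's IPL implementation; names here are invented for illustration.

class Cell:
    def __init__(self, symbol, link=None, description=None):
        self.symbol = symbol                   # the item stored in this cell
        self.link = link                       # next cell in the list, or None
        self.description = description or {}   # attribute-value pairs

def push(symbol, lst):
    """Prepend a symbol, returning a new head cell; the old list is shared."""
    return Cell(symbol, link=lst)

def to_list(cell):
    """Walk the links and collect the symbols, for inspection."""
    out = []
    while cell is not None:
        out.append(cell.symbol)
        cell = cell.link
    return out

moves = push("e2-e4", push("d2-d4", None))
moves.description["evaluated"] = True
print(to_list(moves))  # ['e2-e4', 'd2-d4']
```

The point of the structure is exactly the flexibility Simon describes: cells can be linked, annotated, and extended in ways that need not be predicted before the computation runs.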

Knowledge as a theme of the class

From Newell's AAAI Presidential Address, The Knowledge Level:

  • Knowledge is intimately linked with rationality. Systems of which rationality can be posited can be said to have knowledge. It is unclear in what sense other systems can be said to have knowledge.
  • Knowledge is a competence-like notion, being a potential for generating action.
  • The knowledge level is an approximation. Nothing guarantees how much of a system's behavior can be viewed as occurring at the knowledge level. Although extremely useful, the approximation is quite imperfect, not just in degree but in scope.
  • Representations exist at the symbol level, being systems (data structures and processes) that realize a body of knowledge at the knowledge level.
  • Knowledge serves as the specification of what a symbol structure should be able to do.
  • Logics are simply one class of representations among many, though uniquely fitted to the analysis of knowledge and representation.

Motivating the technical materials we will cover

From Newell's AAAI Presidential Address, The Knowledge Level:

Hands-on knowledge:

[The practice of AI] represents an important source of knowledge about the nature of intelligent systems.

Observing our own practice - that is, seeing what the computer implicitly tells us about the nature of intelligence as we struggle to synthesize intelligent systems - is a fundamental source of scientific knowledge for us. It must be used wisely and with acumen, but no other source of knowledge comes close to it in value.

Computation as problem solving:

The second view [of the enterprise of AI] concerns the functional components that comprise an intelligent system. There is a perceptual system, a memory system, a processing system, a motor system, and so on. It is this second view that we need to address the role of representation and knowledge.

Weeks one and two.

Data structures, languages and interpreters:

It is clear to us all what representation is in this picture. It is the data structures that hold the problem and will be processed into a form that makes the solution available…

We also understand, though not so transparently, why the representation represents. It is because of the totality of procedures that process the data structure. They transform it in ways consistent with the interpretation of the data structure as representing something. We often express this by saying that a data structure requires an interpreter, including in that term much more than just the basic read/write processes, namely, the whole of the active system that uses the data structure.
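Newell's point that a data structure represents only relative to its interpreter can be made concrete with a small sketch (invented for illustration): the nested tuples below mean arithmetic only because `evaluate` treats them that way.

```python
# A data structure "represents" only relative to the procedures that
# process it. These nested tuples mean arithmetic expressions only
# because evaluate() interprets them that way. Illustrative sketch.

def evaluate(expr):
    """Interpret nested tuples ('+', a, b) and ('*', a, b) as arithmetic."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    if op == "+":
        return evaluate(left) + evaluate(right)
    if op == "*":
        return evaluate(left) * evaluate(right)
    raise ValueError(f"unknown operator: {op}")

expr = ("+", 1, ("*", 2, 3))
print(evaluate(expr))  # 7
```

Handed to a different interpreter (say, a pretty-printer), the very same structure would represent something else; the representation lives in the totality of procedures, not in the tuples alone.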

Weeks three and four.

Computability, generativity, and universality in computation:

The underlying phenomenon is the generative ability of computational systems, which involves an active process working on an initially given data structure. Knowledge is the posited extensive form of all that can be obtained potentially from this process. This potential is unbounded when the details of processing are unknown and the gap is closed by assuming (from the principle of rationality) the processing to be whatever makes the correct selection.

All that is needed [to reason about another agent at the knowledge level] is a single general purpose computational mechanism, namely, creating an embedding context that posits the agent's goals and symbolizes (rather than executes) the resulting actions. Thus, simulation turns out to be a central mechanism that enables the knowledge level and makes it useful.

Weeks three and four.

Abstraction, modularity, and the physical realization of computers:

Each level is defined in two ways. First, it can be defined autonomously, without reference to any other level… Second, each level can be reduced to the level below… Neither of these two definitions of a level is the more fundamental. It is essential that they both exist and agree.

Computer system levels are a reflection of the nature of the physical world. They are not just a point of view that exists solely in the eye of the beholder. This reality comes from computer system levels being genuine specializations (there is not always a higher-level description corresponding to lower-level ones), rather than being just abstractions that can be uniformly applied.

Weeks five and six.

Representations, insights, and algorithms:

A good example is our fascination with problems such as the mutilated checkerboard problem. The task is to cover a checkerboard with two-square dominoes… The problem is to do it on a (mutilated) board which has two squares removed, one from each of the two opposite corners. [The task is to show that this is impossible.] This goes from apparently intractable combinatorially, if the task is represented as all ways of laying down dominoes, to transparently easy, if the task is represented as just the numbers of black and white squares that remain to be covered.
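The representation shift can be sketched in a few lines of Python (an illustrative sketch, not part of the original text): each domino covers one black and one white square, so a region is tileable only if it has equal counts of each, and the counting takes no search at all.

```python
# The representation shift in the mutilated checkerboard problem:
# instead of enumerating domino placements, just count colors.
# Every domino covers one black and one white square, so a region
# can be tiled only if the two counts are equal.

def color_counts(squares):
    """Count white and black squares; (row + col) even = white, odd = black."""
    white = sum(1 for r, c in squares if (r + c) % 2 == 0)
    return white, len(squares) - white

board = {(r, c) for r in range(8) for c in range(8)}
mutilated = board - {(0, 0), (7, 7)}   # remove two opposite corners (same color)

white, black = color_counts(mutilated)
print(white, black)  # 30 32 -> unequal counts, so no tiling exists
```

The two removed corners share a color, leaving 30 squares of one color and 32 of the other; under the counting representation, impossibility is immediate.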

Weeks seven through ten.

Knowledge and the characterization of intelligent behavior:

To treat a system at the knowledge level is to treat it as having some knowledge and some goals, and believing it will do whatever is within its power to attain its goals, in so far as its knowledge indicates [the principle of rationality].

Knowledge, in the principle of rationality, is defined entirely in terms of the environment of the agent, for it is the environment that is the object of the agent's goals, and whose features therefore bear on the way actions can attain goals. This is true even if the agent's goals have to do with the agent itself as a physical system. Therefore, the solutions are ways to say things about the environment, not ways to say things about reasoning, internal information processing states, and the like.
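The principle of rationality, as stated above, licenses a prediction without any model of internal processing: given what the agent knows about how actions bear on its goals, predict the action its knowledge indicates. A toy rendering in Python (names and the dictionary encoding are invented for this sketch):

```python
# A toy rendering of the principle of rationality: an agent described
# only by its knowledge (which action leads to which outcome in the
# environment) and a goal, predicted to select whatever action its
# knowledge indicates attains the goal. Encoding is illustrative.

def predict_action(knowledge, goal):
    """knowledge: {action: outcome}; return an action whose outcome is the goal."""
    for action, outcome in knowledge.items():
        if outcome == goal:
            return action
    return None  # the agent's knowledge indicates no way to attain the goal

knowledge = {"walk-north": "at-door", "walk-south": "at-window"}
print(predict_action(knowledge, "at-door"))  # walk-north
```

Note that everything in the description is about the environment (actions, outcomes, goals), not about reasoning steps or internal states, which is exactly Newell's point.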

Weeks eleven through fourteen.