Notes on Cohen, Oates, Beal and Adams
After Dretske:
Dretske's criteria for a state to be a meaningful representational state: it must indicate some condition, it must have the function of indicating that condition, and it must have acquired this function as the result of a learning process.
Paraphrase (always dangerous in a philosophical paper):
Learning meaningful representations, then, is tantamount to learning reliable relationships between denoting tokens (e.g., random variables) and learning what to do when the tokens take on particular values.
Formalism (always dangerous in a philosophical paper):
The minimum required of a representation by Dretske's theory is an indicator relationship s ← I(S) between an external world state S and an internal state s, together with a function that exploits that relationship through some action a, presumably changing the world state: f(s, a) → S. The problems are then to learn the representations s ~ S and the exploiting functions f.
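The formalism above can be made concrete in a toy sketch. Everything here is invented for illustration (the world states, the indicator, and the dynamics are not from the paper): an indicator I maps world states to internal tokens, and an exploiting function maps those tokens to actions that change the world.

```python
# Toy sketch of Dretske's minimal representation, under assumed dynamics.
# All names and states are illustrative, not from Cohen et al.

def indicator(world_state):
    """I(S) -> s: an internal state that reliably covaries with S."""
    return 0 if world_state == "food-left" else 1

def exploit(internal_state):
    """The function that gives the indicator its role: map s to an action a."""
    return "move-left" if internal_state == 0 else "move-right"

def step(world_state):
    """f(s, a) -> S: acting on the internal state changes the world state."""
    s = indicator(world_state)
    a = exploit(s)
    # In this toy world, moving toward the food consumes it.
    side = world_state.split("-")[1]
    return "food-consumed" if a == "move-" + side else world_state

print(step("food-left"))
print(step("food-right"))
```

The point of the sketch is only that both pieces are needed: the indicator alone is just correlation; it becomes a representation, on Dretske's view, once some function exploits it.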
How does this view relate to the critical discussion we've had so far?
They offer this statement of their research goals in theoretically neutral terms.
Most work in machine learning, KDD, and AI and statistics is essentially data analysis, with humans, not machines, assigning meanings to regularities found in the data… Our goal, though, is to have the machine do all of it: select data, process it, and interpret the results, then iterate to resolve ambiguities, test new hypotheses, refine estimates, and so on.
Is it possible that such a process would have structure - symbols, relationships to the world, norms of inquiry, criticism and creativity - that would support different bases for the attributions of meaning to representations?
The remainder of the paper consists of an examination of two implemented approaches to robot learning from the standpoint of Dretske's definition.
Shows that the output of a learning algorithm applied to real-world data can inform action.
Their criticisms:
Other criticisms:
Concretely, contrast this with the relocatable action models of Leffler, Littman, and Edmunds (2007), with learned clustering as in Leffler, Mansley, and Littman (2008). There is no time-series structure there; the world model assumes that states can be clustered, directly from perceptual information, into features relevant for action dynamics. But is anything missing?
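The clustering idea behind that contrast can be sketched as follows. This is not the authors' implementation: the data, the terrain labels, and the hand-rolled k-means are all invented for illustration. The point is that perceptual features are grouped into classes, and each class is assumed to share one action model, with no temporal structure involved.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on tuples of floats; illustrative, not optimized."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Two invented "terrain" feature clusters: slippery vs. high-friction.
ice = [(0.10 + random.Random(i).random() * 0.05, 0.9) for i in range(20)]
carpet = [(0.80 + random.Random(i).random() * 0.05, 0.2) for i in range(20)]
centers = kmeans(ice + carpet, k=2)
# Each center stands for an action-relevant state class: the same action
# model (e.g., expected displacement per motor command) is reused within it.
```

What the clustering does not capture, and what the "is anything missing?" question is pressing on, is any indicator relationship learned over time: the classes come straight from momentary perceptual features.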
Closer to a representation:
Claim:
Structural abstraction of representation and assignment of meaning are all done by peruse [the system in question].
Points for discussion: