Monday, February 09, 2009

Physical Symbol Systems

At the university, I'm now following a course on knowledge systems. There's plenty of room to be sceptical about knowledge systems, in the sense that general procedural programs could be thought of as knowledge systems too. What identifies a knowledge system is the ability to separate the knowledge from the processing (the inference engine). This distinction, if followed to the letter, means that a program only counts as a knowledge system once its knowledge is explicitly declared and not mingled with the source code of the inference engine. That rules out most procedural implementations, if not all. In a simple interpretation, I'd say that knowledge systems can only be built in Prolog.
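To make the separation concrete, here is a minimal sketch in Python (my own illustration, not from the course): the facts and rules are plain data, and a small, generic forward-chaining loop plays the role of the inference engine. All predicate names are invented for the example.

```python
# Knowledge lives entirely in data structures, separate from the engine.
facts = {("mammal", "cow"), ("mammal", "dog")}

# Each rule: if the (single) premise holds for some x, conclude the head for x.
rules = [
    {"if": [("mammal", "?x")], "then": ("warm_blooded", "?x")},
    {"if": [("warm_blooded", "?x")], "then": ("animal", "?x")},
]

def forward_chain(facts, rules):
    """Generic inference engine: apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            premise_pred, _var = rule["if"][0]   # single-premise rules only
            head_pred, _ = rule["then"]
            for fact_pred, fact_arg in list(facts):
                if fact_pred == premise_pred:
                    new_fact = (head_pred, fact_arg)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

print(forward_chain(facts, rules))
# Derives ('warm_blooded', 'cow'), ('animal', 'cow'), etc. without the
# engine knowing anything about cows or mammals in particular.
```

The point of the sketch is only that the engine never mentions the domain: swapping in a different set of facts and rules changes what it concludes, not how it works.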

I don't want to discuss that point for the rest of this post, however. What is more interesting is the hypothesis stated by Allen Newell and Herbert Simon. They stated:
A physical symbol system has the necessary and sufficient means for general intelligent action.
This implies a couple of things. First, that intelligence can be thought of as symbol manipulation. Second, since computers can be thought of as symbol manipulation machines, computers can in theory become intelligent.

Many efforts to represent knowledge in computers start with some written-down term for a particular symbol. Reasoning with text is a bit limiting, however: the symbol itself cannot be broken down any further, and its meaning is only invoked through its connections with other symbols and concepts.

Thinking more abstractly, however, symbols are representations of knowledge, and they need not be textual or visual. I could, for example, describe a cow as 0x7A832B12 and then attach further, different representations of the same thing. Colloquially, I could add the image of a cow, the sound and the smell, and tie them all together as representations of the same concept.
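A rough sketch of what that might look like, with a plain Python dictionary and made-up file names purely for illustration: the symbol is just an opaque number, and everything readable about it hangs off it as a separate representation.

```python
# The symbol itself carries no meaning; it is only an identifier.
cow = 0x7A832B12

# Different representations of the same concept, tied to the one symbol.
representations = {
    cow: {
        "text":  "cow",
        "image": "cow.jpg",   # hypothetical media files
        "sound": "moo.wav",
        "smell": None,        # no representation available for this modality
    }
}

def describe(symbol):
    """Return every non-empty representation attached to a symbol."""
    reps = representations.get(symbol, {})
    return {kind: value for kind, value in reps.items() if value is not None}

print(describe(cow))  # {'text': 'cow', 'image': 'cow.jpg', 'sound': 'moo.wav'}
```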

Symbol reasoning systems then require the computer to categorise the symbols themselves: to find the ways in which they are equal, the ways in which they differ, and how symbols might be related to other symbols, and in which way. It's even possible that the relation itself is yet another symbol.
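One way to picture that last idea, again only as a sketch with invented identifiers, is a store of triples in which the relation is itself a symbol that can appear as the subject of other triples:

```python
# Relations stored as triples; the relation "is_a" is itself a symbol
# that can be described by further triples.
triples = [
    ("cow",    "is_a",     "mammal"),
    ("mammal", "is_a",     "animal"),
    ("is_a",   "is_a",     "relation"),    # the relation described as a symbol
    ("is_a",   "property", "transitive"),
]

def related(subject, relation):
    """Everything the subject is linked to via the given relation."""
    return [obj for subj, rel, obj in triples if subj == subject and rel == relation]

print(related("cow", "is_a"))   # ['mammal']
print(related("is_a", "is_a"))  # ['relation']
```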

A limitation of our thinking might be that we rely too heavily on our language in order to be able to debug knowledge systems. I'm starting to consider that language itself might not be the most efficient medium for developing reasoning systems.

Another important remark I read today is that there is as yet no true way to assign meaning to symbols, at least not in so far as meaning can be represented in a computer at all.

Hmm... the ideas I have about this subject are really abstract and it's almost impossible to write them down in a sensible way at this time. But why rely on our language, or any other, for the representation of our knowledge? If knowledge has different appearances, shouldn't we let the computer choose how it decides to store it? As things stand, the designer of a program makes all the decisions about the data and the data structures.

For learning machines, though, we may need to innovate in how data and knowledge are stored, so that more complex systems can use them in different ways, hopefully with the ability to derive new knowledge from existing knowledge, which still seems a hard limitation for computers at this time.

So... the goal: build a knowledge system without a specific design for the storage of knowledge, or build it as a hybrid combination of implicit and explicit knowledge.
