Thursday, September 04, 2008

A.I. is about search

Well, so far I've enrolled in a couple of courses, one of them being "A.I. Kaleidoscope". It's a great course with very good course material, including exceptional material from the professor.

The book I'm reading, called "Artificial Intelligence", is very well written. It highlights a couple of philosophical considerations as well as explaining the mathematical underpinnings of A.I. that have been established so far, so it covers quite a broad area.

One of the statements I came across is that A.I. is about searching problem spaces. Whereas some problems have direct algorithms, other problems have a state space: configurations are mapped onto states, and transitions from one state to another are shown as arcs in a graph. That brings the discussion to graph theory and trees, to breadth-first and depth-first searches, heuristics, and so on.
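To make that concrete, here is a minimal sketch of the idea in Python (the graph and state names are made up for illustration): states are nodes, transitions are arcs, and a breadth-first search walks the space until it reaches a goal state.

```python
from collections import deque

# Toy state space: each state maps to the states reachable from it (the arcs).
GRAPH = {
    "start": ["A", "B"],
    "A": ["C"],
    "B": ["C", "goal"],
    "C": ["goal"],
    "goal": [],
}

def breadth_first_search(graph, start, goal):
    """Return a path from start to goal, exploring the shallowest states first."""
    frontier = deque([[start]])          # paths still to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph[state]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # goal not reachable

print(breadth_first_search(GRAPH, "start", "goal"))  # ['start', 'B', 'goal']
```

Swapping popleft() for pop() turns the same routine into a depth-first search; heuristics come in when the frontier is ordered by an estimate of how promising each state looks.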

The idea is thus that A.I. is about mapping knowledge within a certain domain and understanding the phases or steps that an expert goes through in order to come to a reasonable conclusion (reasonable meaning not necessarily optimal, but certainly acceptable).

In previous posts, I sometimes discussed that we human beings aren't necessarily purely rational, but act emotionally, as if we're programmed. We think we're exceptionally clever though. Well, another part of the book discusses the fact that we only consider things intelligent when they act in ways that we ourselves would and could act. You could argue, for example, that intelligence shouldn't be subject to such a narrow definition. But philosophically there is no common agreement on the actual definition of intelligence, so this discussion isn't that useful at this time (within a blog, that is).

I'd like for the moment to set aside the general "folk" consensus that intelligence is solely determined by human observation and imitation (dolphins are considered intelligent because they seem able to have intricate conversations in their speech and respond in seemingly human ways to our stimuli and interactions). Taking thus a slightly wider interpretation of intelligence, and accepting the statement that "A.I. is about search", you can only conclude that Google built the most intelligent being on the planet. It's capable of searching through 70% of the internet at light speed, you always find what you're looking for (unless it's too specific or not specific enough), and so on. Now, perhaps the implementation isn't necessarily intelligent, but the performance of the system surely demonstrates to me, using my browser, that it's a very intelligent system.

One important branch in A.I. is about "emergence", something I blogged about recently. It's when simple individuals, each within their own little context and environment, execute actions which within a greater context build up to a very intricate system that no individual could control, but that together displays highly sophisticated attributes of intelligence. An example could be free market mechanisms. The information any single individual has available to control the logistics of vegetables in a single city is limited, and a single individual most likely couldn't optimize this task. But all the vegetable sellers in a certain city are very likely to be apt at optimizing their local inventories in such a way that there is the least waste and optimal profit. Optimal profit means having just enough for the people in their local environment, but not so much that it has to be thrown away.

These "agents" as they are called in A.I. act on their immediate environment. But taken together on a higher level, their individual actions contribute to a higher level of intelligence or optimization than possibly a single instance, computer, individual or thing could be if they were to understand the entire problem space, understand it and optimize in it. The core of the above is that many intricate and complex systems consist of simple agents that behave according to simple rules, but by consistently applying these rules, they can achieve "intelligence" that far exceeds their individual capacity.

So, A.I. does seem to be about search, but it's not about finding the optimum. Maths is about finding optima and truths; it's algorithmic and therefore must be absolute and consistent. A.I. is about a problem space and possible solutions, trying to find solutions that are as good as possible (applying "intelligence"), while always taking into account the cost of getting there.
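To illustrate that trade-off (again my own toy sketch, not the book's): a hill-climbing search accepts any improving step and simply stops when its step budget runs out, settling for a good-enough answer instead of a proven optimum.

```python
import random

def hill_climb(score, start, neighbours, max_steps=200):
    """Greedy local search: accept any improving neighbour, stop when the step budget runs out."""
    current = start
    for _ in range(max_steps):
        candidate = random.choice(neighbours(current))
        if score(candidate) > score(current):
            current = candidate
    return current                       # a good-enough answer, not a proven optimum

# Toy problem (made up for illustration): the best value is 37, reachable in steps of 1.
score = lambda x: -(x - 37) ** 2
neighbours = lambda x: [x - 1, x + 1]
random.seed(0)
print(hill_climb(score, start=0, neighbours=neighbours))  # lands at or near 37
```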

Humans don't always find optimal solutions to problems. They deal with problems at hand and are sometimes called "silly" or "stupid" by other humans (agents).

One of the things I liked about the book is the claim that "culture" and "society" are instrumental to intelligence. It clearly suggests that interaction is needed for intelligence to occur, in fact for intelligence to exist. It strongly suggests that intelligence is thus cultural, but also infused with and created by the culture itself.

If A.I. is about search, and more recent posts are about semantic models, where does this leave neural networks? I think the following:
  1. You can't build a human brain into a computer due to memory, bandwidth, CPU and space constraints. So forget about it.
  2. A.I. shows that you can model certain realities in different ways. There are known ways to do this through graphs, but the relationships in those graphs are too hard and clear-cut. They should be softer.
  3. Searching a space doesn't exclude the possibility of indexing knowledge.
  4. Relational databases may have tables with multiple indices. Why not give knowledge embedded in A.I. systems multiple entry points too, based on the input sensor?
Thus... what if we imagine an A.I. which behaves in some ways unlike the human brain and in other ways like it, and uses multiple "semantic" indices for interpreting certain contents and contexts?
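As a very rough sketch of what those multiple entry points could look like (all names and data below are made up purely for illustration), think of one knowledge table of symbols plus a separate index per sensor, the way a relational table can carry several indexes:

```python
# One "knowledge table" of symbols, plus one index per sensor modality.
knowledge = {
    "A": {"meaning": "fresh coffee nearby"},
    "B": {"meaning": "rain is coming"},
}

indices = {
    "smell":   {"roasted-beans": "A", "wet-asphalt": "B"},
    "vision":  {"steaming-cup": "A", "dark-clouds": "B"},
    "hearing": {"grinder-noise": "A", "thunder": "B"},
}

def lookup(sensor, percept):
    """Enter the knowledge table through whichever sensor produced the percept."""
    symbol = indices[sensor].get(percept)
    return knowledge.get(symbol)

print(lookup("smell", "wet-asphalt"))    # {'meaning': 'rain is coming'}
print(lookup("vision", "steaming-cup"))  # {'meaning': 'fresh coffee nearby'}
```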

Latent Semantic Indexing is a technique to describe a certain text and then give it some sort of index (a rating). You could then do the same to another piece of text and compare the two. The degree to which the two are alike is a score for their similarity. Thus, LSI could serve as a demonstration of the technique for semantic indexing (and possibly storage) of other receptors as well (sensors/senses).
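For what it's worth, here is a toy sketch of how LSI is usually done (tiny made-up documents, plain numpy, no weighting): build a term-document matrix, reduce it with an SVD, and compare documents by cosine similarity in the reduced space.

```python
import numpy as np

# Tiny made-up corpus; real LSI would use far more text and tf-idf style weighting.
docs = [
    "the cat sat on the mat",
    "a cat and a dog played",
    "stock markets fell sharply today as markets fell",
]
terms = sorted({w for d in docs for w in d.split()})

# Term-document matrix: rows are terms, columns are documents.
A = np.array([[d.split().count(t) for d in docs] for t in terms], dtype=float)

# Reduce to k latent dimensions with an SVD (the "latent semantic" part).
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T        # one k-dimensional vector per document

def similarity(i, j):
    a, b = doc_vectors[i], doc_vectors[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity(0, 1))   # high: the two cat documents share a latent direction
print(similarity(0, 2))   # ~0: no shared latent direction with the stock-market text
```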

Imagine that a computer has access to smells (artificial nose), images (camera), audible sounds (microphone) and so on, and that it can maintain a certain stream of this information in memory for a certain amount of time. Together, this information is a description of the current environment. Then we code the current information, using an algorithm yet to be constructed, such that it can be indexed. And we create a symbol "A" in the table (the meaning) and create indices for smell, vision and hearing that point to A. Any future perception of either the smell, the vision or the hearing might point to A, but not as strongly as when all indices point to it (partial matches leave room for confusion).
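Continuing the sketch from above (again, an invented toy rather than a worked-out design): each sense gets its own index into the symbol table, and the strength of a recall grows with the number of senses that agree.

```python
# Bind one symbol to several senses; recall strength = fraction of senses that agree.
memory_indices = {"smell": {}, "vision": {}, "hearing": {}}

def remember(symbol, percepts):
    """Store a symbol so it is reachable through every sense that perceived it."""
    for sense, percept in percepts.items():
        memory_indices[sense][percept] = symbol

def recall(percepts):
    """Return (symbol, strength): strength is the fraction of senses pointing the same way."""
    votes = {}
    for sense, percept in percepts.items():
        symbol = memory_indices[sense].get(percept)
        if symbol is not None:
            votes[symbol] = votes.get(symbol, 0) + 1
    if not votes:
        return None, 0.0
    best = max(votes, key=votes.get)
    return best, votes[best] / len(memory_indices)

remember("A", {"smell": "coffee", "vision": "steam", "hearing": "grinder"})
print(recall({"smell": "coffee", "vision": "steam", "hearing": "grinder"}))  # ('A', 1.0)
print(recall({"smell": "coffee"}))  # ('A', 0.33...): weaker, room for confusion
```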

The problem space in this example is limited to the combination of the senses and what it means, and to searching for possible explanations within each "sense" area.

The difference from more classic A.I. is that the classic version attempts to define context and define reality IN ORDER to classify it. The version above doesn't care much about the actual meaning (how we experience or classify it with our knowledge after x years of life). It cares about how similar one situation is to another. In that sense, meaning is defined by how similar some situation is to another.

Now... if the indices are constructed correctly, similar situations should be close to one another. Thus, a computer should be able to quickly activate other memories and records of possibly similar situations.
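A last little sketch of that "activation" idea (made-up situations and numbers): if every remembered situation is a point in the index space, recalling similar situations is just a nearest-neighbour lookup.

```python
import numpy as np

# Made-up example: each remembered situation is a point in the index space.
memories = {
    "morning at the café":   np.array([0.9, 0.1, 0.2]),
    "thunderstorm commute":  np.array([0.1, 0.8, 0.7]),
    "office coffee break":   np.array([0.8, 0.2, 0.1]),
}

def activate(current, k=2):
    """Return the k stored situations closest to the current one."""
    ranked = sorted(memories, key=lambda name: np.linalg.norm(memories[name] - current))
    return ranked[:k]

# A new situation that looks a lot like coffee activates the coffee memories first.
print(activate(np.array([0.85, 0.15, 0.15])))
```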
