Tuesday, August 26, 2008

The frozen model of reality

As soon as you move out of the coding trenches as a developer, you'll be confronted with the need to model reality. The basic idea of software development is to take a snapshot of reality at a point in time, one that works for the current set of requirements, and automate it into some program or system. The larger the system, the more difficult the freeze becomes, and thus the more frequent the change requests and the people trying to push things through.

The problem with freezing reality, obviously, is that as soon as you defrost it again, you need to find a new equilibrium, a new frozen point in time: the target. How easily you can move from one frozen state to another depends on the flexibility of your architecture, the clarity of the solution to information analysts, and everything and everyone in between who can put a foot in the door or wants anything to do with it. Successful implementations tend to attract a lot of success-share wannabes; poor projects tend to attract a lot of attention from people who know how it's done and people who refuse to work that way.

Anyway, the problem with freezing reality during development is that you're in a stop-and-go process. You can never change continuously towards some new hybrid form of operation; whether the step is small or large, you come to a complete stop, wait, and then determine your new direction. This is confusing to many, since we tend to differ in opinion on the best direction to take afterwards, or even on what the frozen state looks like, or on what the soon-to-be-defrosted state should look like.

The freeze is required because development and coding are the formalization of rules and processes at a particular moment in time. Software engineering is therefore basically freezing reality as little as we need to, but as much as we should, to make us more effective from that point onwards. Luckily we still employ people who use the software and can think up their own ways around us to keep the business growing and moving forward, otherwise we'd really be in the ... .

Anyway, a very simple conclusion could be that any formalization of reality for a period of time is therefore subject to inflexibility, in the same way that a formal representation of anything is just a particular perspective on that thing at a point in time (and fashion?).

If you look into the problems of software engineering, the actual problems we still encounter nowadays have not changed a single bit, but the technologies have. Every new technology comes with the promise that "modeling" the enterprise with it will make the enterprise more flexible, yet it always starts with formally chunking up reality so that a program can actually carry out some work. It's true that technologies have made it easier and faster to develop programs, mostly through the 1GL, 2GL, 3GL and 4GL phases, and we're getting closer to the language of the business through better methods of specification, but we're not changing the real method behind it: the formalization of reality at a point in time.

In order to make machines really intelligent, we would have to exceed our own limitations, since we depend on formalized, static models to comprehend anything, and it is from those models that we build our solutions.

As an example, I imagine an artificially intelligent system that doesn't attempt to formally describe an object once and for all, but reshapes and refines the object as more information becomes available to describe it, and that can probably even split an object into different ones as soon as a major axis of separation (a category) becomes available to make them distinct.

Depending on who you ask, people give you different answers about trees. Some people know only one tree: "the tree". Other people know their pine trees from their oak trees, and yet other people can identify trees by their leaf silhouette. So somewhere, somehow, we tend to categorize items further as soon as we get swamped by too many symbols in the same category. Luckily we're very apt at finding the specific differences between types, and especially what they have in common, so that the categories have valid descriptors.

But... one does not grow up thinking of a tree as a pine or an oak; we think of it as a tree first, and only later is it identified as a tree of a specific type. We can use smell and vision to identify a pine tree, even touch. The combination of smell and vision is a very powerful identifying function; vision alone or smell alone might still throw us off.
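To make that a bit more concrete, here's a minimal, hypothetical sketch in Python (the Concept class, its feature dictionaries and the splitting rule are all invented for illustration, not a real system): a concept starts as a single vague category like "tree" and only splits into finer sub-concepts once an observed feature offers a clear axis of separation.

from collections import defaultdict

class Concept:
    """A category that refines itself as observations arrive, splitting into
    sub-concepts once a feature shows distinct values that separate them."""

    def __init__(self, label):
        self.label = label
        self.observations = []
        self.children = {}   # sub-concepts keyed by (feature, value)

    def observe(self, features):
        # Refine the concept with every new observation instead of freezing it once.
        self.observations.append(dict(features))
        self._maybe_split()

    def _maybe_split(self):
        # Collect the distinct values seen per feature across all observations.
        seen = defaultdict(set)
        for obs in self.observations:
            for feature, value in obs.items():
                seen[feature].add(value)
        # Any feature with more than one observed value becomes an axis of separation.
        for feature, values in seen.items():
            if len(values) > 1:
                for value in values:
                    key = (feature, value)
                    if key not in self.children:
                        self.children[key] = Concept(f"{self.label}/{feature}={value}")

tree = Concept("tree")
tree.observe({"leaf": "needle", "smell": "resin"})   # a pine-like observation
tree.observe({"leaf": "lobed", "smell": "none"})     # an oak-like observation
print(sorted(c.label for c in tree.children.values()))
# ['tree/leaf=lobed', 'tree/leaf=needle', 'tree/smell=none', 'tree/smell=resin']

Nothing in this toy decides which axis matters most; the point is only that the description is reshaped as observations accumulate rather than fixed up front.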

Now, to make this post a bit specific to Python: Python has a mechanism called "pickling" that is used to persist objects to storage. In artificial intelligence, a neural network typically operates in separate phases: a phase where it learns and adjusts according to feedback, and a phase where it executes and recognizes new input symbols. We're too afraid to let the network run in both modes at once, either because we can't predict the result or because we're not using the right model for it even then. The human brain, though, is constantly stimulated by recognizing something it saw before that was alike, slightly different, or still "exactly the same". I believe we're constantly adjusting our images, smells and other input signal patterns as we experience them.
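Pickling actually illustrates the frozen model quite literally. Below is a small sketch using Python's standard pickle module (the RunningAverage class is just a made-up stand-in for something that learns): the pickled copy stays frozen at the moment it was serialized, while the live object keeps adapting.

import pickle

class RunningAverage:
    """A trivial stand-in for a learned model: a running average that keeps
    adjusting as new input arrives."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def learn(self, value):
        self.total += value
        self.count += 1

    def predict(self):
        return self.total / self.count if self.count else 0.0

model = RunningAverage()
for sample in (2.0, 4.0, 6.0):
    model.learn(sample)

# "Freezing" the model: pickle serializes its state exactly as it is right now.
frozen = pickle.dumps(model)

# The live object can keep adapting...
model.learn(100.0)

# ...but the thawed copy is stuck at the snapshot, like any frozen model of reality.
thawed = pickle.loads(frozen)
print(model.predict())   # 28.0
print(thawed.predict())  # 4.0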

But without a suitable learning process on the outside, connected to a working learning machine inside such an intelligence, it won't go far. In this view, I'm considering that the human brain has the innate capacity to learn about symbols, but needs to actually experience the symbols to eventually make sense of how they interact and relate to one another. It's not uncommon to meet somebody with entirely different viewpoints, experiences or interpretations of their environment than your own.

Thus, the problem in A.I. as considered here isn't necessarily so much how we define a proper ontology (since that ontology is also based on the current snapshot and perspective we have, our "frozen model"); it's how we define a machine that knows nothing about symbols, but has the capacity to learn how symbols (patterns from different inputs) relate to one another, and perhaps even the ability to reason with those symbols, though that's taking it another step further.
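As a toy illustration of that last idea, here is a hypothetical sketch of a machine that starts with no ontology at all and only records which symbols it experiences together, so whatever structure it "knows" emerges from co-occurrence rather than from an upfront model (again, the class and the example symbols are invented for this post):

from collections import defaultdict
from itertools import combinations

class SymbolRelator:
    """Knows nothing about symbols in advance; it only learns which symbols
    tend to occur together in the same experience."""

    def __init__(self):
        self.cooccurrence = defaultdict(int)

    def experience(self, symbols):
        # Observe a set of symbols occurring together (e.g. in one scene).
        for a, b in combinations(sorted(set(symbols)), 2):
            self.cooccurrence[(a, b)] += 1

    def related(self, symbol, top=3):
        # Return the symbols most often experienced alongside the given one.
        scores = defaultdict(int)
        for (a, b), count in self.cooccurrence.items():
            if a == symbol:
                scores[b] += count
            elif b == symbol:
                scores[a] += count
        return sorted(scores, key=scores.get, reverse=True)[:top]

machine = SymbolRelator()
machine.experience(["pine", "needle", "resin-smell"])
machine.experience(["oak", "lobed-leaf", "bark"])
machine.experience(["pine", "needle", "bark"])
print(machine.related("pine"))  # ['needle', 'resin-smell', 'bark']

Counting co-occurrences is obviously nowhere near reasoning, but it shows the direction: the relations are experienced, not defined, so the model never has to be frozen in order to be useful.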

I'd recommend that after this post, you keep track of how much detail you observe in your everyday world in different situations. You'll be amazed how much you 'abstract away' from the environment; just because you see something doesn't mean you notice it. It's also possible that a missing thing creates a strange situation in which you notice that it isn't there, even though you expected it to be, or that without it, the object doesn't look quite the way you expected. Should that change in observation change your reality, and does it? Does it change how you observe the world in the future? Is that the real process of learning: continuous adaptation of reality based on observation?
