Tuesday, August 26, 2008

The frozen model of reality

Once you've gotten out of the coding trenches as a developer, you'll quickly be confronted with the need to model reality. Basically, the idea behind software development is to take a snapshot in time of reality that works for the current set of requirements and then automate it into some program or system. The larger the system, the more difficult the freeze gets, and thus the more frequent the change requests and the people trying to push things through.

The obvious problem with freezing reality is that as soon as you defrost it again, you'll need to find a new equilibrium, a new frozen point in time: the target. The ability to move from one frozen state to another depends on the flexibility of your architecture, the clarity of the solution to information analysts, and everything and everyone in between that can put their foot in the door or wants to have anything to do with it. Successful implementations tend to attract a lot of success-share wannabes; poor projects tend to attract a lot of attention from people who know how it's done and people who refuse to work that way.

Anyway, the problem with freezing reality and development is that you're in a stop-and-go process. You can never move in a continuously changing way towards some new hybrid form of operation; it's always change a little bit or a lot, come to a complete stop, wait, then determine your new direction. Confusing to many, since we tend to differ in opinion on the best direction to take afterwards, on what the frozen state looks like, or even on what the soon-to-be-defrosted state should look like.

The freeze is required because development and coding are the formalization of rules and processes at a particular moment in time. Thus, software engineering is basically freezing reality as little as we need to, but as much as we should, to make us more effective from that point onwards. Luckily we still employ people who use software and can think up their own ways around us to still enable a business to grow and move forward, otherwise we'd really be in the ... .

Anyway, a very simple conclusion could be that any formalization of reality for a period of time is therefore subject to inflexibility, in the same way that a formal representation of anything is just a particular perspective on that thing at a point in time (and fashion?).

If you look at the problems of software engineering, the actual problems we still encounter nowadays have not changed a single bit, but the technologies have. Every new technology comes with the promise that "modeling" the enterprise with it will make the enterprise more flexible, yet it always starts with the formal chunking of reality so that a program can actually carry out some work. It's true that technologies have made it easier and faster to develop programs, mostly through the 1GL, 2GL, 3GL and 4GL phases, and we're getting closer to the business language thanks to methods of specification, but we're not changing the real method behind it: the formalization of reality at a point in time.

In order to make machines really intelligent, we should exceed our own limitations, since we depend on formalized and static models to comprehend something, and from those models we rebuild our solutions.

As an example, I imagine an artificially intelligent system that doesn't attempt to formally describe an object once, but reshapes and refines the object as soon as more information becomes available to describe it, and that should probably even be able to split an object into different ones as soon as a major axis of separation (a category) becomes available to make them distinct.

Depending on who you ask in life, people give you different answers about trees. Some people know only one tree: "the tree". Other people know their pine trees from their oak trees, and yet other people can identify trees by their leaf silhouette. So somewhere and somehow, we tend to categorize items further as soon as we get swamped by too many symbols in the same category. Luckily we're very adept at finding the specific differences between types, and especially what they have in common, so that the categories have valid descriptors.

But... one does not grow up thinking of a tree as a pine or an oak; we think of it as a tree first, and only later is it identified as a tree of a specific type. We can use smell and vision to identify a pine tree, even tactile senses. The combination of smell and vision is a very powerful identifying function; vision alone or smell alone might still throw us off.
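To make that idea a little more concrete, here's a minimal sketch in Python (the class, methods and feature names are all made up for illustration, nothing more): a concept starts out as a single coarse symbol and only splits into sub-concepts once its observations disagree on some feature, the "axis of separation" mentioned above.

```python
# Hypothetical sketch: a concept that starts out coarse ("tree") and only
# splits into sub-concepts once observations disagree on some feature.
from collections import defaultdict


class Concept:
    def __init__(self, name):
        self.name = name
        self.observations = []   # raw feature dicts seen so far
        self.children = {}       # feature value -> sub-concept

    def observe(self, features):
        """Record an observation; split when a feature separates the data."""
        self.observations.append(features)
        axis = self._separating_axis()
        if axis and not self.children:
            self._split_on(axis)

    def _separating_axis(self):
        # A feature whose values differ across observations is a candidate
        # "axis of separation" (e.g. leaf shape for trees).
        values = defaultdict(set)
        for obs in self.observations:
            for key, value in obs.items():
                values[key].add(value)
        for key, seen in values.items():
            if len(seen) > 1:
                return key
        return None

    def _split_on(self, axis):
        for obs in self.observations:
            child_name = "%s/%s" % (self.name, obs.get(axis))
            child = self.children.setdefault(obs.get(axis), Concept(child_name))
            child.observations.append(obs)


tree = Concept("tree")
tree.observe({"leaf": "needle", "smell": "resin"})    # to a child this is just "a tree"
tree.observe({"leaf": "lobed", "smell": "neutral"})   # now "leaf" separates the observations
print(sorted(tree.children))                          # ['lobed', 'needle']
```

The point is not the code itself, but that "pine" and "oak" were never defined up front; the sub-categories fell out of the observations once a distinguishing feature showed up.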

Now, making this post a bit specific to Python: Python has a process called "pickling" that is used to persist objects to storage. In artificial intelligence, a neural network often operates in distinct phases: a phase where it learns and adjusts according to feedback, and a phase where it executes and recognizes new input symbols. We're too afraid to let the network run in those two modes at once, either because we can't predict the result or because we're not using the right model for it even then. The human brain, though, is constantly stimulated by recognizing something it saw before that was alike, slightly different, or still "exactly the same". I believe we're constantly adjusting our images, smells and other input signal patterns as we experience them.
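As a toy illustration of letting those two modes run at once (this is not a real neural network, just a simple counting model with invented names, and the file name is hypothetical too): the model adjusts itself on every piece of feedback it gets while it is recognizing, and pickle is used to "freeze" the current state between sessions.

```python
# Toy sketch: a model that keeps learning while it recognises, and is
# pickled between sessions so the adjusted state isn't lost.
import os
import pickle


class OnlineRecogniser:
    """Counts feature/label co-occurrences; every recognition can also adjust it."""

    def __init__(self):
        self.counts = {}   # feature -> {label: times seen together}

    def recognise(self, features, actual_label=None):
        # Score each known label by how often it co-occurred with these features.
        scores = {}
        for feature in features:
            for label, count in self.counts.get(feature, {}).items():
                scores[label] = scores.get(label, 0) + count
        guess = max(scores, key=scores.get) if scores else None
        # No separate training phase: if feedback arrives, adjust immediately.
        if actual_label is not None:
            for feature in features:
                label_counts = self.counts.setdefault(feature, {})
                label_counts[actual_label] = label_counts.get(actual_label, 0) + 1
        return guess


STATE_FILE = "recogniser.pkl"   # hypothetical file name

if os.path.exists(STATE_FILE):
    with open(STATE_FILE, "rb") as fh:     # defrost the previous state
        model = pickle.load(fh)
else:
    model = OnlineRecogniser()

print(model.recognise({"needle-leaves", "resin-smell"}, actual_label="pine"))  # None on the very first run
print(model.recognise({"needle-leaves"}))                                      # now guesses "pine"

with open(STATE_FILE, "wb") as fh:         # freeze the adjusted state again
    pickle.dump(model, fh)
```

Every run defrosts the previous state, keeps adjusting it while recognizing, and freezes it again, which is about as close to continuous adaptation as this little sketch gets.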

But without a suitable learning process on the outside, connected to a working learning machine inside such an intelligence, it won't get far. In this view, I'm considering that the human brain has the innate capacity to learn about symbols, but needs to experience the symbols to eventually make sense of how they interact and relate to one another. It's not uncommon to meet somebody with entirely different viewpoints, experiences or interpretations of their environment than you have.

Thus, the problem in A.I., considered this way, isn't necessarily so much about how we define a proper ontology (since that ontology is also based on the current snapshot and perspective we have, our "frozen model"); it's about how we define a machine that knows nothing about symbols, but has the capacity to understand how symbols (patterns from different inputs) relate to one another, and perhaps even has the ability to reason with those symbols, but that's taking it another step further.
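A rough sketch of what I mean, again with invented names and no claim to being the way to do it: a learner that starts with zero symbols and only records which input patterns tend to occur together, so that something like "pine" emerges as a cluster of co-occurring patterns rather than a predefined label.

```python
# Sketch of the "no built-in ontology" idea: the machine starts with zero
# symbols and only records which input patterns tend to occur together.
from collections import Counter
from itertools import combinations


class AssociationLearner:
    def __init__(self):
        self.links = Counter()   # (pattern, pattern) -> how often experienced together

    def experience(self, patterns):
        """One 'moment' of experience: a set of patterns arriving at the same time."""
        for a, b in combinations(sorted(patterns), 2):
            self.links[(a, b)] += 1

    def related(self, pattern):
        """Which other patterns has this one co-occurred with, and how strongly?"""
        strength = Counter()
        for (a, b), count in self.links.items():
            if a == pattern:
                strength[b] += count
            elif b == pattern:
                strength[a] += count
        return strength.most_common()


learner = AssociationLearner()
learner.experience({"smell:resin", "sight:needle-leaves", "sight:brown-bark"})
learner.experience({"smell:resin", "sight:needle-leaves"})
learner.experience({"sight:lobed-leaves", "sight:brown-bark"})
print(learner.related("smell:resin"))
# [('sight:needle-leaves', 2), ('sight:brown-bark', 1)] -- "pine" as a cluster, not a label
```

Reasoning with those emergent clusters would be the next step, and that is indeed taking it a lot further than this.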

I'd recommend that after this post, you keep track of how much detail you're observing in your everyday world in different situations. You'll be amazed at how much you 'abstract away' from the environment; just because you see something doesn't mean you notice it. It's also possible that missing things create strange situations in which you notice that something is not there, even though you expected it to be, or that without it, the object doesn't look quite as you expected. That change in observation, should and does it change your reality? Does it change how you observe the world in the future? Is that the real process of learning? Continuous adaptation of reality based on observation?

Tuesday, August 19, 2008

AI ambience or collective web intelligence?

I've been busy a lot with some administrative things. In September classes are starting and I'm commencing with A.I. Registrations are done; just register for a couple of courses and go. There's a new direction for the next year: "Human Ambience". I had quite some interest in intelligent systems, but I do like ambient intelligence as well. I think it's really done well when you don't notice it at first, but then later go: "oh, that was actually pretty cool!".

For the rest, Project Dune is trucking on as usual. I'm preparing a plan for a manager of the company I work for to possibly spend a bit of budget on getting the project a little further, with the help of some colleagues. All open source of course. So that's exciting.

I'm also looking at perhaps providing a Java API to interface with CUDA. The objective is to make CUDA available to Java users. Not sure how to write the "Java" program and compile it for CUDA use though :).

Friday, August 01, 2008

Communication, interpretation and software engineering

A majority of problems in software engineering are due to inefficient social activities, well... beyond poor estimation, poor assessment and poor verification of course. :)

Here's a very nice website I found:

http://www.radio-subterranean.com/atelier/creative_whack_pack/pack.html

It's focused on creativity, but it can be applied to innovation, and some of its ideas can be applied to general software engineering, which is simply the development of applications.

There are still quite a lot of apps out there that have not been designed properly. They start out, but from that point on they're already dead in the water. It's simply no use to extend them further; they may work when they're done, but every effort spent on them simply isn't worth it. "Design-dead", it's called. This can be prevented if you look for sounding boards and more experienced peers. Don't be proud; let others contribute and seek some help in what you're doing.

The other thing is assumptions: assumptions from the business that turn into dreamed-up requirements but eventually turn out to be problems. Security is a typical case.

The latter problem is mostly a lack of root-cause analysis. Rather than directly taking on what a person wants, it's about asking why something is wanted and what the root problem is. Many, many times, people come to you directly with a request to do something which they consider the resolution to their problem. They're not telling you the problem they're having. Keep on asking, identify why something is wanted, and maybe it's possible to come up with a much easier alternative.

Lack of definition is another. It's difficult for some people to understand that others are not experts in the same domain. Spend a bit more time explaining your own process and activities, then see if you can develop a correct crossover between the two domains of expertise for an optimal result.

Well, and further... it's mostly about continuous communication and verification. Going off for half a year and then coming back with an end result is bound to produce a lot of deviations from the original ideas. Maybe the ideas were wrong in the first place, maybe the interpretation. It doesn't matter at that point.