Sunday, February 20, 2011

Philosophy of Mathematics

In my line of work, I'm often confronted with people who face problems and want them resolved. In some of these problems, mathematics is an essential part of the resolution. In a private strand of research, I'm trying to find the real origins of intelligence, and I find myself going back and forth in time, space and mathematics, trying to come up with answers. One of the questions that keeps popping up is whether there may be something incomplete about the language of mathematics itself, rather than a failure to find the right set or sequence of equations and formulas to apply. A lot of research over the past decades in Artificial Intelligence has produced an enormous number of very important and interesting applications, but none of them, in my view, exhibits a strong sense of generality that would allow the same technique to be used over and over again in different situations. Most AI applications require hard-wired components of machinery in order to provide any solution.

This causes one to go back in time to find the origins of mathematics, in search of an answer to whether maths by itself is (eventually) inherently limited, and whether there is a bound for reality, a bound for mathematics, or whether both of these worlds will run parallel forever (complementary forever). Or will the abstract thought developed in mathematics eventually diverge from reality by so much that we end up dabbling in the abstract model itself, finding both problems and solutions within that model, even when there is no physical counterpart subject to the abstract problem?

If you look at civilizations as they develop language, at some point they start to associate a "count" of something with a body part. Some civilizations evolve this further and start using more abstract tokens, like sticks, to count beyond the maximum number of body parts you have. In simple societies, it is unlikely you need more than the number of parts on your body to express a concept (you could also modify how you refer to something). Those that do evolve further eventually use abstract representations to refer to an abstract notion of a "count". This "count" carries no meaning other than our perception that there is some number of things.

The numbers 0-9 as we know them now evolved over a rather long period of time and came to us from India and Arabia. The number system is base-10, which makes manipulating the numbers during calculation relatively easy. For this reason, they were eventually adopted over the Roman numerals that dominated, for example, in Italy at the time.

The reason why numbers became useful is related to trade. The problem with trade is that you need to figure out how much of this to give for how much of that. So the practical problem required some way to refer to some 'count' of this and some 'count' of that, and eventually some notion that 'x' of this equals 'y' of that. Hence, bartering and trading very quickly gave birth to the notion of equality, and thus the equation.

Geometry evolved after that and made it possible to carry out rather precise calculations about areas of land, as well as to carve and build appealing feats of engineering: houses, bridges, and so on. Even though not all forms and shapes could be accurately described at that point, there were already some basic rules that could help in the engineering effort. For these purposes it is already necessary to think in terms of half objects or fractional objects, like a third of a pie or two-thirds of the way across a bridge. Engineering also required the use of unknowns: quantities that must be solved for rather than counted directly.

As you may notice up to this point, the roots of mathematics lie in the manipulation of the 'counts' of things... how many meters, how many pears, how many of that for ... .

Then Newton came along and decided to use equations not just for static problems, but for dynamic problems like apples falling from a tree. Here we also see the introduction of, for example, the differential equation. What a differential equation actually does is chop up some event over a larger period of time into many smaller parts, analyze the behavior in these smaller parts, and develop a new equation that describes how the system changes over time, assuming there is no significant deviation within that system. For a singular system, i.e. one that, within the abstract model it is given, does not interact with any other system, this kind of mathematics is very well suited for solving problems.
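
To make that chopping-up concrete, here is a minimal Python sketch of my own (not from the original post): the fall of an apple is approximated by stepping through many small slices of time. The function name, the step size dt and the constant g = 9.81 m/s^2 are illustrative choices, not anything prescribed by the text above.

def simulate_fall(height_m: float, dt: float = 0.001) -> float:
    """Return the approximate time (seconds) for an object to fall from
    height_m, ignoring air resistance, using simple stepwise updates."""
    g = 9.81          # gravitational acceleration, m/s^2 (assumed)
    velocity = 0.0    # m/s, starts at rest
    position = height_m
    t = 0.0
    while position > 0.0:
        velocity += g * dt         # change in velocity over one small step
        position -= velocity * dt  # change in position over one small step
        t += dt
    return t

if __name__ == "__main__":
    # The exact answer for a 10 m drop is sqrt(2h/g), roughly 1.43 s;
    # the stepwise approximation converges towards it as dt shrinks.
    print(simulate_fall(10.0))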

After Newton, a lot of new discoveries were made, primarily on the side of physics. We not only know how to count cows, trade land and figure out how far away something is; we can also use mathematics to describe movement and how things move through space over time (though with important assumptions). With Newton and the mathematics thereafter, people started to feed more abstract ideas into the language. Take into account that every addition to this language has to be tested against the axioms of the language itself in order to preserve consistency.

The problem with more abstract ideas is that some notions may have no counterpart in reality, or that the elements they describe in theory cannot be measured because they are either too small or too big (infinity is one such example). Just thinking about infinity itself, and whether it exists or not, has driven people mad (literally!).

Newtonian equations work very well in situations where you assume a single disturbance, the rest of the system is free of distortions for a certain length of time, and the system has consistent, homogeneous properties (friction, etc.). But for other systems, even a very simple oscillating pendulum, and even a single pendulum without a second interacting pendulum, the motion can only be computed in practice to some degree of accuracy. That is, the exact solution is the evaluation of a power series whose terms depend mostly on the amplitude of the swing.

So there already exists a rather simple dynamic system for which no exact closed-form answer can be written down, because the power series extends towards infinity. Even if we used a supercomputer to compute the exact result, we would never finish the calculation before the point in time at which we need it. And yet... looking at the real world and looking at a pendulum swing, there's the thing, doing it. What causes this inherent problem in mathematics, where it cannot be applied with 100% accuracy to a pendulum (given some assumed mass), but can be applied very precisely to the exchange of goods on a market?
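
As an illustration of that power series, here is a small Python sketch of my own (not from the original post). It adds the first few amplitude-dependent correction terms to the familiar small-angle period 2*pi*sqrt(L/g); the length, amplitude and number of terms are illustrative choices, and every truncation of the series remains only an approximation.

import math

def pendulum_period(length_m: float, amplitude_rad: float, terms: int = 3) -> float:
    """Approximate a pendulum's period, keeping `terms` correction terms of
    the amplitude series on top of the small-angle formula."""
    g = 9.81  # assumed gravitational acceleration, m/s^2
    small_angle = 2.0 * math.pi * math.sqrt(length_m / g)
    # Coefficients of amplitude^(2n) in the series expansion of the period:
    # 1 + theta^2/16 + 11*theta^4/3072 + 173*theta^6/737280 + ...
    coefficients = [1.0, 1.0 / 16.0, 11.0 / 3072.0, 173.0 / 737280.0]
    correction = sum(c * amplitude_rad ** (2 * n)
                     for n, c in enumerate(coefficients[:terms + 1]))
    return small_angle * correction

if __name__ == "__main__":
    # For a 1 m pendulum swung out to 60 degrees, each extra term nudges the
    # estimate closer, but no finite number of terms gives the exact value.
    for k in range(4):
        print(k, pendulum_period(1.0, math.radians(60), terms=k))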

There is something about mathematics that is still horribly incomplete, and it has something to do with recursion: we need the ability to compute the outcome of recursive processes very, very quickly. The above demonstrates that the mathematical model of the real world is really just a model, and that it breaks down for certain practical uses of mathematics, depending on the complexity of the situation.
