Wednesday, August 30, 2006

Investment of time

I've been thinking about investment opportunities, seeing how companies in Silicon Valley waste their money on the ideas that come up, etc... For investors, what it boils down to is knowing that the money they spend will somehow be recouped later, and in what timeframe that is supposed to happen. The next rant is about software in particular.

For investment to be successful, a certain amount of confidence, and what is called in Dutch "koffiedik kijken" (reading the coffee grounds left in a cup, i.e. fortune-telling), is necessary. This confidence is created mostly through market research: you need to know how the product will be perceived in the market, the value the product will generate for consumers, and how the company expects to extract some part of that value for itself. Other important aspects are capable management that is focused on making a profit (not on the product) and, of course, the skills to bring the product out in the first place.

Investment in Brazil seems to be of a different nature. Money isn't exactly lying around on the streets, and there do not seem to be many private investment funds with money to offer for good ideas.

But next to money investments, there exist other types of investments. One of those is the investment of time. Time investments are personal commitments to pursue a particular product in the belief that, later on, this product is going to be perceived very well in the market. Rather than actually raising money and hiring people, you use a "risk-job market" approach, where you ask people to spend time on a product implementation with the promise that, if the product does take off, they will be remunerated later on. If not, bad luck (but the experience will teach you a lot, if you get into the whys and why-nots).

In this model, the requirements for market assessment still hold, and this makes the contributors to a project personally responsible for knowing how a market works, the psychological factors involved, etc... Because if you don't, you might just spend your time (your personal investment) on the wrong project.

So, the questions in this model of investment are how you get the market information (talk to whom? learn about economics?), which idea is going to be successful, and how you are going to use your current skills (or learn new ones) to contribute to that project. Eventually, you might find yourself constantly chasing new opportunities, once you have found a way to make this a repeatably successful process and previously taken risks have generated recurring revenue for you (taking risks becomes your job, done in normal work time).

Thursday, August 24, 2006

Rational Ignorance

I've just been reading up on some issues of human development, voting and democracies, and found the following term on Wikipedia, which I think is quite applicable to a non-participatory democracy:

http://en.wikipedia.org/wiki/Rational_ignorance

This is just like the way we adopt a movie critic that likes movies that we like. In this way we let the political party, the politician, or the movie critic do the "heavy lifting" for us as we spend our time doing our job, raising our family or just lying on the beach.
That sort of sums it up. I got here through this link:

http://www.cidh.org/relatoria/showarticle.asp?artID=277&lID=1

which is a report on the importance of freedom of expression. One of the most important lines, I strongly believe, is the following:

"b) its instrumental role in enabling individuals to express their claims and call political attention to them, including their economic needs;"
which could be used to redefine poverty not just as a deprivation of materials and goods, but as a deprivation of political power, resulting in a lack of materials and a loss of dignity.

thoughts continue from here....

Wednesday, August 23, 2006

Operating Systems

I've recently been researching how an OS really works and seriously getting my feet wet in assembly for the first time. Assembly isn't really that hard after all :)

So far I've written a bootloader, a 2nd-stage loader and some parts of the kernel, but all without a file system. That's right: file systems, too, all need to be programmed.

The following link is an excellent tutorial on OS development. It shows exactly what the first steps toward getting a kernel ready are. It assumes the use of GRUB to load the kernel, and you can follow the example through on standard Linux, given the ability to mount a single file as a file system on /mnt (you still need to create some kind of image; check out qemu for more information).


For my research, I am mostly using the x86 instruction set, no other processors yet. It all starts out pretty easily. The boot sector of the boot device (512 bytes) is loaded into memory at 0x7C00, and execution starts there. In those 512 bytes, you basically load a 2nd-stage loader from disk into a known memory location, and from there you have more bytes to fiddle around with the system. Any file systems used on the system need to be implemented separately.
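
As a minimal sketch of that first stage, here is roughly what such a boot sector can look like in NASM. The choice of disk sector 2 and the 0x0000:0x8000 load address for the 2nd stage are assumptions for illustration, not the layout my own loader uses:

    ; Minimal boot-sector sketch in NASM. The BIOS loads these 512 bytes
    ; at 0x7C00; where the 2nd stage lives on disk (sector 2) and where it
    ; is loaded (0x0000:0x8000) are assumptions for illustration.
    BITS 16
    ORG 0x7C00

    start:
        xor ax, ax
        mov ds, ax
        mov es, ax
        mov ss, ax
        mov sp, 0x7C00          ; stack grows down from just below the code

        mov ah, 0x02            ; BIOS int 13h, function 2: read sectors
        mov al, 1               ; read one sector
        mov ch, 0               ; cylinder 0
        mov cl, 2               ; sector 2 (sector numbers start at 1)
        mov dh, 0               ; head 0; DL still holds the BIOS boot drive
        mov bx, 0x8000          ; ES:BX = destination buffer
        int 0x13
        jc  hang                ; carry flag set = read error

        jmp 0x0000:0x8000       ; hand control to the 2nd-stage loader

    hang:
        hlt
        jmp hang

    times 510-($-$$) db 0       ; pad to 510 bytes
    dw 0xAA55                   ; mandatory boot signature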

Eventually, the kernel is loaded. This can already be a 32-bit protected-mode kernel, or it can start off in 16-bit mode. I'm assuming 32-bit PM (because that's how GRUB eventually calls the kernel), and then you'll generally do the following (a minimal sketch in C follows this list):
  • Set up something that is called the "Global Descriptor Table". This is a table with pointers to code segments, data segments, etc... in 32-bit PM.
  • Set up an Interrupt Descriptor Table. This is for the processor to indicate to the kernel that some operation on the processor failed (exceptions), and also to indicate to the kernel that some hardware has new data to process. For example, hitting a key on the keyboard generates such an interrupt.
  • Initialize video.
  • Initialize the keyboard.
  • Run a while loop that continuously hlt's the processor until the next interrupt occurs.
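In C, the shape of that sequence could look as follows. The init_* functions are hypothetical placeholders, not code from my kernel:

    /* Sketch of the init sequence above, assuming a 32-bit protected-mode
     * kernel entered from GRUB. The init_* functions are hypothetical
     * placeholders for the actual setup code. */
    void init_gdt(void);      /* build the GDT and load it with lgdt */
    void init_idt(void);      /* build the IDT and load it with lidt */
    void init_video(void);    /* e.g. clear the VGA text buffer at 0xB8000 */
    void init_keyboard(void); /* remap the PIC, unmask the keyboard IRQ */

    void kmain(void)
    {
        init_gdt();
        init_idt();
        init_video();
        init_keyboard();

        __asm__ volatile ("sti");      /* enable interrupts */
        for (;;)
            __asm__ volatile ("hlt");  /* sleep until the next interrupt */
    }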
Now, there is a chip on the motherboard called a PIT, a Programmable Interval Timer. This timer generates interrupts for the processor to handle at a regular frequency. The kernel will typically execute a schedule function in this timer interrupt, see if any task is trying to get CPU time, and if not, hlt the processor until the next scheduled event occurs.
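
A sketch of such a timer interrupt handler, assuming the PIT is wired to IRQ0 through the PIC and that a hypothetical schedule() picks the next runnable task:

    #include <stdint.h>

    /* Write a byte to an I/O port (GCC inline assembly). */
    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    void schedule(void);        /* hypothetical: picks the next runnable task */

    volatile uint32_t ticks = 0;

    /* Called from the IRQ0 stub on every PIT tick. */
    void timer_interrupt_handler(void)
    {
        ticks++;
        outb(0x20, 0x20);       /* EOI: tell the master PIC we handled the IRQ */
        schedule();             /* give the CPU to whichever task is runnable */
    }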

Other hardware may also cause an interrupt to occur, in which case the applications on top of the kernel will need to receive these interrupt events and, for a keypress for example, put the character in one edit box or another, depending on the focus.

Anyway, that's where OS's take off from: the kernel receives hardware interrupts and timer interrupts, schedules tasks into CPU time-slices and divides the CPU between those tasks. How this is actually implemented is mostly up to the OS designer. But it's all not too difficult, really (the concept, that is, not the implementation).

Applications may run in their own "application space" with virtualized memory. That means memory is divided into pages. From the application's point of view, it is running in an area of memory starting at "0x0000:0x0000", which makes development very easy; the memory might of course actually be located somewhere else.
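
On 32-bit x86 with 4 KiB pages, a single page-table entry encodes exactly this virtual-to-physical indirection. A sketch (the helper name is mine, not from any particular kernel):

    #include <stdint.h>

    /* Sketch of one 32-bit x86 page-table entry, assuming 4 KiB pages:
     * the top 20 bits hold the physical frame address, the low bits are
     * flags. The helper name is mine, not from any particular kernel. */
    #define PTE_PRESENT  0x1    /* page is mapped */
    #define PTE_WRITABLE 0x2    /* page may be written */
    #define PTE_USER     0x4    /* accessible from user mode */

    /* The application's virtual page may point at any physical frame. */
    uint32_t make_pte(uint32_t phys_frame)
    {
        return (phys_frame & 0xFFFFF000) | PTE_PRESENT | PTE_WRITABLE | PTE_USER;
    }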

The CPU can be instructed not to let an application access memory outside a certain range. If the application tries anyway, the CPU raises an "access violation" (as you may be familiar with from NT, for example). And voilà! The kernel then needs to handle the error and deal with the offending application. So the OS leans very heavily on the hardware capabilities of the CPU for all those tasks, including divide-by-zero, etc... It's not too difficult to see the concept now.

Some parts of the kernel need to be written in asm and cannot be done in C: for example, loading the GDT mentioned before and the IDT, and some other operations like processor locks in SMP programming and spinlocks (atomic operations).
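
A sketch of the spinlock case, using GCC inline assembly for the atomic xchg (the names are illustrative):

    /* Spinlock sketch using the x86 xchg instruction, which is atomic on
     * memory operands; this is the part plain C cannot express, hence the
     * GCC inline assembly. Names are illustrative. */
    typedef volatile int spinlock_t;

    static inline void spin_lock(spinlock_t *lock)
    {
        int busy = 1;
        do {
            /* Atomically swap 'busy' with the lock word. */
            __asm__ volatile ("xchg %0, %1"
                              : "+r"(busy), "+m"(*lock) : : "memory");
        } while (busy);          /* we own the lock once we read back 0 */
    }

    static inline void spin_unlock(spinlock_t *lock)
    {
        __asm__ volatile ("" ::: "memory");  /* compiler barrier */
        *lock = 0;               /* a plain store releases the lock on x86 */
    }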

What the kernel really does is also up to the developer. You could consider writing an OS that only does databases (a dedicated machine for databases), or you might want to develop a gaming OS (with direct hardware access, for example, and standardized hardware to make things simpler). There are no limits!

Friday, August 11, 2006

Imagining Software Ecology


I'm involved in a university course here in Recife where the objective is to develop software using processes established in an open-source software factory. Our project is developing software for a software cooperative, to assist in its day-to-day operations. See link for more info.

The software cooperative is by itself an interesting kind of company to look at. I have already found various examples of software cooperatives, like Solis, Beluga or CoopSoft. Here is another story about software developed for co-ops and the problems encountered in making it suitable.

What I think the future will bring is some sort of "software ecology". Software itself will eventually be seen as something so stupidly down-to-earth that it doesn't make sense to charge money for it. The thing that matters is what the software does for a user. So, if a user looks into this ecology of software, he may like a certain piece but not be able to use it yet, because one specific need is not met.

Someone else may adapt the software to that need and then return the modifications to the software's project. This kind of ecology would have immense potential for software re-use and software evolution. It is not necessarily a problem that you cannot capitalize on the software itself (though neither do I, in this post, suggest how to make money with this model). A software ecology needs plenty of information around it. Not the SourceForge kind (pull), but indexing, points of contact, etc... Something like a search engine where you get all sorts of details about a particular software project.

Think of software in the future more or less as a living life form (biology). It evolves at a very high rate, it reproduces itself, it can split cells (fork), and it is very important for our society to function nowadays... continuing thoughts here....

Tuesday, August 08, 2006

Game development ideas...

I've written a couple of applications before, some of which were renderers of Quake 3 BSP (QBSP3) levels. The work itself is very interesting for understanding the graphics pipeline and the decisions made in graphics engines.

You may be looking for tools to break up levels, or for examples of how to render QBSP3 levels in Java. There are sites where you can get a lot of information about game programming in general, or more specifically OpenGL programming, in the form of tutorials, and you need to understand the QBSP3 file format.
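
To give an idea of that format: the file starts with a small header and a directory of "lumps" that point at the actual data. A reading sketch in C (the struct names are mine; the "IBSP" magic and version 46 are the Quake 3 values, and a little-endian host is assumed):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Sketch of the start of a Quake 3 BSP file: a 4-byte "IBSP" magic,
     * a version number (46 for Quake 3), then a directory of 17 lumps,
     * each giving the offset and length of one kind of data (vertices,
     * faces, textures, ...). Assumes a little-endian host. */
    typedef struct { int32_t offset, length; } bsp_lump;

    typedef struct {
        char     magic[4];   /* "IBSP" */
        int32_t  version;    /* 46 for Quake 3 */
        bsp_lump lumps[17];
    } bsp_header;

    int read_bsp_header(FILE *f, bsp_header *h)
    {
        if (fread(h, sizeof *h, 1, f) != 1) return -1;
        if (memcmp(h->magic, "IBSP", 4) != 0 || h->version != 46) return -1;
        return 0;   /* h->lumps[] now tells us where each data block lives */
    }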

One of the problems when starting out in game development is not really the ability to visualize a certain level; I've done that myself a couple of times. One of the deterrents to advancing quickly is not having proper editors in place that let you read in the data the way you want it.

One of the things I did not have easy access to was QERadiant, because it was not GPL. I wanted to eventually make changes to it and so forth, which was not easy. This editor will probably only help you if you wish to develop BSP-like game clones. It is used for Half-Life, Quake, Soldier of Fortune, etc...

If you're into terrain-type games, things get much more complex. But maybe these guys can help. :) It clearly shows how partnerships in the gaming industry are becoming more and more important.

One guy at work has some interesting thoughts on game development overall and wonders why games are still developed completely closed. Only certain parts of a game get opened up, level editing and SDKs mostly, but a significant portion of the game (either the engine or the physics system) never gets opened.

What if game development companies attempted to employ an open-source development model? Blizzard in a way has all the resources available to make a lot of this happen, but it controls the world, the servers and the game rules to a very large extent. An active open-source community may help to develop the IP of the company. Giving away control will be a difficult process, because it means opening the system up to the community itself and to competitors. Maybe binding the engine to the servers only, but releasing *all* other tools to the community, would be a nice option.

On the other hand, maybe certain graphics engines could be made available under dual licensing, along with the sources. The bigger titles that make the money become so well-known that it is no effort to trace infringers down and sue them, if necessary. Perhaps the cost of lawsuits compares favorably with the cost of protection.

Here's one example of an online community using the crystalspace engine. It has taken them years to develop the game to the point where it is now, probably also due to the huge refactorings that have to take place when either the engine is refactored or hardware simply evolves.

There is plenty of criticism of current games with regard to FPS clones: very little creativity in developing games, just altered storylines, GUIs, engines & eye candy and game objectives, or a multiplayer mode added on.

One of the more interesting ideas would be to develop a game where you collaborate with others in different ways, in different roles. Looking at Operation Flashpoint, for example, or some of the more recent games, the team may already have people with different capabilities, but teamwork only really develops in clans. The rest just want to "play", I guess.

Imagine a game where you are a truck driver or a boat captain while somebody else (the planner) manages cargo from A to B: everybody joining the game is dropped immediately into a leading position and takes off from there. It could have a 2D interface for planning and 3D interfaces for the different transport vehicles. That would make a very interesting, creative and innovative platform. It doesn't really have to be very complicated (with lots of eye candy), because a rather crude system would do in the beginning.

Mix this with different competing transport companies, and it gets very interesting, especially when bringing in financial markets, mergers and so on. Then the game can be played on many different levels, and it would teach children a lot of interesting things about running businesses and the economy!

Tuesday, August 01, 2006

Autonomy in knowledge work

From Peter Drucker's book:

Knowledge work requires both autonomy and accountability

Demanding of knowledge workers that they define their own task and its results is necessary because knowledge workers must be autonomous. As knowledge varies among different people, even in the same field, each knowledge worker carries his or her own unique set of knowledge. With this specialized, unique knowledge, each worker should know more about his or her specific area than anyone else in the organization. Indeed, knowledge workers must know more about their areas than anyone else; they are paid to be knowledgeable in their fields.

What this means is that once each knowledge worker has defined his or her own task and once the work has been appropriately restructured, each worker should be expected to work out his or her own course and to take responsibility for it. Knowledge workers should be asked to think through their own work plans and then to submit them. What am I going to focus on? What results can be expected for which I should be held accountable? By what deadline?

Knowledge work requires both autonomy and accountability.

ACTION POINT: Write a work plan that includes your focus, desired results and deadline. Submit it to your boss.