Wednesday, October 12, 2005

Data-centric architecture and design

JSP is a great 'scripting' language for quickly creating versatile views in semi-Java code. However, I do recognise a couple of challenges with JSP. Eclipse's parser/lexer helps a great deal, but it still does not prevent problems in the following areas:

- Loss of compile-time checks on getters and setters. It would be ideal if the template pre-generated Java code, so that I wouldn't lose time on silly mistakes. Beans accessed by name are simply not checked at compile time, and time is lost as a result.
- Unclear which objects are available for use. Sensing which APIs can be used inside a big blob of text, without abstract objects or interfaces to guide you, is a bit challenging.
- The variety of available tag libraries, whose tags, attributes etc. always seem to be wrong or incompatible.
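To illustrate the first point, here is a minimal sketch of what "getting Java classes back" buys you: every getter and setter call on a plain bean is verified by the compiler, whereas a name-based property reference inside a JSP only fails at runtime. The class name here is illustrative.

```java
// A plain bean: the compiler verifies every getter/setter call. A typo in a
// JSP expression referencing this same property would only surface when the
// page renders.
public class Customer {
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```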

I've been thinking as well that web applications seem to be written from the perspective of functionality, *what* you want to do with the data, rather than from the perspective of the data itself, with the right functionality wrapped around it. Isn't this exactly the inverted world?

I'm working on a new research framework where data sits at the heart of the system and everything else revolves around it. It does use certain development patterns, like Front Controller etc., and I do use reflection and beans, but the main aim is to shield developers from having to declare "field X" somewhere in the code, which always seems to map incorrectly at some point.

This model revolves around the "DataObject" Java code. I intend to use XDoclet to auto-generate certain chosen functionalities and go back to standard Java classes. I also intend to bind this to Hibernate for persistence. The idea is that every object will have certain 'functions' associated with it. These are more or less generic ( view/edit/add/delete ). By choosing which functions you want for an object, you could theoretically auto-generate the code that we now have to write by hand. Sounds interesting so far?
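A sketch of what such a "DataObject" might look like. The @hibernate.* javadoc tags are standard XDoclet tags that generate the Hibernate mapping; the @functions tag is purely hypothetical and stands in for whatever marker would tell the generator which generic functions ( view/edit/add/delete ) to produce for this object.

```java
/**
 * Sketch of a DataObject. @hibernate.* are real XDoclet tags;
 * @functions is a hypothetical tag of my own for the UI generator.
 *
 * @hibernate.class table="BOOK"
 * @functions view,edit,add,delete
 */
public class Book {
    private Long id;
    private String title;

    /** @hibernate.id generator-class="native" */
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    /** @hibernate.property */
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}
```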

One of the reasons for using JSP is that it makes you very versatile in choosing layout. The reason 'standard visualization' components fail is that they are not flexible enough. My intention is to merge data objects with HTML template snippets and XML attribute settings for specific types. The output is a Java class that becomes part of the compilation cycle. This class lives in a pre-determined package, chosen in the declaration of the Java object. Since generating HTML snippets for the page as a whole becomes too complex, I intend to use AJAX to update only one particular division on the page, much like some kind of "portlet" approach.
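A hypothetical example of what the merge step could produce: an ordinary Java class whose render method combines the HTML snippet with the data object, so the getter call is checked at compile time. All class and method names here are illustrative, not part of any existing generator.

```java
// Hypothetical generator output: the HTML snippet and the data object are
// merged into a compiled Java class. The getter call below is verified by
// the compiler, unlike a property reference inside a JSP.
public class PersonView {

    // Minimal data object, included here only to keep the sketch self-contained.
    public static class Person {
        private final String name;
        public Person(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // Renders one page 'division', suitable for replacement via AJAX.
    public static String render(Person person) {
        return "<div class=\"person\">" + person.getName() + "</div>";
    }
}
```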

Now, through Hibernate, it becomes very easy to 'load' objects, even if the framework has no knowledge of what that object truly is. By declaring a data object in configuration, rather than the connected objects like forms or actions ( forms and actions can be auto-generated ), the framework can find or populate the object based on reflection and its Hibernate description, and pass it on to a business rule system.
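The core trick can be sketched without Hibernate at all: given only a class and a property name from configuration, the framework can populate the object via reflection, much like Hibernate populates mapped objects. This is a minimal sketch under those assumptions; the names are illustrative.

```java
import java.lang.reflect.Method;

// Sketch: the framework knows only a class and a property name from
// configuration, yet can construct and populate the object via reflection.
public class GenericPopulator {

    public static Object populate(Class<?> type, String property, Object value) throws Exception {
        Object instance = type.newInstance();
        // Derive the JavaBean setter name from the property name.
        String setter = "set" + Character.toUpperCase(property.charAt(0)) + property.substring(1);
        for (Method m : type.getMethods()) {
            if (m.getName().equals(setter) && m.getParameterTypes().length == 1) {
                m.invoke(instance, value);
                return instance;
            }
        }
        throw new NoSuchMethodException(setter);
    }

    // Example data object the framework has no hard-coded knowledge of.
    public static class Account {
        private String owner;
        public String getOwner() { return owner; }
        public void setOwner(String owner) { this.owner = owner; }
    }
}
```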

The effective idea is to take all the 'housekeeping' code out of your web application. You'd basically create a couple of HTML snippets that dictate how the layout looks for the majority of your 'data'. Then you'd typically create a dashboard or portal front-end where you place your various applications ( nothing stops you from using AJAX to 'replace' apps where you see fit ). The framework then automatically generates links on pages for common actions based on your template ( using some 'decorator' classes, for instance, this can be taken further ). Attach a property XML file with default settings and possible overrides for "CLASS" or other attributes, and the framework is almost ready.

Since the framework can already construct objects it has no prior knowledge of, validation becomes easier. The validation component receives an object that is already of the proper type ( type checking is already enforced, so why validate on that? ). The object is then passed to the validator, which performs certain checks on length, etc., returning true if OK and false if not. If OK, the business rule executes
( Does the user have access to this functionality? Related to object A, does this setting make sense? etc. ). If that passes too, the object is automatically committed. The rule itself knows what action to take on the object.
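The flow above can be sketched with two small interfaces: the object arrives already typed, the validator checks content, and only then does the business rule run. The interface and class names are illustrative, not an existing API.

```java
// Minimal sketch of the validate-then-execute flow described above.
public class ValidationFlow {

    public interface Validator { boolean validate(Object candidate); }
    public interface BusinessRule { boolean execute(Object candidate); }

    // Returns true only if content validation and the business rule both pass;
    // in the framework, a successful rule would then commit the object.
    public static boolean process(Object candidate, Validator validator, BusinessRule rule) {
        if (!validator.validate(candidate)) return false; // checks on length, etc.
        return rule.execute(candidate);                   // access and consistency checks
    }
}
```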

In the future I intend to add some wizards, so that it becomes even easier to have a particular page flow.

So, data-centric architecture is what I'd like to call this: data is defined, and the actions revolving around operating on that data are already well-known. Why re-write that code constantly? The only thing one needs to do is call certain business rule logic, some validation logic and some auto-generated view logic. The win of having Java classes back is that property getters and setters are verified at compile time, plus hopefully a bit of a speed-up ( not that it is needed ) in generating the pages. Using "Spring", it should become a real fun-fair getting this framework going ( choosing the implementation rule to execute based on other configurations for customisation purposes, etc. ).

Some thoughts on the costs of software

The cost of software can be divided into a couple of categories. It's not necessarily getting better with new emerging technologies ( it's just too confusing ). But by following a couple of principles, you should be able to make up your mind soon enough and go for it. ( Don't get carried away by the enthusiasm of others for the next big thing that is going to change the world ). To categorise some costs in software:

- Procurement ( licensing and support )
- In-house development and maintenance
- Open Source adaptation
- Mixing requirements with technology choice
- Stupidity
- Uncertainty

Each of the above topics has a certain 'silliness' associated with it that may drive costs higher in your company or increase effort for your department.

* Procurement: When I was working for a very large company, the intention was to reduce overall cost by applying a 'one-size-fits-all' strategy in an attempt to reduce licenses. The famous example was taking everybody off MS Outlook ( which came 'free' with the MS Office everybody was using anyway ) and moving everybody over to Lotus Notes ( which started to increase the number of licenses required in that area ).
Another item that I think could increase your costs is the question of 'support'. We like to think that 'if this or that happens', we'd better be able to call somebody to come and fix it. This is why you pay 'ongoing support' for some packages, or a 'license fee' that comes with a certain degree of support.
Another item that I think could increase your costs is the question of 'support'. We like to think that 'if this happens or that', we'd better be able to call somebody to come and fix it. This is why you pay 'ongoing support' for some packages or a 'license fee' that comes with a certain degree of support.

In reality, I've seen that it is very rare for companies to make a big fuss about problems that occur in software, unless it sits at the very core of their business ( like billing services, or where their customers need services from 3rd parties that break down ). In practice, there is not much pressure to get software updated, and they carry on with the bugs anyway. The most bizarre case I heard of was with Lotus Notes and IBM, where a patch for a severe defect in Notes was actually 'licensed' to the customer to solve the defect ( meaning the company started paying to use a patch for software it had already paid for ). That was the weirdest thing I ever heard of.

How this relates to open source is clear. There is no 'widespread confidence' that open source can satisfy needs for ongoing support. But given the above, if the company is never ready to sue the other party anyway, and never applies pressure, why pay for the license/support, and why not take a suitable open-source solution? The main reason is that companies like to feel that they have this stick behind the door...

* In-house development and maintenance may become costly if you're not careful about dimming the enthusiasm of enthusiasts who roam around continuously looking for new tools and technologies to use, for the sole reason that they have never used them before. Dealing with this is a bit tricky, because you certainly do not want to de-motivate staff by refusing every little new thing ( plus the new technology may indeed be promising ). The guiding idea, however, must always be that you're in a profession that must provide a 'best-of-breed' service to other people. This means going with the technologies and ideas that provide the most value to your clients.

Other issues involve policies for re-use: storing regularly used frameworks and making them easily re-usable, and developing an in-house knowledge base where people can find information about APIs and development experience geared towards working in that particular company and that particular culture.

* Open Source adaptation is a great way to reduce costs for certain projects. Especially for building Java software nowadays, it is possible to rely on Ant, Eclipse, XDoclet and various other packages that help get software out there quicker and at better quality. What I see, related to the first point, is that open source software is hardly ever used in 'production systems'. A shame, because many packages are certainly up for it nowadays. The difference here is sensitivity to 'marketing', and the idea that commercial software is of better quality and better serviced than open source. The only difference I see is visibility ( marketing ).

* Mixing requirements with technology choice is an interesting one. This is where an analyst or manager writes a requirements document that states the languages and technologies to be used. This is very wrong, and development should fight it any way they can. Language and technology choices must be made in the architecture or design stage.

The manager may have had good experiences with a certain technology and assumes it will perform equally well in the next project. Mixing requirements with design or technology choice is a bad thing. Always go back to the writers to find out *why* something has to be developed using tech A or B. There may be reasons for it, and there may not be. However, I also advise being careful, because you may be opening a snake-pit of 'politics' and 'excuses'. ( Don't necessarily do this if you're junior; find out from seniors first what their take on the situation is :)

* Stupidity? Well... This is where a new IT manager comes in who may or may not have a degree in computer science. With the new manager in place, the first thing he orders is to move all databases from brand A to brand B, or he specifies that all new development must be done in C++ by standard, or Java. Some standards make sense ( a chart of project size and web/gui/cmdline/back-end against favoured solutions is a good one to develop ). But blindly favouring a particular vendor or technology without any background, and requiring that existing systems be changed to comply, is a very bad thing to go for. ( Trust me, I've seen it happen :)

* Uncertainty comes up in situations where requirements of size and scope are not entirely clear ( how is the app going to be used, and for how many users? ). Depending on your architecture, you may end up with very unpleasant surprises down the road, when you discover that people suddenly thought this would be a great system to roll out to 5 million users rather than the initially intended 10,000. It's better to have this information up-front, because the chart of 'size against favoured solution' would have pointed you to different technologies, development patterns or vendors than the ones you may be working with now. ( A word to developers: be a bit wary of these 'quick' requests and 'no problem' statements. If you're careful and, as is said in Holland, 'look the cat out of the tree', you'll already be more or less prepared for these situations ).

Saturday, October 08, 2005

Rules Engines in Java for your business processes

I'm writing a large application that at some point needs to deal with customisable behaviour. I had intended to use a custom package with standard Java classes that people can customise at will. The interface declaration basically exists, and the implementation of verifiers, state changes and "rules" will be left up to the person trying to use the system.

At the moment I am in doubt between using a rule engine like "Drools" and continuing with what I have. Obviously, the benefit of Java classes is that compilation is more pedantic and there is a better framework for unit testing. On the other side, actually "changing" business rules might be easier and more fun if there are separate 'files' on a file-system somewhere that are loaded by the application at runtime.

While thinking about this, I am considering writing a pre-parser for Drools to make using Drools more accessible. There are a couple of disadvantages that I would like to get rid of:

- Having to know Java import statements
- Uncertainty about execution order if the configuration is messed up ( in direct code this is clearer ).
- Dealing with XML code

If you run a business and you grant a discount voucher of $20 when somebody buys at least 3 books for a minimum value of $100 and the customer has 4 children, how would you express this? This is a question of pre-conditions and post-conditions. In Drools it is very easy for a programmer to express, but a business analyst may have more difficulty finding all the right Java classes to include, the order of execution, etc.

It would be interesting to see exactly how much SQL and 'plumbing' code it is possible to ditch by employing a rules engine in combination with Hibernate... Hibernate can already auto-generate database schemas based on a mapping file. Would it be possible to auto-generate the mapping file too, based on another descriptor file that describes an object and its real-life constraints ( must have an address, must have an email, has a collection of contacts, has a collection of items, has a shopping cart, etc. )?
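For reference, here is the voucher example expressed as plain Java pre-conditions rather than Drools rule syntax: this is the decision logic a rule file would have to encode ( I'm reading "has 4 children" as "at least 4" here, which is an assumption ).

```java
// The voucher rule from the text as a plain pre-condition check.
public class VoucherRule {

    // $20 voucher when: at least 3 books, minimum value $100,
    // and the customer has ( at least ) 4 children.
    public static int voucher(int books, double totalValue, int children) {
        if (books >= 3 && totalValue >= 100.0 && children >= 4) {
            return 20;
        }
        return 0;
    }
}
```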

To meet some of the shortcomings of rule-based systems, merge them with the idea of a petri-net. A petri-net can 'fire' a rule, action or process when its pre-conditions are met. The benefit of petri-nets is that they control concurrency and execution order and know about required pre-conditions, things that I think are quite difficult when using a rule engine stand-alone. Add some time-outs to the net and you've got an execution framework better suited to a business process. This framework could even be added to an EJB container, if one wishes... But what is the use of EJBs if we've got a rules engine? :)
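A minimal sketch of the petri-net idea: a transition fires only when every input place holds a token, which is exactly how the net enforces pre-conditions and execution order. This is a toy single-token version without concurrency or time-outs; the place names are illustrative.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy petri-net: places hold token counts; a transition fires only when
// all of its input places have at least one token.
public class PetriNet {
    private final Map<String, Integer> tokens = new HashMap<String, Integer>();

    public void addToken(String place) {
        Integer n = tokens.get(place);
        tokens.put(place, n == null ? 1 : n + 1);
    }

    // Fires the transition if all pre-conditions ( input places ) are met:
    // consumes one token from each input, produces one on the output.
    public boolean fire(List<String> inputs, String output) {
        for (String p : inputs) {
            Integer n = tokens.get(p);
            if (n == null || n == 0) return false; // pre-condition not met
        }
        for (String p : inputs) tokens.put(p, tokens.get(p) - 1);
        addToken(output);
        return true;
    }

    public int count(String place) {
        Integer n = tokens.get(place);
        return n == null ? 0 : n;
    }
}
```

In the business-process setting, firing "shipped" would trigger the associated rule or action; the net itself guarantees it cannot fire before both "orderReceived" and "creditOk" hold.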

A programmer may also add 'resources' with input parameters to this system that could be EJB's, but not necessarily (and would the EJB use a different instance of a rule-engine? ). For instance, a credit-check system could be a particular resource that an analyst might like to use. Then a business person, when creating a new system, could think of launching a pre-configured standard container, load a new set of business rules on it and start an input connector for new data.

The generated process should become a re-usable sub-process with its inputs and outputs declared, more or less like a white/black-box. This supports the idea of multi-level application overviews: detail is only important when you need to know it. Dependency resolution here should allow someone to manage changes. Software development then doesn't really need to think about NullPointers in plumbing code so much. I hope it would eliminate a lot of the plumbing that is required nowadays.


To summarise: a rules engine instance is loaded with a business process, using files that can easily be changed later. A petri-net is used to control execution, dependencies on other information, process concurrency and time-outs. An analyst creates a new entity. Based on the description of that entity, a Java class file is generated, a Hibernate mapping file is generated, and a full database schema is generated from all these entities together ( relationships are declared within the description ). The only thing still necessary is plumbing code between the 'model' layer and some user interface. Some parts there can also be auto-generated ( Struts Forms <-> data objects, for instance ).
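The generation step itself can be sketched very simply: from an entity name and a property list, emit the Java class source. A real implementation would emit the Hibernate mapping file and schema the same way; everything here is illustrative and all properties are assumed to be strings for brevity.

```java
// Toy code generator: entity description in, Java source out.
public class EntityGenerator {

    public static String generateClass(String entity, String[] properties) {
        StringBuffer source = new StringBuffer();
        source.append("public class ").append(entity).append(" {\n");
        for (int i = 0; i < properties.length; i++) {
            String p = properties[i];
            String cap = Character.toUpperCase(p.charAt(0)) + p.substring(1);
            source.append("    private String ").append(p).append(";\n");
            source.append("    public String get").append(cap)
                  .append("() { return ").append(p).append("; }\n");
            source.append("    public void set").append(cap)
                  .append("(String v) { ").append(p).append(" = v; }\n");
        }
        source.append("}\n");
        return source.toString();
    }
}
```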

Interesting concept and idea! Let me know if you wish to do anything with this, I'd be glad to assist!

Thursday, October 06, 2005

Communication at the heart of project success

Effective communication within a project is not often identified as *the key factor* in the success of a project. This article shows why ineffective communication has such a detrimental impact, and gives a high-level overview of the scope of effective communication.

Effective communication means sharing or stimulating the following:

- Maintaining clear targets and objectives, even when these change
- Good understanding of project scope and constraints
- Feeling of recognition and being part of a team
- Prevention of unnecessary or doubled work
- Knowledge of who is doing what, and a constant reminder of the time remaining
- The notion that the current work undertaken is useful for others
- Sharing experiences, knowledge to support work of others
- Gauging levels of experience and knowledge with others in new teams
- Let others know you are aware of your roles & responsibilities
- Let others know what your roles and responsibilities are

There are severe side-effects on motivation if communication does not take place. When communication is withheld consistently, or openness to communication is reduced, motivation may drop to absolute zero within a few weeks. Drops in motivation result in "chatter" in a project, which is sometimes viewed as "unnecessary communication" by managers.

Lack of *effective* communication results in poor work, which increases the "chatter" and the number of unhappy faces. Unfortunately, some managers respond to these symptoms ( side-effects ) by further reducing communication, in an effort to increase productivity by reducing the chatter. This results in one of two outcomes:

1. The deliverables are made on time, but everybody is unhappy and time is needed to regain confidence.

2. The deliverables are not made on time and everybody is further de-motivated.

So, the best strategy is always: try to find the actual problem points in the communication line. Some questions may help identify them:

- Is there lack of direction?
- Is there uncertainty about how or what to undertake?
- Are there (slumbering) conflicts within the team?
- Do meetings happen and are these effective?
- Is email used effectively to resolve problems?
- Are people highly opinionated or ego-centric, and do they hardly ever come to a conclusion? ( concessions )

When the problem point(s) is (are) identified, attack the source where it exists.

Always attempt to maintain a team that communicates openly, as frequently as needed, shares information as efficiently as possible, and communicates with respect for one another.

The Mind Is Radial

This article challenges you to think about how we generally go about learning new material. What I shall call "linear learning" is the process of reading a book from start to finish. What I shall call "radial learning" is the process of reading the table of contents of a book and then, in phases, reading more and more details of each section.

Practical, hands-on learning, for instance, has more radial properties, since the material is researched or explored from a problem perspective. The problem plane for the student contains certain 'gaps' that require resolution.

When people read a book with the objective to learn or to consume information, the information is grouped into chapters, and the book as a whole is the content in a particular context or perspective. It is impossible to read a book in the blink of an eye, so how can one most effectively read it with the objective of understanding and remembering all of its contents?

Consider reading 'radially' into the subject using the table of contents. Skim pages, chapters and paragraphs to get a high-level overview of the contents. Write down any questions that develop while reading, and do not get immediately distracted: focus on the current context, but also remember not to dig too deep into the detail. As time progresses, the information you have becomes more complete, and you will be much better able to index the information inside your brain, leaving unimportant details out, as they can be re-read later at any time without effort.

The way the brain stores information approximates a kind of network: related information gets connected, which is how we recognise similarities. One cannot think of the brain as a bag of unordered information into which a hand digs to retrieve an item when we need it. To 'populate' the brain more effectively, follow the same pattern in which the network is organised.

This can also be applied to other topics, like conversations. Focus on global objectives, global points before digging into detail.

Try it and let me know :).