In my long career as a software engineer I have been exposed to many different concepts and ideas, and I have seen several different ways in which some of these concepts and ideas can be implemented. Some of these implementations have been neat and efficient while others, it is sad to say, have been sloppy and barely workable. The authors of these second-rate offerings seem to think that they can churn out any old code without any regard for old-fashioned quality and pride of workmanship. Their only criterion seems to be
"if it works then it must be OK".
These dinosaurs fail to realise that just because something works it may not actually be 'OK' in today's fast-moving world, especially when compared against a rival offering. A steam engine may actually work, but how does it compare against the internal combustion engine? This is the difference between 'State of the Art' and 'State of the Ark'. A system may be described using all the latest space-age buzzwords, but if the implementation is still based on stone-age practices then I am afraid that it is still a stone-age system.
One such concept that I have concentrated my efforts on recently has been the 3-Tier architecture, and I have published several articles on the subject.
To summarise, software is composed of code that can be broken down into 3 distinct areas: the Presentation layer (the user interface), the Business layer (the business rules), and the Data Access layer (all communication with the physical database).
In a 1-tier system all the code for these 3 areas is contained within a single component (sometimes referred to as a 'Fat Client'), whereas in a 3-tier system the code for each area is contained within a separate component. This separation is what gives the 3-tier architecture its advantages - each layer can be developed, modified and even replaced independently of the others.
I have actually used it to build software in UNIFACE, so I can vouch for its efficacy. I have also converted my demonstration application which contains 150+ components from 2-tier into 3-tier. Both variations can be downloaded from my Building Blocks page.
Whenever I see somebody else's implementation of the 3-tier architecture I cannot help but compare it against my own. I particularly look at the development infrastructure, 'under the hood' as it were, as this gives an instant indication of the abilities of the development staff.
In recent years I have come across several implementations of the 3-tier architecture which I can only describe as less than satisfactory. Some of them have been so bad that they have actually made me recoil in horror. In these cases I feel the need to search out a scrubbing brush, rubber gloves and a bottle of disinfectant. It amazes me that people can produce work of such poor quality and actually get away with it! Where did these people learn their craft - from the back of a packet of breakfast cereal? They seem to be saying "I have implemented the 3-tier architecture - how clever I am". As far as I am concerned, if they have made any of the mistakes which are documented in this article then it is anything but their cleverness that is being broadcast.
It amazes me when I see that some developers actually feel it necessary to create their own components for the Data Access layer. This shows a total lack of understanding of the following points:
The purpose of the Data Access layer is to perform all communication with the physical database so that if the database management system (DBMS) is ever changed it is only this layer that need be modified.
With UNIFACE the procedure for changing the DBMS from one to another is ridiculously simple - edit your assignment file, find the path statement to your database and change the database driver mnemonic. For example, an assignment file may contain the following:
$DEF = SYB:mydata|myname|mypass
which indicates that all entities on the $DEF path are to be directed to the SYBASE (driver mnemonic 'SYB') database. If this is changed to the following:
$DEF = ORA:mydata|myname|mypass
this indicates that the database is now ORACLE (driver mnemonic 'ORA').
At runtime whenever any I/O operation is requested for a database table the UNIFACE kernel will use the path-to-driver statement in the assignment file to identify which database driver is to be used for that operation.
This means that all the functions of the Data Access layer within UNIFACE are already being carried out by the database driver, therefore there is absolutely no necessity to create your own component for this purpose. If you do create your own component then the effort will be a complete waste of time as the method to change from one DBMS to another will still be to change from one database driver to another.
All Compuware documentation on the 3 Tier architecture clearly shows that components need only be constructed for the Presentation and Business layers - the requirements of the Data Access layer are already satisfied by the database driver.
One of the rules of the 3-Tier architecture is that a Presentation Layer component must not perform any I/O with the physical database - it must always go through the Business Layer, which in turn goes through the Data Access layer. Some developers seem to think that this means that you cannot use the application model in Presentation Layer components as this is what is used to communicate with the physical database. I totally disagree with this idea. It is not the application model itself that communicates with the physical database but the read command in the <read> trigger, the write command in the <write> trigger, and the delete command in the <delete> trigger.
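In outline, the standard contents of these three model triggers are nothing more than the corresponding Proc commands:

    ; <read> trigger - performs the actual database read
    read

    ; <write> trigger - performs the actual database update
    write

    ; <delete> trigger - performs the actual database delete
    delete

Strip these commands out and a component can still be built around the model entities, but it can no longer touch the physical database.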
With that in mind it is possible to write a Presentation Layer component that does not communicate with the physical database, but which still uses an application model, by employing any of the following methods:
- manually erase the read, write and delete commands within all Presentation Layer components;
- work from a copy of the application model so that the read, write and delete commands can be removed from the relevant triggers;
- maintain a completely separate application model for use by the Presentation Layer, in which these triggers are empty.
Of the 3 methods I personally prefer the last one as it provides much more flexibility. The 2 application models I use are called the BAM (Business Application Model) and the PAM (Presentation Application Model). Chief among the advantages of having a separate PAM is that its entity definitions do not have to be identical to those in the BAM.
This may not sound like much to the uninitiated, but it does provide the ability to mask any peculiarities in the physical database from the Presentation Layer components. Thus the PAM can provide a more logical view of the data when building components that interact with the user. In UNIFACE the differences between the two models can be handled by mapping options that can be defined in the Document Type Definitions (DTDs) for each XML stream.
Having shown that it is eminently possible to use an application model in the construction of Presentation Layer components without violating the rules of the 3-tier architecture, I will now explain why I think it is a stupid idea to build such components without the benefit of an application model.
The difference between a 3rd Generation Language (3GL) and a 4th Generation Language (4GL) is that a 4GL uses an application model while a 3GL does not. In the first 4GL that I came across in the early 1980s the application model was actually called a Data Dictionary. This provides a single central definition of all the files and database tables that can be used by the application:
- it identifies all the relationships between tables;
- it identifies all the fields within each table;
- it identifies which fields are part of any primary and candidate keys;
- it identifies the data type of each field (string, number, date, time, etc);
- it identifies the maximum size of each field;
- it identifies whether a field is optional or mandatory;
- it can also identify valid patterns for data entry.
With the UNIFACE application model it is also possible to define default trigger code for both entities and fields, which is automatically inherited by any component in which they are incorporated.
Application models and data dictionaries were not invented as an academic exercise - they were created in order to boost developer productivity. The ability to have a single central definition of all the elements within the database, which can then be referenced by any number of components, means that each definition is entered once, is guaranteed to be consistent wherever it is used, and need only be amended in one place.
UNIFACE components need entities and fields in order to work. If you do not access entities and fields from an application model then you must create dummy entities and dummy fields within each component otherwise they simply will not work. By manually creating a local copy of what should already exist in a central application model you are greatly increasing both the time required to develop each component and the possibility for errors due to incorrect definitions.
There are other major disadvantages when using dummy entities instead of application model entities: UNIFACE knows nothing about their keys, their relationships, or their field characteristics.
This means that the developer cannot use the standard functionality within UNIFACE to deal with these objects. He must instead insert his own code, which again takes additional time and opens up the possibility for more developer-induced errors creeping into the software.
This wasted effort is bad enough if it is confined to components within the Presentation Layer, but just imagine how much worse it would be if the components within the Business Layer were not allowed to access the application model either. This actually happened in one company where the developers had erroneously decided that they needed their own components for the Data Access layer. They decided that only the Data Access components could use the application model, therefore the Presentation and Business components had to be constructed using dummy entities and fields. I saw it but I could not believe it!
By not using that part of UNIFACE which makes it a 4GL these people are effectively cutting themselves off from the productivity gains of using a 4GL in the first place. In that case why are they bothering with a 4GL at all? Why don't they complete the backward step and go back to using an old-fashioned 3GL for their development? If they cannot use the development language in the way that it was designed to be used then why are they wasting their time using it?
Before the introduction of XML streams in version 7.2.06 one common way of communicating data between components was to use lists. Refer to Working with Lists for more details. Using this facility it is possible to copy all the values from an occurrence into a list with a single command, such as:
putlistitems/occ $$list, "source_entity"
The list can be passed to another component, which can then transfer all the values to an occurrence with a single command such as:
getlistitems/occ $$list, "target_entity"
Although with this mechanism it is possible for the source and target entities to have different names, there are the following restrictions:
- each command deals with no more than a single occurrence of a single entity;
- field values are matched purely by name, so the field names in the target entity must be the same as those in the source entity;
- any field in the target entity which does not have a matching entry in the list will be unaffected by getlistitems.
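As a sketch of the round trip (the entity names here are invented for illustration):

    ; source component: copy the current occurrence of ORDERS into a
    ; list of 'name=value' pairs, e.g. ORDER_ID=123;STATUS=OPEN
    putlistitems/occ $$list, "ORDERS"

    ; receiving component: move the values into the current occurrence
    ; of ORDER_COPY - only fields whose names match an entry in the
    ; list are affected, all other fields are left untouched
    getlistitems/occ $$list, "ORDER_COPY"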
The only way to deal with multiple occurrences and multiple entities is to construct a compound list, or a 'list within a list'. This is basically a two-step process:
1. copy each occurrence into a simple list, using putlistitems/occ $$list1, "entity".
2. add that simple list as a single entry to the compound list, using putitem/id $$list2, "id", $$list1.
The value for 'id' needs to be unique, so it should be constructed using something like the entity name plus the occurrence number.
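Putting the two steps together, the construction code might look something like this (a sketch only - the entity name is invented, and I assume here that setocc sets $status negative when the requested occurrence does not exist):

    variables
      string  vOccList, vCompound
      numeric vOcc
    endvariables

    vOcc = 1
    setocc "ORDER_LINE", vOcc        ; position on the first occurrence
    while ($status > 0)
      ; step 1: copy the current occurrence into a simple list
      putlistitems/occ vOccList, "ORDER_LINE"
      ; step 2: add it to the compound list under a unique id
      ; (entity name plus occurrence number)
      putitem/id vCompound, "ORDER_LINE.%%vOcc%%%", vOccList
      vOcc = vOcc + 1
      setocc "ORDER_LINE", vOcc
    endwhile

The receiving component must then loop through the compound list with a complementary getitem/getlistitems sequence to unpack each occurrence.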
Whatever code is used to construct the compound list must have a complementary set of code to deconstruct it in the receiving component. Although this method can deal with multiple occurrences and multiple entities, all of this construction and deconstruction code has to be written, and kept in step, by hand.
There are other problems with lists arising from the situation where a list is first constructed in the Business layer, passed to the Presentation layer where the user can modify or delete existing occurrences and insert new occurrences, then returned to the Business layer where all these changes must now be applied to the database:
- There is nothing in the list processed by getlistitems to identify whether an occurrence was originally retrieved from the database or has been created by the user. This means that extra code is required in the business component to determine whether an insert or an update is required.
- An occurrence which the user has deleted no longer exists in the component, so it cannot appear in the output of the putlistitems command. The methods I have seen to deal with this situation have all required yet more hand-written code.
Compared to the complications that can arise from using lists, the use of XML streams is a walk in the park. Consider the following points:
- The XML stream automatically contains an $OCCSTATUS attribute for each occurrence to indicate whether it originally came from the database, was inserted by the user, or has been marked for deletion.
- A single RETRIEVE/RECONNECT statement will automatically work out which occurrences are to be modified, which are to be inserted, and which are to be deleted.
As you can see, the use of XML streams to pass data between the Presentation and Business layers instantly eliminates the deficiencies of lists, and also throws in a few more advantages for good measure. Those who still insist on using lists are therefore making a rod for their own backs.
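By way of comparison, a sketch of the XML-based round trip (assuming the xmlsave and xmlload statements introduced in version 7.2.06; the entity name and the DTD name 'ORDERS_DTD' are invented for this example):

    ; Business layer: retrieve the occurrences, then convert the
    ; component data into an XML stream for the Presentation layer
    retrieve "ORDERS"
    xmlsave XML_STREAM, "ORDERS_DTD"

    ; Business layer, when the stream comes back after user editing:
    ; load it, then reconnect the occurrences to the database - each
    ; occurrence is inserted, updated or deleted according to its
    ; $OCCSTATUS
    xmlload XML_STREAM, "ORDERS_DTD"
    retrieve/reconnect "ORDERS"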
If you wish to see sample code that demonstrates how XML streams can be used in a 3-tier system then you can either download the small sample that goes with my article 3 Tiers, 2 Models, and XML Streams, or you can download a complete demonstration application from my Building Blocks page.
When data is passed from the Business layer to the Presentation layer it may need more than just entities and field values. It may also need to populate drop-down lists, radio groups, combo boxes or list boxes. Each of these widgets requires what is known as a VALREP list, which is an associative list of 'name=value' pairs. Some developers seem to think that each Presentation Layer component must obtain all the data that it requires in a single operation on a Business layer component. I totally disagree with this idea. Not only does such a rule not exist in any book on the 3-tier architecture that I have read, but it also creates unnecessary problems:
- One team I saw decided to assemble everything into a single data stream (VALREP lists included), then load it all into the component. Countless hours were spent on trying to develop the code so that the Presentation Layer component could correctly identify that the data stream contained a number of VALREP lists, plus the ability to then load each VALREP list into the correct field. Just when they thought they'd got it right another situation arose which forced them back to the drawing board.
- Each component needed its own code to extract each VALREP list from the single data stream and load it into the correct field. There was no standard procedure to do this, so each component had to be hand-crafted in order to work properly. The amount of time this took was unbelievable, especially when you took into consideration the time taken to detect and fix any bugs in all this hand-crafted code.
In a traditional 2-tier system it is common practice to have a separate operation on a specialist component which will return the specified VALREP list in a form that can be instantly loaded into the relevant field. This mechanism works just as well in a 3-tier system, so I see no benefit in changing it. In my architecture, where I use XML streams, I would need additional code to embed the VALREP list inside the XML stream, then even more code to extract it from the XML stream in the Presentation layer so that it could be loaded into the correct field. To my mind, if I already have 5 lines of code that perform the function adequately, why should I waste time trying to replace them with an alternative method that may require 50 or more lines? What is the cost? Where is the benefit?
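For the record, the handful of lines I am referring to looks something like this (the component name 'LOOKUP_SVC', the operation 'GET_VALREP' and the field name are all invented for illustration; $valrep attaches the list to the field):

    variables
      string vList
    endvariables

    ; ask the specialist business component for the required list
    activate "LOOKUP_SVC".GET_VALREP("ORDER_STATUS", vList)

    ; attach the 'name=value' pairs to the field so that the
    ; drop-down list is populated
    $valrep(ORDER_STATUS) = vList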
In my long career I have worked with many different languages, different philosophies, and different development environments. I have seen how intelligently-applied rules and standards can be an aid to the software development process while others can be a positive hindrance. As far as I am concerned the primary rule of software development is
"Do what it takes to create the best software as quickly as possible." All other rules are subservient to this rule. In my quest for perfection I am not afraid to question any rule, nor am I afraid to modify or even ignore what I consider to be a bad rule in order to squeeze the last ounce of productivity from whichever software development tool I am using. I have done this with COBOL, I have done this with UNIFACE, and I am currently in the process of doing it with PHP.
Other developers seem deathly afraid of modifying or even questioning the rules they are given. They seem to think that these rules are cast in stone and must be followed with blind, unquestioning obedience. They don't have the ability to investigate other methods which may be more flexible, more reliable, easier or faster to implement. They may complain that their development process is slow and cumbersome, but they don't know what is causing it or how to fix it.
Some developers seem to have the knack of taking a perfectly plausible concept, then they screw it up with a poor implementation. They invent methods that deliberately slow down the development process and make it much more complex than it should be, then they wonder why it takes longer than they expected. The first thing they usually blame is the development language when in fact the major fault lies between their own ears. When I tell such people that I can create software 10 times faster than they can simply by following a different set of rules and a different set of methods they just don't believe me. When I actually demonstrate my development techniques to them their first remark is, "But we don't do it that way!" They fail to understand that it is "their way" of doing things which is causing the problem. They are following the way of the dinosaur when they should be following the way of the dynamo.
2nd January 2003