
Evolution of the RADICORE framework

Posted on 1st June 2022 by Tony Marston
Introduction
Starting with COBOL
Switching to UNIFACE
Switching to PHP
How using OOP increased my productivity
Conclusion
References

Introduction

I did not pull the design of my RADICORE framework out of thin air when I started programming in PHP; it was just another iteration of something which I first developed in COBOL in the 1980s and then in UNIFACE in the 1990s. I switched to PHP in 2002 when I realised that the future lay in web applications and that UNIFACE was not man enough for the job. PHP was the first language I used which had Object-Oriented capabilities, and despite the lack of formal training in the "rules" of OOP I managed to teach myself enough to create a framework which increased my levels of productivity to such an extent that I judged my efforts to be a success. In the following sections I trace my path from being a junior programmer to being the author of a framework that has been used to develop a web-based ERP application that is now used by multi-national corporations on several continents.

Starting with COBOL

When I joined my first development team as a junior COBOL programmer we did not use any framework or code libraries, so every program was written completely from scratch. As I wrote more and more programs I noticed that more and more code was being duplicated. The only way I found to deal with this when writing a new program was to copy the source code of an existing program which was similar, then change all those parts which were different. It was not until I became a senior programmer in a software house that I had the opportunity to start putting this duplicated code into a central library so that I could define it just once and then call it as many times as I liked. Once I had started using this library it had a snowball effect in that I found more and more pieces of code which I could move into a library subroutine. This is now documented in Library of Standard Utilities. I also took advantage of an addition to the language by writing a Library of Standard COBOL Macros, which allowed a single line of code to be expanded into multiple lines during the compilation process. Later on my personal programming standards were adopted as the company's formal COBOL Programming Standards.

By using standard code from a central library each programmer became more productive, as they had less code to write, and it eliminated the possibility of making some common mistakes. One such mistake was allowing the definitions of certain data buffers, such as those for formsfiles and database tables, to drift out of line with their physical counterparts. This was taken care of with the COPYGEN utility, which took the external definitions and generated text files which could then be added to a copy library so that the buffer definitions could be included into the program at compile time.

One of the first changes I made to what my predecessors had called "best practice" was to change the way in which program errors were reported, to make the process "even better". Some junior programmers were too lazy to do anything after an error was detected, so they just executed a STOP RUN or EXIT PROGRAM statement. The problem with this was that it gave absolutely no indication of what the problem was or where it had occurred. The next step was to display an error number before aborting, but this required access to the source code to find out where that error number was coded. The problem with both of these methods was that any files which were open, and this includes the database, formsfile and any KSAM files, would remain open unless they were explicitly closed in the code. This posed a problem if a program failed during a database update which included a database lock, as the database remained both open AND locked. This required a database administrator to log on and reset the database. The way that one of my predecessors solved this problem was to insist that whenever an error was detected in a subprogram, instead of aborting right then and there it should return control back up the stack to the starting program (where the files were initially opened) so that they could be properly closed. This error procedure was also supposed to include some diagnostic information to make the debugging process easier, but it had one serious flaw. While the MAIN program could open the database before calling any subprograms, each subprogram had the data buffers for each table that it accessed defined within its own WORKING-STORAGE section, and when a subprogram exits its WORKING-STORAGE area is lost. This was a problem because if an error occurred while accessing a database table the database system inserted some diagnostic information into that buffer, but when the subprogram returned control to the place from which it had been called this diagnostic information was lost. This to me was unsatisfactory, so I came up with a better solution: a single error-reporting utility which had access to the communication areas of all open files.

This error report showed what had gone wrong, and where, using all the information that was available in the communication areas. As it had access to the details of all open files it could close them before terminating. The database communication area included any current lock descriptors, so any locks could be released before the database was closed. Because of the extra detail now included in every error report this single utility helped reduce the time needed to fix bugs.

Up until a particular project in 1985 it was common practice to develop each new application from scratch. This involved creating a single program which had numerous subprograms to deal with each user transaction (use case). This then required a hierarchy of menu screens which listed the options that were available in the application and allowed the user to choose one. As the screen size was fixed the number of options on each page was limited. An option could be either a user transaction or another sub-menu. This required that each menu page be hard-coded, which meant that all the menu pages had to be defined and compiled up front, and any change to these menus required a change to the code, which in turn had to be recompiled and then re-linked into a new version of the program file. Although the application had a logon screen which only authorised users could pass through, every user always saw every option that existed on a menu screen, which meant that they could select it. A simple Access Control List (ACL) identified those options which a particular user was allowed to access, but this was only checked after an option was activated. This led to the annoying situation where a user could see an option but was only told after selecting it that it was disallowed.

This all changed in 1986 when a new client insisted on a system of dynamic menus where the menu screens could be changed on-the-fly and where the user could only see those options which he was allowed to access. This required a completely new design, so I spent a few hours on the following Sunday designing a database structure which could support all these requirements. I began coding it on the Monday, and by Friday it was complete. The main points of this design were:

The client was satisfied that this design met all his requirements, but over time the following enhancements were made:

After that particular client project had ended my manager, who was just as impressed with my efforts as the client, decided to make this new piece of software the company standard for all future projects as it instantly increased everyone's productivity by removing the need to write a significant amount of code from scratch. This piece of software is documented in the following:

Switching to UNIFACE

In the 1990s my employer switched to UNIFACE, so I rebuilt this framework in that new language. I first rebuilt the MENU database, then rebuilt the components which maintained those tables. After this I made adjustments and additions to incorporate the new features which the language offered. UNIFACE uses a proprietary Integrated Development Environment (IDE) with a database Repository consisting of an Application Model from which you could build Form Components for each use case. Inside the Application Model you defined entities (tables), fields (columns), keys (indexes) and relationships. You then ran a process which exported an entity's details from the Application Model and generated the corresponding CREATE TABLE script. Using the built-in Graphical Form Painter (GFP) you drew an area on the form which you then related to an entity in the Application Model, and within this area you painted fields which belonged to that entity. After compiling the form you could run it, using the standard function keys to read, write, update and delete occurrences (rows) in that database table. You never had to write any SQL queries as they were generated automatically by the built-in database driver, with a separate driver for each supported DBMS.

While an advantage with UNIFACE was that you did not have to write any SQL queries, the disadvantage was that you could not write any SQL queries. It was not possible to perform any JOIN operations, so instead of a single query such as SELECT ... FROM tableA LEFT JOIN tableB ON (...) you had to define an entity frame for TableB inside the entity frame for TableA in the Graphical Form Painter. At runtime UNIFACE would read a set of occurrences (rows) from TableA, then for each of those rows it would issue a separate read operation for one row from TableB. This is known as the N+1 SELECT problem and is grossly inefficient, and the only solution was to create a database view for the complex query and define that view in the Application Model. This allowed UNIFACE to treat the view as if it were an ordinary table, so a single read operation would cause the database to execute a JOIN without UNIFACE being aware of it.
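
As a minimal illustration of the difference, here is the N+1 pattern expressed in PHP using mysqli (assuming $link is an open connection; the table and column names are hypothetical), followed by the single JOIN query which retrieves the same data in one round trip:

    <?php
    // the "1" query: read a set of rows from TableA (orders)
    $orders = $link->query("SELECT order_id, customer_id FROM orders");
    while ($order = $orders->fetch_assoc()) {
        // the "N" queries: one extra read of TableB (customers) for EVERY row in TableA
        $result   = $link->query("SELECT name FROM customers WHERE customer_id = " . (int)$order['customer_id']);
        $customer = $result->fetch_assoc();
        echo "{$order['order_id']}: {$customer['name']}\n";
    }

    // whereas a single JOIN retrieves the same data in one round trip
    $rows = $link->query("SELECT o.order_id, c.name
                            FROM orders o
                            LEFT JOIN customers c ON (c.customer_id = o.customer_id)");
    ?>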

I started with UNIFACE Version 5 which supported a 2-Tier Architecture with its form components (which combined both the GUI and the business rules) and its built-in database drivers. UNIFACE Version 7 supported the 3-Tier Architecture by moving the business rules into separate components called entity services, which then allowed a single entity service to be shared by multiple GUI components. It also introduced non-modal forms and component templates. There is a separate page on the component templates which I built into my UNIFACE Framework.

Whilst my early projects with UNIFACE were all client/server, in 1999 I joined a team which was developing a web-based application using recent additions to the language. Unfortunately this was a total disaster as their design was centered around all the latest buzzwords, which seemed to exclude "efficiency" and "practicality". It was so inefficient that after 6 months of prototyping it took 6 developers a total of 2 weeks to produce the first list screen and a selection screen. Over time they managed to reduce this to 1 developer for 2 weeks, but as I was used to building components in hours instead of weeks I was not impressed. Neither was the client: shortly afterwards the entire project was cancelled as they could see that it would overrun both the budget and the timescales by a HUGE margin. I wrote about this failure in UNIFACE and the N-Tier Architecture.

Switching to PHP

I was very unimpressed with the way that UNIFACE produced web pages, so I decided to switch to a more effective language. I chose PHP as it was designed specifically for building web-based database applications, and I could download and install all the software I needed - PHP, Apache and MySQL - onto my home PC for free. I read the PHP manual, found some online tutorials, proved that it could do what I wanted it to do, then began to rebuild my entire development framework. I had only two objectives to start with:

I found that implementing the 3-Tier Architecture using objects was surprisingly easy, as programming with objects is automatically 2-tier to begin with. This is because after creating a class for a business/domain component with properties and methods you must also have a separate component which instantiates that class into an object and then calls whatever methods are required. The business/domain object is what I now refer to as a Model in my infrastructure, while the component which instantiates it and calls its methods is what I refer to as a Controller. In my first implementation I had methods within each table class, inherited from an abstract table class, to build and execute the SQL queries, but when MySQL version 4.1 was released I needed a mechanism to use either the original "mysql_" functions or the improved "mysqli_" functions. All I had to do was create a separate database class for each set of functions, then modify the abstract table class so that the methods which used to perform the database access themselves instead created the relevant database object and passed control to it. This made it very easy later to add support for additional DBMS engines, starting with PostgreSQL, then Oracle and later SQL Server.
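
A minimal sketch of this arrangement, using hypothetical class and method names rather than the actual RADICORE code, might look like this:

    <?php
    // each supported DBMS gets its own class with an identical interface
    interface DML {
        public function select(string $table, string $where): array;
    }
    class dml_mysqli implements DML {
        public function select(string $table, string $where): array {
            // build "SELECT * FROM $table WHERE $where" and execute it via mysqli_*
            return array();
        }
    }
    class dml_pgsql implements DML {
        public function select(string $table, string $where): array {
            // the same query built and executed via the pg_* functions
            return array();
        }
    }
    // the abstract table class no longer performs the database access itself,
    // it creates the relevant database object and passes control to it
    abstract class Default_Table {
        protected $tablename;
        protected $dbobject;
        public function __construct(string $dbms = 'mysqli') {
            // supporting another DBMS engine means adding another dml_* class
            $this->dbobject = ($dbms == 'pgsql') ? new dml_pgsql() : new dml_mysqli();
        }
        public function getData(string $where): array {
            return $this->dbobject->select($this->tablename, $where);
        }
    }
    ?>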

I developed a separate component which extracted the data from a Business layer component, copied it into an XML document, loaded a nominated XSL stylesheet, then ran an XSL transformation to produce a complete web page. This enabled me to split the Presentation layer into two parts, giving me a Controller and a View, which you should recognise as parts of the Model-View-Controller design pattern. I originally started with custom XSL stylesheets for each HTML screen, but after some refactoring I managed to produce a small set of reusable XSL stylesheets which can be used to generate any HTML screen that the application requires. I currently have 12 XSL stylesheets which I have used to create over 3,500 web pages in my main ERP application. This means that I do not have any PHP code in my software which spits out HTML. The generation of all HTML is not performed until the very last step of each page controller, after the model object(s) have finished processing.
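
A minimal sketch of that final step, assuming PHP's standard DOM and XSL extensions and a hypothetical stylesheet name, would be:

    <?php
    // copy the data extracted from the Business layer into an XML document
    $doc  = new DOMDocument('1.0', 'UTF-8');
    $root = $doc->appendChild($doc->createElement('root'));
    foreach ($fieldarray as $field => $value) {
        $root->appendChild($doc->createElement($field, htmlspecialchars((string)$value)));
    }

    // load one of the small set of reusable XSL stylesheets
    $xsl = new DOMDocument();
    $xsl->load('std.detail1.xsl');

    // run the transformation - this is the only point at which HTML is produced
    $proc = new XSLTProcessor();
    $proc->importStylesheet($xsl);
    echo $proc->transformToXML($doc);
    ?>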

Among the other early design decisions which I made were the following:

  1. As UNIFACE created a separate entity service in the Business layer for each entity (table) in the Application Model it seemed perfectly obvious to me that I should follow suit by creating a separate class for each table in my database.

    I have been told by several OO "experts" that having a separate class for each database table is not good OO, but as they can neither identify any practical problems with this approach nor prove that any alternatives are in any way "better" I choose to ignore them.

  2. As the only operations that can be performed on a database table are Create, Read, Update and Delete (CRUD) I decided to support these four methods in each table class using a standard set of method names - insertRecord(), getData(), updateRecord() and deleteRecord().
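
    Whatever the table, the calling code is then always the same, as in this hypothetical example:

    <?php
    $dbobject   = new Person();                           // a concrete table class
    $fieldarray = $dbobject->insertRecord($_POST);        // Create
    $fieldarray = $dbobject->getData("person_id='AJM'");  // Read
    $fieldarray = $dbobject->updateRecord($_POST);        // Update
    $fieldarray = $dbobject->deleteRecord($_POST);        // Delete
    ?>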
  3. As I was used to passing complete rows of data from one component to another I decided against the idea of defining each database column as a separate property in each class, and used a single property called $fieldarray instead. I thus avoided the need for a collection of getters and setters for each column. This single property could also contain as many or as few columns as I liked, and as many or as few rows as I liked. When I saw the first example of using getters and setters I thought to myself "What a stupid idea! Why should I waste time unpicking the $_POST array into its component parts and then inserting them one column at a time when I can pass in the entire array in one fell swoop?"
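
    The difference is easy to demonstrate with a hypothetical example (the setter names are equally hypothetical):

    <?php
    // with a single $fieldarray property the entire $_POST array
    // goes into the object in one fell swoop
    $dbobject = new Person();
    $result   = $dbobject->insertRecord($_POST);

    // with getters and setters it must first be unpicked into its
    // component parts, one line per column, and every column added
    // to the table means another line of code here
    //     $dbobject->setFirstName($_POST['first_name']);
    //     $dbobject->setLastName($_POST['last_name']);
    //     $dbobject->setStarSign($_POST['star_sign']);
    ?>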
  4. Instead of building a single Controller component to handle all the possible use cases which may be required for a particular database table I decided to create a set of smaller controllers, each of which handles just a single use case. The reasoning for this is described in Component Design - Large and Complex vs. Small and Simple.
  5. As each form component in UNIFACE could contain more than one entity frame, which enabled it to display data from different tables, I followed this idea by allowing my screens to have more than one zone, with names such as OUTER and INNER, or OUTER, MIDDLE and INNER. This meant creating Controllers and XSL stylesheets to handle each combination of zones. I was later told that I was violating the rules of MVC because a Controller was only allowed to communicate with a single Model, but as my accuser could not provide any evidence that this rule was actually documented anywhere I ignored him. He could not argue that it was impossible as I had already proved that it was not. He could not argue that it created problems as I had already proved that it did not. Just because he had not seen it done before did not mean that it should not be done.
  6. After I had finished the components for the first table I copied the code into a new set of components to deal with a second table. As you can imagine this produced a great deal of duplicated code, which is not a good idea. I needed a way to replace the code that was being duplicated with code that could be shared, and this is where I made use of inheritance. I created an abstract table class to contain all the sharable code, then modified each concrete table class to inherit from this abstract class. I then moved code from each concrete table class into the abstract table class until all that was left was code that was unique to each table. This was limited to the following:
  7. Every database programmer knows that all user input must be validated before it is inserted into or updated in the database, otherwise the operation will fail. The correct procedure is to validate it in your code and to send it back to the user with a suitable error message if it is wrong. Due to the fact that each table class knows the structure of the database table which it supports (the $fieldspec array), and that all the application data exists in a single $fieldarray property, I found it exceedingly simple to create a standard validation object which verifies that the data for each column in $fieldarray conforms to its specifications in the $fieldspec array. This handles what I call primary validation.
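
    A minimal sketch of the idea, with hypothetical specification keys, looks like this:

    <?php
    // the structure of the table, one entry per column
    $fieldspec = array(
        'person_id'  => array('type' => 'string', 'size' => 8,  'required' => true),
        'first_name' => array('type' => 'string', 'size' => 30, 'required' => true),
        'start_date' => array('type' => 'date',   'required' => true),
    );

    // one standard routine can now validate the data for ANY table
    function validateData(array $fieldarray, array $fieldspec): array {
        $errors = array();
        foreach ($fieldspec as $field => $spec) {
            $value = isset($fieldarray[$field]) ? $fieldarray[$field] : null;
            if (!empty($spec['required']) && ($value === null || $value === '')) {
                $errors[$field] = 'A value is required for this field';
            } elseif (isset($spec['size']) && strlen((string)$value) > $spec['size']) {
                $errors[$field] = 'This value exceeds the maximum size of ' . $spec['size'];
            }
        }
        return $errors;  // an empty array means the data passed primary validation
    }
    ?>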
  8. Secondary validation is needed to handle all non-standard business rules, those which are specific to a particular table and which cannot be handled by standard code within the framework. This requires that the processing flow as contained in the abstract class be interrupted in order to execute some custom code in the concrete class. To deal with this I first identified all the places where the standard processing flow could be interrupted, then inserted a call to a customisable method at each one. Although this method is defined in the abstract class it does not contain any code, so when it is called it does nothing. In order to execute some arbitrary code this empty method needs to be copied into a concrete class where it can be filled with whatever code is necessary. At runtime the method in the subclass will override the method in the superclass. I later discovered that what I had done was to implement "hook" methods, which are an integral part of the Template Method Pattern.
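
    A minimal sketch of the mechanism (the method and class names are illustrative):

    <?php
    abstract class Default_Table {
        public function insertRecord(array $fieldarray): array {
            $fieldarray = $this->_cm_pre_insertRecord($fieldarray);   // hook
            // ... the invariant validation and INSERT processing goes here ...
            $fieldarray = $this->_cm_post_insertRecord($fieldarray);  // hook
            return $fieldarray;
        }
        // empty by default, so unless it is overridden it does nothing
        protected function _cm_pre_insertRecord(array $fieldarray): array {
            return $fieldarray;
        }
        protected function _cm_post_insertRecord(array $fieldarray): array {
            return $fieldarray;
        }
    }

    class Order extends Default_Table {
        // a custom business rule for this one table only
        protected function _cm_pre_insertRecord(array $fieldarray): array {
            if (empty($fieldarray['order_date'])) {
                $fieldarray['order_date'] = date('Y-m-d');
            }
            return $fieldarray;
        }
    }
    ?>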
  9. After I had created different sets of Controllers for the first two tables I noticed that yet again there was a lot of duplicated code. Each Controller called a particular set of methods on a particular Model (table) object, where that object was instantiated from a hard-coded class (table) name. By comparing the two scripts I noticed that the only difference was the class/table name, so I wondered if it was possible to instantiate an object from a class name held in a variable instead of a hard-coded literal. I quickly discovered that it was, and this enabled me to create scripts such as the following:

    (a) A component script:

    <?php
    $table_id = "person";                      // identify the Model
    $screen   = 'person.detail.screen.inc';    // identify the View
    require 'std.enquire1.inc';                // activate the Controller
    ?>
    

    (b) A controller script:

    <?php
    ...
    require "classes/$table_id.class.inc";
    $dbobject = new $table_id;
    $fieldarray = $dbobject->getData($where);
    ...
    ?>
    

    This is only possible because every table class contains the same set of methods which are inherited from the abstract table class. This means that the same method call will produce different results depending on which table object it is performed upon. Not only do these controller scripts not contain any hard-coded table names, but by loading and retrieving all application data via a single $fieldarray variable I also avoid any need for hard-coded column names. This is a perfect example of loose coupling as I can make changes to the contents of $fieldarray without having to modify any method signatures.

    A really observant programmer should see that I can re-use this controller with ANY table in my database simply by changing the value of the $table_id variable in the component script. This shows how polymorphism can be used in conjunction with dependency injection to provide large amounts of reusable code.
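
    For example, this hypothetical component script reuses exactly the same controller for a different table:

    <?php
    $table_id = "product";                     // identify a different Model
    $screen   = 'product.detail.screen.inc';   // identify a different View
    require 'std.enquire1.inc';                // activate the same Controller
    ?>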

  10. With UNIFACE the developer has to maintain all entity (table) details in the Application Model before being able to generate the CREATE TABLE scripts to build the physical database, but I decided to do the reverse. This was not an up-front decision as it evolved over time. In my very first attempt I passed the entire contents of the $_POST array through the Model and into the Data Access Object (DAO), but the INSERT query failed because the $_POST array contained the SUBMIT button, which did not exist as a column on that table. To get around this I added a variable called $fieldlist into which I coded a list of field (column) names, which then enabled me to exclude any field names which were not in this list. I then added code to validate that the value for each field was consistent with the column definition in the table, but after doing this for several tables I realised that I was writing the same code over and over again. As the range of datatypes which can be applied to a column is quite limited I realised that I could change the simple $fieldlist array into a $fieldspec array, a multi-dimensional array where the first level contained the field name and the second level contained an array of specifications for that field. It was then quite straightforward to write a routine which checked that each field's value in $fieldarray matched its specifications in the $fieldspec array. After a while of building this array in each table's class by hand I realised that I could automate it, just as I had done with the table definitions in my COBOL COPYGEN utility.

    Instead of extracting a table's details from the database schema and writing them directly to a disk file I decided to store them in an intermediate database called a Data Dictionary as I wanted to give myself the opportunity to possibly enhance this data before I made it available to any PHP scripts. After designing the dictionary database I then wrote an IMPORT function to read from the database schema and write to the dictionary database. This was followed by an EXPORT function to transfer data from the dictionary database into disk files which could be accessed by the application. I decided to write the dictionary data into a separate <tablename>.dict.inc file so that I did not have to overwrite the <tablename>.class.inc file. Although the class file is initially empty except for the constructor method it will usually be amended afterwards by the developer to include any "hook" methods for various business rules. This means that the EXPORT function will not overwrite any existing class file, so if a table's structure is amended in the future both the IMPORT and EXPORT functions can be run again without affecting the class file. This means that the software's view of the database structure can always be kept in sync with the physical structure, thus removing the need for that abomination called an Object-Relational Mapper (ORM).
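
    As a hypothetical illustration, the two generated files for a PERSON table might look like this:

    <?php
    // person.class.inc - generated once, then amended by the developer
    class Person extends Default_Table {
        public function __construct() {
            $this->dbname    = 'mydb';
            $this->tablename = 'person';
            // pull in the column specifications exported from the Data Dictionary
            require 'person.dict.inc';
            $this->fieldspec = $fieldspec;
        }
    }
    ?>

    <?php
    // person.dict.inc - regenerated by EXPORT whenever the table's structure changes
    $fieldspec['person_id']  = array('type' => 'string', 'size' => 8,  'required' => true);
    $fieldspec['first_name'] = array('type' => 'string', 'size' => 30, 'required' => true);
    $fieldspec['last_name']  = array('type' => 'string', 'size' => 30, 'required' => true);
    $primary_key = array('person_id');
    ?>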

    I have been told by several OO "experts" that having my database schema known to my domain layer is wrong, but without that knowledge how on earth could the Domain/Business layer validate any user data before it is handed over to the Data Access layer? To me that is nothing more than a nonsensical rule which causes more problems than it solves, so I choose to ignore it.

  11. Even as far back as my COBOL days, after having written numerous programs to perform different sets of operations on different database tables, I began to see different patterns emerge. After writing a program which operated on TableA it was quite common to be asked to write another program which performed exactly the same operations on TableB. The only option available at the time was to take the source code for TableA, copy it, and manually change all the references to TableA and its column names into references to TableB and its column names. There was very little code that could actually be shared as the data buffers for both the screen structure and the table structure had to be hard-coded. Even though these buffers could be generated by the COPYGEN utility the code to move data between these buffers had to be written by hand.

    UNIFACE was slightly different in that the database table buffer was automatically supplied from the entity definition in the Application Model, and as the same Application Model entities were referenced in the Graphical Form Painter it was easy for UNIFACE to automatically perform the mapping between the screen structure and the database structure. While the screen structure still had to be built by hand, UNIFACE Version 7 introduced a new facility called component templates which allowed common screen structures to be defined in a central catalog so that new components could be built using one of these templates as a starting point, thus removing the need to write the entire component from scratch.

    In my PHP framework I was able to improve on the idea of component templates by creating my own set of Transaction Patterns which has given me the ability to create working tasks (user transactions or use cases) without having to write any code. This all came about when I examined a large number of transactions which performed similar operations on different database tables and broke down the similarities into the following categories:

    Because of other design decisions which I had made (against the advice of so-called OO "experts", I might add) and the reusable components which I had written to implement those decisions, I found it easy to supply a different reusable component for each of those categories:

    These also represent components in the Model-View-Controller design pattern:

    After having built my Data Dictionary to generate the table class files and the table structure files it was a simple step to add another function to generate the scripts required for each Transaction Pattern. This means that after creating a new table in my database I can run the RADICORE framework and press buttons to perform the following:

    This can all be done in 5 minutes without writing a single line of code - no PHP, no HTML, no SQL. While the initial tasks are basic, because every transaction in the framework uses the Template Method Pattern they will only execute the common invariant code which is defined in the abstract table class. Custom code can be added later into any of the "hook" methods after they are copied from the abstract superclass into the table subclass.

Some of my critics tell me that RADICORE is not a proper framework, but as it meets every description I have encountered (see What is a Framework? for details) I regard their criticisms as being totally invalid.

How using OOP increased my productivity

As shown in What is Object Oriented Programming (OOP)? there have been numerous and varied descriptions of what OOP is supposed to be and what benefits it is supposed to provide, such as:

The power of object-oriented systems lies in their promise of code reuse which will increase productivity, reduce costs and improve software quality.
...
OOP is easier to learn for those new to computer programming than previous approaches, and its approach is often simpler to develop and to maintain, lending itself to more direct analysis, coding, and understanding of complex situations and procedures than other programming methods.

As far as I am concerned any use of an OO language that cannot be shown to provide these benefits is a failure. Having been designing and building database applications for 40 years using a variety of different programming languages I feel well qualified to judge whether one language/paradigm is better than another. By "better" I mean the ability to produce cost-effective software with more features, shorter development times and lower costs. Having built hundreds of components in each language I could easily determine the average development times:

How did I achieve this significant improvement in productivity? I did not follow the rules of Object Oriented Programming (OOP), Object Oriented Design (OOD), Domain Driven Design (DDD) or any other formalised design process, nor did I try to implement any Design Patterns or follow the SOLID principles. Why not? Simply because I did not know they existed. Instead I learned to combine my prior experience of building dozens of different database applications with the principles of OOP which I read from the PHP manual and various online tutorials. Despite not following other people's "best practices" I managed to produce a development framework which provided large amounts of reusable components.

Another boost for my productivity came from the fact that with web applications all user screens are nothing but text files which do not have to be compiled before being sent to the client's browser. With both COBOL and UNIFACE each screen had to be constructed by hand using special software before it could be compiled, and it was this compiled version that was sent to the client device. Text files, on the other hand, do not need special software as any programming language can output a string of text. They also do not need to be compiled as the client's browser will accept a string and then render it into a user-friendly screen using its own internal logic. The only problem with this was that at the turn of the century different browsers could produce different output from the same HTML document, and this led to what became known as the browser wars.

Each web page is constructed from a complete HTML document, and as this has to be constructed from scratch each time it is possible for the same program to produce a different screen each time it is run. This means that the structure of each page it produces can be dynamic and varied instead of being restricted to a fixed compiled structure where the only difference is the data content. Although each page has to be constructed from scratch it is possible to shorten this process by using a template engine. This usually requires the page structure to be defined in a template, and at runtime data is loaded into this template before the result is sent to the browser. As I had already become familiar with XSL stylesheets, which can transform an XML document into HTML, I decided to stick with software that was written to international standards rather than something produced by a novice programmer. This turned out to be a VERY good idea. Originally I created a separate stylesheet for each web page, but with some refactoring I created a small set of 12 (twelve) reusable XSL stylesheets which can produce any number of different web pages. While the screen structure is built into the stylesheet, the identity and position of the application data which goes into that structure is provided separately in a screen structure file, which is then copied into the XML file where it can be processed during the XSL transformation.
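
A hypothetical example of a screen structure file (the key names shown here are illustrative, not the actual RADICORE format):

    <?php
    // which reusable stylesheet provides the page structure
    $structure['xsl_file'] = 'std.detail1.xsl';

    // which table supplies the data for the MAIN zone
    $structure['tables']['main'] = 'person';

    // the identity and position of each field within that zone,
    // expressed as 'column name' => 'screen label'
    $structure['main']['fields'][] = array('person_id'  => 'ID');
    $structure['main']['fields'][] = array('first_name' => 'First Name');
    $structure['main']['fields'][] = array('last_name'  => 'Last Name');
    ?>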

The use of a small number of XSL stylesheets made it possible, in a very short space of time, to make the framework produce mobile-friendly HTML documents which respond to the screen size of the user's device, be it PC, tablet or smartphone, using the BOOTSTRAP library. It took me just 1 (one) month to add this capability to my framework, which meant that all 3,500 web pages in my main ERP application instantly became mobile-friendly.

I am a pragmatist, not a dogmatist, which means that I judge whether my methods are successful based on the results which I achieve. A dogmatist, on the other hand, will insist on blindly following a set of rules, or a particular interpretation of those rules, and automatically assume that their results will be acceptable. This to me is a false assumption. The aim of the game is not to write code which is acceptable to other programmers, it is to write code which is acceptable to the paying customer. If I can achieve significantly higher levels of productivity by breaking someone's precious rules then it proves that their rules are worse than mine. Any methodology which fulfills the promises made for OOP can be regarded as excellent while everything else can be regarded as excrement, poop, faeces, dung or crap.

If you think that my claims of increased productivity are false and that you can do better with your framework and your methodologies then I suggest you prove it by taking this challenge. If you cannot achieve in 5 minutes what I can, then you have failed.

Conclusion

I have always maintained that Programming is an art, not a science, which means that unless you have a basic talent for the artistic endeavour you will never be much good at it. You cannot give a novice a book called "Piano Playing for Dummies" and expect them to become a concert pianist just by following a few simple rules. Similarly you cannot give a novice a list of rules from an "Object-Oriented Programming for Dummies" book and expect him to become an ace programmer. A talented programmer will instinctively write good code while an untalented novice will always produce something that resembles FORTRAN. Following rules blindly is never a good idea as you will end up being nothing more than a Cargo Cult Programmer. Following bad rules blindly is even worse. A talented programmer will only follow those rules which are appropriate for the current task. An untalented novice simply does not have the intellect to determine whether something is appropriate or not, and will end up adding unnecessary complexity to his code.

I rebuilt my previous COBOL and UNIFACE frameworks in an OO language in order to take advantage of the new ideas which that paradigm provided, namely Encapsulation, Inheritance and Polymorphism. As I wrote the code for more and more programs I saw myself writing similar chunks of code over and over again, so I looked for ways to make this code reusable instead of having to duplicate it. I moved this repeating code into an abstract table class which could then be inherited by every concrete table class. Other code I put into subroutines, such as my data validation object which performs all primary validation for every table's data. Secondary validation, which is unique to each individual table, requires custom code in each concrete table class. Because of my use of an abstract class I could easily implement the Template Method Pattern, which then allowed me to call this custom code on demand using a series of "hook" methods. All programs which produce HTML screens do so by calling a standard View object which generates pages using a small library of reusable XSL stylesheets. I did all of this without knowing anything about the "rules" of OOP, which had not been widely publicised at that time. Now that I know these rules exist I choose to ignore them for the simple reason that my code already works, and changing it to follow these rules would make it worse, not better. I am therefore a follower of the rule if it ain't broke, don't fix it. I do not follow the teachings of others just to be consistent; I choose to innovate, not imitate.

My critics keep telling me that my implementation of the principles of OOP is wrong, but how can that be if my levels of productivity are greater than theirs? They say that the proof of the pudding is in the eating, so if a programmer's job is to create cost-effective software with more features in less time and therefore at a lower cost, then any tool which helps him to achieve that aim must be considered better than one which is slow and cumbersome. I have yet to see any proof that the followers of these artificial rules can create a framework which rivals RADICORE and produces anywhere near the same levels of productivity, so I can only conclude that my results are better than theirs.

Here endeth the lesson. Don't applaud, just throw money.


References

The following articles express my heretical views on the topic of OOP:

The following articles describe aspects of my framework:

