Tony Marston's Blog About software development, PHP and OOP

Your rules are RUBBISH!

Posted on 1st April 2018 by Tony Marston

Amended on 10th January 2021

Genuine Rules
Code should be human-readable
Keep It Simple, Stupid! (KISS)
Don't Repeat Yourself (DRY)
Structured code is better than unstructured code
Structured data is better than unstructured data
The structure of the code should follow the structure of the database
Properly layered software is better than giant monoliths
High Cohesion
Loose Coupling
Learning vs Being Taught
Rubbish Rules from mis-interpretation
Hungarian Notation
Technical Primary Keys
Single Responsibility Principle (SRP)
Separation of Concerns (SoC)
Open/Closed Principle (OCP)
Liskov Substitution Principle (LSP)
Interface Segregation Principle (ISP)
Dependency Inversion Principle (DIP)
Composite Reuse Principle (CRP)
Rubbish Rules from mis-invention
You must use OOD before you attempt OOP
Having a separate class for each database table is not good OO
You must not have the same data names in every layer
Your database schema is known to your domain layer
You must use an Object-Relational Mapper
Your design is centered around data instead of functions
You must use the constructor to populate an object with valid data
You must have a separate class property for each table column
You must access each class property using getters and setters
An object can only deal with a single database row
You must validate all data before it is put into the Model
You must validate a value within its setter
In MVC a Controller can only speak to one Model
In MVC a Model can only have one Controller
You must define a collection of Finder methods to access your data
You must create a separate method for each use case
You should use a Front Controller
You are using the wrong design patterns
Encapsulating an entity does not result in a single class
Your code uses global variables
Your code uses singletons
Who decides when a practice is best?
Amendment History


Ever since I started publishing articles on my website I have been told by many people that my approach to OOP is totally wrong, my code is crap, that I am an idiot and that I should go back to school and learn how to do things properly. I flatly refuse to follow this "advice" as I consider it to be seriously flawed, and by following it the quality of my work would be seriously compromised. I had over 20 years of experience in the design and development of enterprise applications before I switched to using an OO-capable language in 2002, and during this time I was responsible for writing the company development standards in two programming languages, COBOL and UNIFACE. The "rules" or "standards" or "best practices" that I followed were based on years of practical experience and not some theories that I read in a book but never implemented.

In my early days I was exposed to different sets of standards in different development teams, so I quickly learned that there was no single set of standards which was applicable to everybody. I also noticed that the details in one team's standards completely contradicted those found in another's, so I knew that these standards were not produced by some global wisdom but by local preferences and prejudices. I also noticed that some of these local preferences were based on misconceptions or outdated ideas, so the idea that they represented "best practice" just did not hold water. I remember a particular instance in 1981 where I found that the project standards were actually preventing me from writing effective software, so I abandoned them and followed my own instincts. When my program was examined as part of a project audit I still remember the verdict by the auditor, a senior programmer from a different team:

This program does not conform to the project standards. However, it is the most well written, well structured and well documented program in the whole project.

If the purpose of standards is to help developers write better software, yet I can produce superior software by ignoring them, then what does that say about the quality of those standards? This told me that some standards are worth following while others should be flushed down the toilet. This is why I question every rule and refuse to adopt it unless its benefits can be proved. Being told "Do it this way because I say so" is simply not acceptable.

Genuine Rules

When I became a team leader and could write my own standards I only included that which could be justified and which could be shown to produce better results than the alternatives. I concentrated on the important points and left out all the tiny nit-picking details. These programming "rules" or "standards" can be summed up as follows:

If you examine the above list very carefully you should notice that they only identify what should be done - the objectives - and not how it should be done - the implementation. An experienced programmer will know that a problem can have more than one solution, and each solution may be implemented in more than one way with varying degrees of effectiveness. Unfortunately there are some people out there who know only one method of implementation, and they teach this as the only method. Not only is this a step too far, but by teaching a method which is not as good as some of the alternatives it is in fact a step in the wrong direction. That is why I ignore the rules of implementation set by others, and that is why my software is more cost-effective than theirs.

Learning vs Being Taught

Here is a quote from someone who prefers to remain anonymous:

Some people know only what they have been taught while others know what they have learned.

In case you don't understand what that means here's a for instance: when I was a young boy and started to become attracted to members of the opposite gender I did not have a father to teach me how to behave, all I had was the advice of other boys who, being older and obviously more experienced in such matters, told me just one thing: When a girl says "no" she really means "yes". After some disastrous attempts at putting this theory into practice I quickly learned that when a girl says "no" she REALLY means "no".

Some people, having been taught only one way to do something, erroneously assume it is the only way, the right way, to achieve that "something", and to do it any other way is wrong or heretical. These are the thoughts of closed-minded dogmatists. In any field of human endeavour, such as electrical/mechanical/software engineering or aircraft/ship/software design, doing things the same old way over and over again does not lead to progress, it leads to stagnation. True progress can only come from trying something different, from rewriting the rules to produce something which is better. Note that there is no definitive definition of "better" as it can mean different things in different contexts:

I switched to using PHP when I wanted a language that made it easier to build web applications following the disaster I encountered with my previous language. As I played with the language I discovered that it supported this alien concept called Object Oriented Programming, so I needed to know what it meant and how I could use it. The description that I read at the time defined OOP as:

Object Oriented Programming is programming which is oriented around objects, thus taking advantage of Encapsulation, Polymorphism, and Inheritance to increase code reuse and decrease code maintenance.

My first task was to take the framework that I had already developed in two prior languages and redevelop it in a third. I worked on the assumption that an OO language is exactly the same as a procedural language except for the addition of encapsulation, inheritance and polymorphism, so it was all down to how well I utilized these features which would tell if my efforts were successful or not. According to the above definition I reckoned that "success" would be directly related to how much reusable code I created, and one method of reusing code that is not available in a procedural language is inheritance. It was immediately obvious to me that in an application that communicates with objects (tables) in a database there should also be corresponding objects (classes) for each of those tables in the software. It was also immediately obvious that code that was common to every database table could be placed into an abstract table class and then inherited by every concrete table class.
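The arrangement described in that paragraph can be sketched as follows. The class and property names here (Default_Table, $fieldspec and so on) are illustrative stand-ins, not the actual framework code - the point is simply that the common behaviour lives once in the abstract class while each concrete class supplies nothing but its name and its structure:

```php
<?php
// Sketch: one abstract table class, one concrete class per database table.

abstract class Default_Table
{
    protected $tablename;          // set by each concrete subclass
    protected $fieldspec = [];     // column definitions, set by each subclass
    protected $fieldarray = [];    // current row(s) of application data

    // Common behaviour is defined ONCE here and inherited by every
    // concrete table class (real code would validate and build SQL).
    public function insertRecord(array $fieldarray): array
    {
        $this->fieldarray = $fieldarray;
        return $this->fieldarray;
    }

    public function getTableName(): string
    {
        return $this->tablename;
    }
}

// Each concrete class identifies nothing but its name and its structure.
class Product extends Default_Table
{
    public function __construct()
    {
        $this->tablename = 'product';
        $this->fieldspec = ['product_id' => ['type' => 'string', 'size' => 8]];
    }
}

class Customer extends Default_Table
{
    public function __construct()
    {
        $this->tablename = 'customer';
        $this->fieldspec = ['customer_id' => ['type' => 'string', 'size' => 8]];
    }
}
```

Adding a new table to the application then means adding one small class, not re-implementing any of the common processing.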

Another method of reusing code is via polymorphism, so I developed a library of page controllers which called methods on an unspecified object where the actual object (class) name was specified at runtime. I then ended up with a collection of table classes which could be called from any controller, and a collection of controllers which could be used with any table class.
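That use of polymorphism can be sketched like this, again with invented class bodies - the reusable controller never names a table class, it calls insertRecord() on whatever class name it is given at runtime:

```php
<?php
// Sketch: a generic page controller working with any table class that
// implements the same method signature. Class names are illustrative.

class Product
{
    public function insertRecord(array $postarray): string
    {
        return 'inserted into product';
    }
}

class Customer
{
    public function insertRecord(array $postarray): string
    {
        return 'inserted into customer';
    }
}

// The controller is written once; the component script supplies the
// class name, so the same controller can be used with any table class.
function controller(string $table_id, array $postarray): string
{
    $dbobject = new $table_id();       // class chosen at runtime
    return $dbobject->insertRecord($postarray);
}
```

The same controller therefore services every table in the application, which is where the claimed reuse comes from.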

My new framework, with its increased levels of reusability, allows me to create new application components at a much faster rate than either of its predecessors, and after using and enhancing it for over 15 years I can safely say that it is not at all difficult to maintain. Because of these results I deem my efforts to be a great success, so imagine my surprise when other developers keep telling me that my methods are wrong! They reach this amazing conclusion not by looking at the results which I achieve but by noticing that my methods are different from theirs. They are under the impression that, as they were taught only one method, it must be the only method, the right method, and anything which does not conform to the "right" method, which does not obey the same set of rules, must automatically be "wrong". My methodology cannot be wrong for the simple reason that it works, and anyone should be able to tell you that anything that works cannot be wrong just as anything which does not work cannot be right. Instead of my methods being wrong because they follow a different set of rules I would say that the superior results which I achieve point to their methods being wrong. Inferior results are produced by inferior methods which in turn are caused by following inferior rules. In other words their methods are RUBBISH!

The success of my implementation I can only put down to the fact that I did not go on any courses to learn OO theory from so-called "experts", instead I simply read the PHP manual which described the concepts and gave some examples showing how these concepts could be implemented. I then combined these new concepts with my previous experience with database applications, and when I finished I found that I could produce working components at a much faster rate than with either of my two previous languages, which is why I judged my implementation of OOP to be a success. Imagine my surprise when later on I was informed that my results were irrelevant as I was not following the "right" rules. When I studied these rules I could see straight away that they had swapped simplicity for complexity, pragmatism for dogmatism, and that by following these rules I would become less productive instead of more. I therefore decided that these rules were rubbish and chose to ignore them.

I can think of two major reasons for this proliferation of rubbish rules - mis-interpretation and mis-invention.

Rubbish Rules from mis-interpretation

By this I mean where someone has taken a documented principle and comes up with an interpretation which goes far beyond what was originally intended. This may be because the original principle was badly phrased, or because the words used have a different meaning in a different context.

Another common problem with misreading a principle is where it is documented as being beneficial in certain circumstances or where appropriate, but where the implementor forgets this and applies it indiscriminately in all circumstances. Sometimes this is because the implementor does not have the brain capacity to work out if the principle is actually appropriate in his circumstances or not, so he takes the lazy option and implements it anyway.

A less common problem is what I documented years ago as the Reverse Imperative Principle (RIP). This is where a programmer must not only follow a rule, he must be seen to be following that rule, so he inserts the relevant code as proof without realising that the code actually adds unnecessary complexity to his program instead of providing a genuine benefit.

  1. Hungarian Notation

    This was invented by a Microsoft programmer called Charles Simonyi, and was supposed to identify the kind of thing that a variable represented, such as "horizontal coordinates relative to the layout" and "horizontal coordinates relative to the window". Unfortunately he used the word type instead of kind, and this had a different meaning to those who later read his description, so they implemented it according to their understanding of what it meant instead of the author's understanding. The result was two types of Hungarian Notation - Apps Hungarian and Systems Hungarian. You can read a full description of this in Making Wrong Code Look Wrong by Joel Spolsky.

  2. Technical Primary Keys

    When defining the primary key for a database table you should first look for a semantic or natural key, and if one cannot be found then use a technical or surrogate key instead. Some novice designers ignore this advice and go straight for a technical key without further thought. Even when you point out those circumstances where a technical key is not the best solution they dismiss your arguments by saying "But technical keys are the rule!"

  3. Single Responsibility Principle (SRP)

    The original definition stated "a class should have only one reason to change", but this was open to so much confusion and mis-interpretation that the author later qualified it by saying the following in Test Induced Design Damage:

    GUIs change at a very different rate, and for very different reasons, than business rules. Database schemas change for very different reasons, and at very different rates than business rules. Keeping these concerns (GUI, business rules, database) separate is good design.

    He followed this in The Single Responsibility Principle by saying:

    This is the reason we do not put SQL in JSPs. This is the reason we do not generate HTML in the modules that compute results. This is the reason that business rules should not know the database schema. This is the reason we separate concerns.

    The three areas of logic - GUI, business rules, database - he identified here were a perfect match for the 3-Tier Architecture. As I was already implementing this architecture in my development framework I decided that no additional work was necessary on my part.

    There are some people out there who think that putting an entity's business rules into a single class is wrong - each individual rule should be in a separate class. What these numpties don't realise is that too much separation would break encapsulation, which is why I refuse to go down that route.

  4. Separation of Concerns (SoC)

    Anybody with more than two brain cells to rub together will know that SRP and SoC mean exactly the same thing. There is no difference between "concerned with" and "responsible for". If Robert C. Martin writes articles in which the two terms are interchangeable then who can argue? Yet there are some numpties out there who think that The Single Responsibility Principle and Separation of Concerns do not mean the same thing and A class encapsulates a single responsibility but not a single concern. These people must be suffering from IDD (Intelligence Deficit Disorder).

  5. Open/Closed Principle (OCP)

    This is worded as "software entities should be open for extension, but closed for modification". The idea is that once completed, the implementation of a class can only be modified to correct errors; new or changed features will require that a different class be created. That class could reuse coding from the original class through inheritance.

    If I followed this rule it would create nothing but problems. The idea that once I have created a class I should not amend it but instead extend it into a subclass is not my idea of sane programming as it would produce a proliferation of subclasses, and I would then have to go through my entire codebase to change all references to the old class to the new one before I could benefit from the amendments. This strikes me as being utter madness, so I choose to ignore it.

  6. Liskov Substitution Principle (LSP)

    If you read this principle properly you should realise that it only applies when you inherit from one concrete class to create a different concrete class, and there are some methods in the superclass which are not relevant in the subclass. I don't do that. All 50 of my table classes inherit from a single abstract class, and none of its methods are irrelevant. I never inherit from one table class to create a new table, so this rule is irrelevant and I ignore it.

  7. Interface Segregation Principle (ISP)

    Some people seem to think that they have to be seen to be following this rule, so they create interfaces which they then segregate as "proof". I ignore this rule completely for the simple reason that I don't use interfaces anywhere in my code. I don't use them because (a) they are optional, and (b) they serve no useful purpose. For more details please refer to Object Interfaces.

  8. Dependency Inversion Principle (DIP)

    If you bother to read Robert C. Martin's original documentation on this principle you will see that his example COPY program clearly shows that using DI to inject a dependent object which can be supplied from any one of a number of alternative classes can be very useful. This is precisely how I use DI in my framework. However, where a dependent object can only be supplied from a single source then I do not use DI at all. There is no good reason to provide the ability to switch to a different implementation if there will never be a different implementation.
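    The distinction drawn above can be sketched like this. Every class name below is an invented example; the point is simply that injection is only provided where more than one implementation exists, and a dependency with a single possible source is instantiated directly:

```php
<?php
// Sketch: DI used only where the dependency genuinely varies.

class PdfOutput { public function render(): string { return 'pdf'; } }
class CsvOutput { public function render(): string { return 'csv'; } }

class ReportController
{
    // The output object CAN come from more than one class, so it is
    // injected by the caller.
    public function run(object $output): string
    {
        return $output->render();
    }
}

class StandardValidator
{
    public function check(): bool { return true; }
}

class Product
{
    // The validation object can only ever come from ONE class, so it is
    // simply instantiated where it is needed - no injection required.
    public function validate(): bool
    {
        $validator = new StandardValidator();
        return $validator->check();
    }
}
```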

  9. Composite Reuse Principle (CRP)

    This is often phrased as favour composition over inheritance which implies that ALL usage of inheritance is wrong. This is CR(a)P. It is only the misuse of inheritance which causes problems, so programmers should learn how to use it properly so as to maximise its advantages and minimise its disadvantages.

    This is supposed to be a solution to the problem caused by having deep inheritance hierarchies where one concrete class is extended to form another concrete class. If you read the articles on this subject you should see that you can avoid the problem by NOT having deep inheritance hierarchies and by ONLY inheriting from an abstract class. This is precisely what I have been doing in my framework since I created it, so I have absolutely no use for this principle.

Rubbish Rules from mis-invention

By this I mean where someone has invented a rule out of thin air, something which cannot be traced back to a documented source, or something which is added on at a later date by someone who was not the author of the original rule or principle. This may be because of a severe mis-interpretation of a documented rule, one that is so severe that it could be classed as a complete perversion.

  1. You must use OOD before you attempt OOP

    As far as I am concerned this idea is being propagated by those who think that OOP is some complicated process that can only be practised by those who have mystic capabilities. This is complete and utter rubbish. In my opinion OO programming is exactly the same as Procedural programming except for the addition of encapsulation, inheritance and polymorphism.

    I have been designing and building database applications for 40 years, and in that time I have used three different languages. Regardless of the language the design process has always been exactly the same, it is only the implementation of that design which has been different. The design process produces two things, a logical database design and a list of Use Cases where each use case identifies the database tables that it needs to access in order to carry out its function. It identifies what should be done (the requirements), not how it should be done (the implementation). It is possible to take the same design and implement it in more than one language, and the choice of language may be affected by several factors, such as speed of development, its toolsets, or the availability of experienced programmers.

    When I eventually read what OOD entailed, which was several years after I had completed my framework, I shook my head in utter disbelief. I do not waste my time with this "IS-A" nonsense as each object "is-a" database table. I do not waste my time with this "HAS-A" nonsense where a single class can be composed of more than one database table as this concept does not exist in a database. Each table in the database is a separate entity in its own right, so as far as I am concerned each entity requires its own class.

    A variation of OOD is called Domain Driven Design (DDD) which again is filled with artificial rules which I choose to ignore for reasons stated here. Although my main enterprise application covers a number of different domains every one of those domains is a database application in which every user transaction (use case) touches the database in some way. Because of this single area of commonality I implement each database application in exactly the same way by building each user transaction from a library of Transaction Patterns. My enterprise application currently includes 400 database tables, 1,000 relationships and 3,000 user transactions. Each of these was built from a library of 45 patterns and 12 XSL stylesheets, so don't tell me that it can't be done.

    The language does not (or should not) affect the design, just how that design is implemented. The design is therefore language-agnostic, and the implementation of that design is an entirely separate matter. The language is therefore nothing more than an "implementation detail".

  2. Having a separate class for each database table is not good OO

    Where was this "principle" published? Who is its author? What are the reasons for the existence of this rule?

    If it is so bad then why has Martin Fowler, the author of Patterns of Enterprise Application Architecture defined patterns called Table Module, Class Table Inheritance and Concrete Table Inheritance which specify "one class per table"?

    Amongst the pathetic reasons which support this ridiculous claim was the following:

    Abstract concepts are classes, their instances are objects. IMO The table 'cars' is not an abstract concept but an object in the world.
    Classes are supposed to represent abstract concepts. The concept of a table is abstract. A given SQL table is not, it's an object in the world.

    It is quite clear to me that this numpty simply does not understand the words that he wrote:

    Each table in a database is a different entity and not just a different instance of the same entity. There is a standard concept called "table" but each physical table has a different implementation - its name and its structure. That is why each concrete class simply identifies its name and its structure while all the standard code is inherited from the abstract class.

    Consider the following definition of a "class":

    A class is a blueprint, or prototype, that defines the variables and the methods common to all objects (entities) of a certain kind.

    If you look at the CREATE TABLE script for a table, is this not a blueprint? Is not each row within the table a working instance of that blueprint? Is it not therefore reasonable to put the table's blueprint into a class so that you can create instances of that class to manipulate the instances (rows) within that table?

    Another numpty wrote the following:

    This means you write the same code for each table - select, insert, update, delete again and again. But basically its always the same.

    Wrong! Any code which can be applied to any table is defined in non-abstract methods within the abstract table class and therefore automatically shared by every concrete table class via that standard OO mechanism called inheritance. I suggest you read up on it and try it out for yourself.

    This topic is discussed in more detail in Having a separate class for each database table *IS* good OO.

  3. You must not have the same data names in every layer

    The idea that I should have different data names in each layer is complete and utter rubbish. Not only have I never worked on a team which practised this notion, I have used several languages which were based on the assumption that each data element had the same name in every component. To do otherwise would have created masses of effort, so not only did I never see anyone attempt to do this, I never even heard of anyone discussing the possibility.

    Such a ridiculous idea would require extra components to perform data mapping between each layer, so could only come from someone who has been brainwashed into using an Object-Relational Mapper. Such people should be pitied, not emulated.

  4. Your database schema is known to your domain layer

    The idea that the components in the business layer should not be aware that they are communicating with a database is complete and utter rubbish. It would be like writing a missile control program which is not aware that it is controlling missiles, or an elevator control program which is not aware that it is controlling elevators.

    This idea could only come from someone who does not understand the rules of the 3-Tier Architecture where it is only the component in the Data Access layer which communicates with the database. This means that only the Data Access Object (DAO) can construct and execute SQL queries. This allows the DAO to be constructed from a different class at runtime, thus enabling the DBMS to be switched between MySQL, PostgreSQL, Oracle and SQL Server by changing a single line in a configuration file and without changing a single line of code in any of the other components.

    Data validation is part of the business rules so belongs in the Business layer. Data which is going to be added to the database can only be validated in a Business layer component if that component knows the structure of that table. It must know which columns the table contains and it must know the specifications (type and size, etc) of those columns. If the validation succeeds it does not build and execute an SQL query itself, instead it sends a message to the DAO saying "Add this data to this database table". This means that the Business layer is working with a conceptual model of the database and not a physical model. The Business layer knows that it is working with a database, but it does not know which one, and it certainly does not communicate with the database.

    "Knowing the structure of the database" is not the same as "building and executing SQL queries". One of these is forbidden in the 3-Tier Architecture, the other is not.
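    The DBMS switching described above can be sketched as follows. The class names and the shape of the configuration entry are illustrative assumptions, not the framework's actual code; the point is that only this one factory decision changes when the DBMS changes:

```php
<?php
// Sketch: the DAO class is chosen from a single config value, so
// switching DBMS means changing one line in a configuration file.
// Each real DAO would build SQL in its own DBMS dialect.

class DAO_MySQL      { public function getDbms(): string { return 'mysql'; } }
class DAO_PostgreSQL { public function getDbms(): string { return 'pgsql'; } }

// The single line that would live in the configuration file.
$config['dbms'] = 'mysql';

// Every component that needs database access asks for a DAO; none of
// them know or care which class they actually receive.
function getDAO(array $config): object
{
    switch ($config['dbms']) {
        case 'mysql': return new DAO_MySQL();
        case 'pgsql': return new DAO_PostgreSQL();
        default:      throw new Exception('unknown DBMS');
    }
}
```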

  5. You must use an Object-Relational Mapper

    This idea is only promoted by those dimwits who don't understand how relational databases work. They deliberately design their software without any regard to the needs of the database which they regard as nothing more than an "implementation detail". They get somebody with brain cells to design their database but guess what? There is now a mismatch between the software design and the database design. How do these dimwits solve that problem? By generating an additional piece of software to perform the mapping between the two designs. How does an intelligent person solve the problem? Following the old maxim Prevention is better than Cure it is better to eliminate the problem than to cover up its effects. That is why I always start with the database design, then build my business/domain layer objects around this design with one class for each database table. Result - no mismatch, so no need for additional software to deal with a mismatch. I have automated the method by which changes in the database structure can be conveyed to the software, so it is easy to keep the two structures in sync.

  6. Your design is centered around data instead of functions

    I only write enterprise applications, and these are characterised by the fact that they have a User Interface (Presentation layer) at the front, a relational database (Data Access layer) at the back, and a Business/Domain layer in the middle to transfer the data between those two and to process all business rules. OOP involves the creation of objects from classes, and a class involves the bundling of data and the methods that operate on that data within a single unit or "capsule", hence the term encapsulation. Each object in the business/domain layer of the application represents an object for which the application is required to maintain data. In a database application each of these objects is a database table, and as everyone experienced in SQL will tell you, each database table, regardless of its contents, is subject to the same set of operations - Create, Read, Update and Delete (CRUD).

    I have never seen anybody suggest that the correct way to deal with these factors - data and operations - would be to create a separate class for each of the CRUD operations and then tie them to a particular database table with its own set of business rules at runtime, so I do what is intuitive and logical and create a separate class for each table which has the operations to maintain the contents of that table. Note that these operations are not duplicated within each table class as that would violate the DRY principle. Instead they are defined within a single abstract table class which is then inherited by every concrete table class.

    When designing a database application for a new business domain there are two basic parts - the database and a list of use cases which manipulate the data in that database. Each use case is responsible for performing one or more of the CRUD operations on one or more tables. The list of operations is therefore fixed whereas the design of the database is totally flexible. This means that the design of the database is far more important and comes before the design of the software, which can be considered as being nothing more than "an implementation detail". This can be summed up in the following quote:

    Smart data structures and dumb code works a lot better than the other way around.

    Eric S. Raymond, "The Cathedral and the Bazaar"

    Get the database wrong and no amount of code will get you out of the mess that you have made yourself. Get the database right and the coding part will be much easier.

  7. You must use the constructor to populate an object with valid data

    Where is this documented? This obviously is a mis-interpretation of the statement: "A properly written constructor leaves the resulting object in a valid state". In this context the term "valid state" does not mean the same thing as "data within the object must be valid". It actually means the following: "A valid object is one that can accept calls on any of its public methods".

    This rule also implies that the data must be validated outside of the object before it can be inserted, but I am afraid that this would violate the principle of information hiding which encapsulation is supposed to enforce. All business rules concerning an object, and this includes data validation rules, are supposed to be buried within the object and hidden from the outside world.

    This topic is discussed in more detail in Re: Objects should be constructed in one go.

  8. You must have a separate class property for each table column

    Where is this documented? Just because it is used in some examples does not mean that it is a golden rule. When the SUBMIT button is pressed in an HTML form the data is sent to the server in a single $_POST array, not as separate columns. When data is read from a database the SELECT query returns a result consisting of zero or more rows where each row is an array containing one or more columns. I found it far easier to keep this data in a single variable called $fieldarray than to introduce additional code to split the array into its component parts and then deal with each component separately. This means that I can make changes to the contents of that array, such as adding or removing columns, without having to change any method signatures, which is a good example of loose coupling.
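    Here is a minimal sketch of that single-array approach. The insertRecord() method name comes from later in this article, but the class body is an invented stand-in; the point is that the method signature never changes no matter which columns the array contains:

```php
<?php
// Sketch: a whole row (or set of rows) held in one $fieldarray property
// instead of one class property per table column.

class Product
{
    private $fieldarray = [];

    // Whatever columns arrive - more or fewer - this signature is stable,
    // which is the loose coupling described above.
    public function insertRecord(array $fieldarray): array
    {
        $this->fieldarray = $fieldarray;
        return $this->fieldarray;
    }
}

// The HTML form delivers its data as a single array in the first place.
$_POST = ['product_id' => 'ABC123', 'product_desc' => 'Widget'];

$dbobject = new Product();
$result = $dbobject->insertRecord($_POST);
```

Adding or removing a column in the form or the table requires no change to any method signature.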

  9. You must access each class property using getters and setters

    This is only relevant if you have a separate class property for each table column (see previous point). As I use a single property for a complete dataset I can put the data into an object as a single input argument on a method call (as in $dbobject->insertRecord($_POST)) and get it out again as a single result set.
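    The difference between the two styles can be sketched as follows (a hypothetical example with illustrative names, not code taken from my framework):

```php
<?php
// Hypothetical sketch: the whole dataset travels as a single array,
// so there is no need for a getter and setter per column.
class Person
{
    protected $fieldarray = array();

    public function insertRecord(array $fieldarray)
    {
        $this->fieldarray = $fieldarray; // all columns in one argument
        return $this->fieldarray;        // and out again in one result
    }
}

$dbobject = new Person();
$result   = $dbobject->insertRecord($_POST); // one call, however many columns

// Compare with one getter/setter pair per column:
//   $dbobject->setSurname($_POST['surname']);
//   $dbobject->setFirstname($_POST['firstname']);
//   ...one extra call (and two extra methods) for every column added.
```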

  10. An object can only deal with a single database row

    Not according to Martin Fowler and his Table Module pattern. My single $fieldarray property allows me to deal with any number of columns from any number of rows in a single object, so why on earth should I change this to use a method which is less efficient and more cumbersome?

  11. You must validate all data before it is put into the Model

    Where is that documented? Each domain object (model) is responsible for all its business rules, and as data validation is part of those business rules it means that the validation should be performed inside the model, not outside. If you take the processing of business rules out of the domain object you will end up with nothing but an anemic domain model which is considered to be a bad thing.

    The only genuine rule regarding data validation is that it must be performed BEFORE the insert/update query is executed as invalid data will cause the query to fail and the program to abort. All data should be validated in the code and returned to the user with a suitable error message should any problem be found.

  12. You must validate a value within its setter

    I don't use setters, so I can't. All data validation is performed by a standard validation object as part of the insertRecord() or updateRecord() operation. If the validation fails then the insert/update is abandoned and the method call returns an error message instead.
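    The mechanism can be sketched like this (the class names and the single "required" check are illustrative; the actual validation class handles every entry in the table's metadata):

```php
<?php
// Hypothetical sketch: one validation object checks every column
// against the table's metadata, so no per-column setters are needed.
class Validation
{
    public function validateInsert(array $fieldarray, array $fieldspec)
    {
        $errors = array();
        foreach ($fieldspec as $fieldname => $spec) {
            if (!empty($spec['required']) && empty($fieldarray[$fieldname])) {
                $errors[$fieldname] = 'This field is required';
            }
        }
        return $errors; // an empty array means the data is valid
    }
}

class Person
{
    public $errors = array();
    protected $fieldspec = array(
        'surname' => array('type' => 'string', 'required' => true),
    );

    public function insertRecord(array $fieldarray)
    {
        $validation   = new Validation();
        $this->errors = $validation->validateInsert($fieldarray, $this->fieldspec);
        if (!empty($this->errors)) {
            return $fieldarray; // abandon the insert, return with errors
        }
        // ...only now is the INSERT query constructed and executed...
        return $fieldarray;
    }
}
```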

  13. In MVC a Controller can only speak to one Model

    Where is this documented? Just because it is used in some examples does not mean that it is a golden rule. While most of my reusable page controllers do indeed work with no more than one model, I have some which work with 2, 3 or even 4. If this were truly "wrong" then it would cause problems, but it doesn't, so it isn't.

  14. In MVC a Model can only have one Controller

    Where is this documented? Just because it is used in some examples does not mean that it is a golden rule. Each of my model classes inherits its public methods from my abstract table class, and as each controller speaks to its model(s) using these methods it means that any model can be accessed by any controller, and any controller can be used to access any model.

  15. You must define a collection of Finder methods to access your data

    This is only relevant if you use an Object-Relational Mapper. Those of us who understand how databases work know that an SQL query does not use a variety of finder methods, it uses a single WHERE string on a SELECT statement which can handle a multitude of possibilities. I don't need to write special methods to manipulate this string as the PHP language already contains a huge selection of functions to manipulate strings. Once constructed I can use this string in a standard $result = $dbobject->getData($where) command. I can even pass this string from one component to another with great ease.
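    Because the selection criteria travel as a plain string, ordinary string handling is all that is required. A hypothetical sketch (note that any user-supplied values must be escaped or bound as parameters before being dropped into such a string):

```php
<?php
// Hypothetical sketch: build a WHERE string with ordinary string
// functions instead of a collection of finder methods.
$conditions   = array();
$conditions[] = "node_id = 1234";
$conditions[] = "start_date <= '2018-04-01'";
$where = implode(' AND ', $conditions);

echo $where; // node_id = 1234 AND start_date <= '2018-04-01'

// The same string can be passed to any table object, or handed
// from one component to another, with a single generic method:
//   $result = $dbobject->getData($where);
```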

  16. You must create a separate method for each use case

    Do you realise how much work this would create? In my ERP application I have 3,500+ tasks (use cases) and 400+ model classes. If I had a separate method in a Model for each of these tasks I would have to write, test and maintain thousands of single-use methods, none of which could ever be shared.

    As the primary objective of using OOP in the first place is supposed to be to increase the amount of reusable code, the lack of reusability that following this principle would produce is a step in the wrong direction.

    In my methodology I create an entry on the TASK table of my MENU database for each use case. This entry points to a component script on the file system which in turn points to a Controller and one or more Models where all communication between them is governed by the methods that were defined in the abstract table class. This means that the use case name is defined in the MENU database and not as a method name within a class. The user selects which task he wants to run by its name in the MENU database, and the Controller which is activated for that task uses generic methods to perform whatever action is required.

    I do not have to create any special methods in a Model as all the public methods I need are inherited from a single abstract class. I do not have to put any special method calls into any Controller as they only use the same public methods which are defined in the abstract class. Each of my Controllers has been designed to be reusable with ANY Model, so is available as a pre-written component in my framework. So if I have 50 Controllers and 400 Models this equates to 50 x 400 = 20,000 (yes, TWENTY THOUSAND!) opportunities for polymorphism. If I followed your rule I would not have this level of reusability, so I don't follow your rule.

  17. You should use a Front Controller

    A colleague once told me that all the big boys use a front controller, and if I wanted to be in their club then I should use one as well. He is now an ex-colleague. In my methodology each URL in the application points directly to a component script in the file system, and this script identifies IMMEDIATELY what parts of the application are being used to do what. This makes debugging far easier as you don't have to trawl through multiple lines of code in various front controller and router objects to discover what can be expressed in three lines.
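    For illustration, a component script in this style need contain little more than the identities of the Model, the View and the Controller (the file names here are hypothetical):

```php
<?php
// Hypothetical component script: one exists per use case (task).
// Reading it tells you immediately which parts of the application
// this URL uses - no front controller or router to trawl through.
$table_id = 'person';                   // the Model (database table class)
$screen   = 'person.detail.screen.inc'; // the View (screen structure file)
require 'std.enquire1.inc';             // the reusable page Controller
```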

  18. You are using the wrong design patterns

    There is no such thing as the "right" design patterns. Each programmer is allowed to use whatever design patterns he sees fit in whatever way he sees fit. To me design patterns are just like training wheels on a bicycle - they're OK when you are a novice, but after that they become more of a hindrance than a help. An experienced and competent programmer does not write software by first making a list of design patterns which it should use, he simply writes code and design patterns appear naturally of their own accord. This is what Erich Gamma said at an interview in May 2005:

    Do not start immediately throwing patterns into a design, but use them as you go and understand more of the problem. Because of this I really like to use patterns after the fact, refactoring to patterns.

    If you bothered to examine my framework in detail you would notice where I make use of the following patterns:

    I don't use other patterns simply because I have found no use for them.

    Another reason why I do not use design patterns with the same religious fervour as others is that they are not proper patterns at all. They do not provide pre-written code that can be used over and over again, they merely provide an outline of a design which you then have to implement yourself by writing your own code over and over again. I much prefer to use Transaction Patterns as they provide pre-written and reusable code which can be linked with any Model to produce a working transaction without the need to write any code whatsoever. That is why each use case has its own component script which does nothing but identify which Controller should be linked with which Model and which View.

  19. Encapsulating an entity does not result in a single class

    Some people tell me that my abstract table class has so many methods that it surely must be breaking SRP and that it surely must be a God Object. They do not understand that the content of a class is not governed by the ability to count but by the ability to think. The description of encapsulation makes it quite clear that ALL the properties and ALL the methods for an entity should be put into the SAME class, and not spread across multiple classes. I am already obeying Robert C. Martin's definition of SRP by putting presentation logic, business logic and database logic into different objects, and I am also doing the right thing by following what Martin Fowler says in his article about Anemic Domain Models:

    It's also worth emphasizing that putting behavior into the domain objects should not contradict the solid approach of using layering to separate domain logic from such things as persistence and presentation responsibilities. The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it

    He does not say that each of those areas of logic - validations, calculations, business rules, etc - should go into a separate object, he specifically says that ALL the business logic for a single entity should go into a SINGLE object while presentation logic and database logic should be handled separately. It is quite clear to me - there should be one object in the business/domain layer for each entity, and in a database application each of those objects is a table.

  20. Your code uses global variables

    There is nothing wrong with using global variables in moderation. Problems only arise when they are used in inappropriate circumstances.

  21. Your code uses singletons

    There is more than one way in which this design pattern can be implemented, each with its own set of pros and cons. If you choose the implementation which has all the cons and none of the pros then it is your implementation which is at fault, not the pattern itself. For a description of several different implementations please take a look at Singletons are NOT evil.
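    One such implementation, sketched here purely for illustration, replaces a getInstance() method in every class with a single manager class which holds one instance per class name:

```php
<?php
// Hypothetical sketch of one singleton implementation: a single
// manager class which caches one instance per class name.
class Singleton
{
    public static function getInstance($class)
    {
        static $instances = array(); // one entry per class name
        if (!isset($instances[$class])) {
            $instances[$class] = new $class();
        }
        return $instances[$class];
    }
}

class Person
{
}

$obj1 = Singleton::getInstance('Person');
$obj2 = Singleton::getInstance('Person');
var_dump($obj1 === $obj2); // bool(true) - the same instance both times
```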

Other rules which I regularly break are identified in the following articles:

Who decides when a practice is best?

When I was first told that I was a bad boy for not following the rules (expressed as "industry standards") I instantly raised the following questions:

In You should listen to your masters and follow 'best practice' these were the answers I received:

Each language has its own collection of common knowledge known as best practices. If you want to write good software in whatever language you choose there are basically two approaches:
  1. I'm so much smarter than all the people who have worked with this language before, I can do it better, because I *really* know how it should be done. Screw the best-practices and the knowledge they have collected through decades of experience, I'll do it my way.
  2. I'll better listen to the masters if I want to be an expert in this language.
If all the programmers using one language adopt some practice because it seems great then that becomes a 'best practice' for the language. Look around and you'll see dozens of examples of this every day. 'Do not break encapsulation' is a good example of a 'best practice'. We are talking about those practices that programmers tend to pick up and imitate. If a company (or, more likely, individual programmer) develops a practice and it catches on worldwide, and most of the programmers using that language adopt it as one of the loose rules to live by, then it can be considered a best practice for that language. If a company comes up with a rule and the rule does not catch on worldwide, then the rule is not a best practice for the language.
Nobody creates them, they emerge by the collective work of thousands of developers.
There is no bible, but there is a large body of books and articles and newsgroup discussions that all taken together is the collected knowledge.
Certain books become classics; classics are considered such because a large number of people view them as such. They might of course all be wrong, which is why we have the occasional paradigm shift :)
All I can say is that I assume nobody comes up with practices because they want to be *worse* programmers, so I think it's safe to assume that they do so to produce better software, to be better at their craft.

In the difference between "common" and "best" practice these were the answers I received:

I didn't make it industry standard myself, the industry leaders/elite programmers made it so
when the majority of professional and elite programmers hold the same opinions, those opinions become universal agreement
it is universal so long as the majority of programmers agree with a certain concept. It doesn't need to be 100%, since we need to factor out incompetent and trollish programmers like you
The FIG standard was originally just the standard for this one group, but once it becomes widely adopted by most PHP programmers, it becomes the industry standard.
it is an industry standard; this group of coders are the industry giants (i.e. FIG) and they make the industry standards. It's not a personal preference, it's voted and approved by tens or even hundreds of elite coders.
Sure there are masterminds with different and contrary opinions, but in the very end the majority rules

These answers can be summed up as follows:

I find this notion of an unknown panel of "experts" defining the rules which the entire programming community is expected to follow to be totally unacceptable. After reading what they have produced I have the distinct impression that they are just a bunch of academics who have little or no experience of developing applications in the commercial world. They are high on theory but low on practice. Only someone who has actually developed many components in a database application, and experienced both bad and good ideas, can possibly tell you what is involved and which ideas are best. When somebody says that OO Design is incompatible with Database Design this points to a problem in one of these areas:

In my own case I ignored these theories (mainly because I did not know they existed) and aimed to produce the most cost-effective solution in a practical way.

Apart from the genuine rules which identify what should be achieved I will never accept a set of universal rules which identify how those objectives should be achieved. There is no such thing as a single universally-accepted implementation for any of these objectives. You give the same problem to ten different programmers and you will get ten different solutions. So which one is the "right" solution? Does this automatically make all the other solutions "wrong"? To me any solution which works cannot be wrong just as a solution which does not work cannot be right. The real differentiator between all those solutions which work is how cost-effective they are. If a solution using methodology "A" costs X amount and a solution using methodology "B" costs 2X then which of those two would be more attractive to a paying customer? The fact that the developers using methodology "B" think that it is more "proper" or "pure" when compared with the alternatives does not carry any weight with the customer as cost will always trump purity every day of the week.

I have always tailored my development methodology to produce the best results in the shortest time, which sometimes means favoring one possible approach over another, so when I began publishing articles on my methodology I was surprised to be told that the results which I achieved were irrelevant because my method of achieving them was considered to be impure and far from being "best". "Best" is a comparative term where various different practices can be rated as either "bad", "badder", "baddest", "good", "better" or "best". In order to be graded in this way you must start with a set of different practices so you can then compare each against the other. In far too many cases when a particular practice is being proposed as being "best" there are no comparisons with any alternatives, which means that an inexperienced reader cannot decide for himself whether the proposal has merit or not. In many cases the opinion is stated as if it were a fact that is so obvious that it needs no further discussion, no justification. I often see nothing but the phrase This is how it should be done or This is how proper programmers do it, but that condescending attitude does not carry any weight with me.

When somebody tells me This is how it is supposed to be done my response is always Where is your proof? When I get such answers as If you don't do it this way then your code is impure, less readable, less maintainable and therefore bad I dismiss them as being subjective instead of objective, as expressing personal opinions instead of provable facts. I can dismiss claims that my code is unreadable and unmaintainable simply by the fact that I have been maintaining and enhancing it since it was first created in 2003. My current business partner became my business partner simply because he used my open source framework to build his own application, and he liked it so much that when I demonstrated to him the ERP application that I had built he was so impressed that he suggested forming a partnership with me so that together we could enhance and expand it into something bigger and better. This application can now be found at

While most other developers do not have any experience with alternative implementations of the rules I had 20 years of such experience. Before I switched to PHP in 2002 I had developed database applications for several software houses using non-OO languages such as COBOL and UNIFACE, and a variety of hierarchical, network and relational databases. I worked for several different organisations, mostly software houses, on a large variety of projects, and I found that each project had its own set of development standards. The idea of different organisations following the same set of standards, even if they used the same language, simply did not exist, mainly because each organisation had its own ideas and preferences on how this should be done. Achieving consensus among different groups of programmers would always be an uphill struggle, just like herding cats.

I found that each set of standards had its own strengths and weaknesses, and sometimes would contradict what I had encountered previously. When I was employed by a relatively new software house which did not have any formal standards I began to formulate my own by including the best ideas I had encountered so far, totally ignoring the worst, and adding in a few touches of my own. I started to create a library of common routines which was transformed into a framework in 1985, following which my personal standards were adopted as the company standards because they provided higher levels of productivity which, in a software house, is vitally important. These standards are documented in my COBOL page and a later UNIFACE page.

Having become quite proficient at creating development methodologies which greatly helped the production of cost-effective software I decided to switch to using PHP in 2002 so that I could develop web-based database applications using my own thoughts and ideas instead of being limited by the (lack of) expertise of others. I found the PHP manual, plus the growing number of online resources, a great help in coming to grips with the OO concepts of encapsulation, inheritance and polymorphism. These explained the basics of how to create classes and instantiate them into objects, then use inheritance to share common code among several classes. I experimented with several code samples to see how easy it would be to convert my previous frameworks (first in COBOL and second in UNIFACE) into PHP with the following pre-requisites:

The results were so successful I started to publish articles of my own, such as:

These were followed by A Sample PHP Application in November 2003 which showed all these ideas put into action.

It was then that I started receiving comments such as "real OO programmers don't do it that way". When I dug for details and obtained code samples which showed the "right" way I was astonished at how complicated it was, how convoluted it was, how ugly it was, and how inefficient it was. Some of the ideas were so ridiculous I was amazed that the author had the gall to publish them. I had always understood that programming is an art, not a science, and I had encountered some programmers who were similar to Michelangelo in that they produced works of beauty and elegance, while others were similar to Picasso whose works were just a mish-mash of colours and shapes which bore little resemblance to what they were supposed to represent.

The fact that there is no single point of reference for these things called "industry standards" means that anybody can write a book or an internet article and claim it to be the "new" standard. As far as I am concerned these are nothing more than personal opinions (just as my articles are just personal opinions) and not definitive rules, and as such the reader can choose to either follow them or ignore them. Here are examples of some articles which profess to identify new rules:

Note that the titles of these articles include the words should and must which indicate that they are more than simple suggestions. Which one of these do I follow? Neither. Each of my classes represents a different database table, and I use the constructor to load the metadata for that table. Have I ever stated this as a rule which everyone else should follow? Absolutely not. Every programmer is (or should be) free to put whatever code they like in their constructors, with the only absolute rule being the following:

a constructor must leave the object in a condition in which it can respond to calls on its public methods

I have often been told that you cannot become a good OO programmer unless you use design patterns, but in all the books and articles I have read one simple fact is quite clear - all these patterns are nothing more than the designs for solutions to certain problems, they do not provide the code to implement those solutions. This is because there are many different OO languages with different functionality, and code for one language won't work in another. Different programmers using the same language may even create different implementations, but this is not a problem provided that the implementation is effective. The bottom line is that all these patterns and principles are supposed to do no more than identify what needs to be done and not how it should be done. That is the responsibility of the individual programmer. It is also the responsibility of the programmer to only use those patterns which are appropriate otherwise he is adding needless complexity with no discernible benefit. My own framework contains implementations of the following patterns and principles:

So, if my code follows the genuine rules as well as implementations of the patterns and principles listed above, why do my critics insist on telling me that my methods are wrong even though the results show that they are effective?


It is important to understand that the purpose of a software developer is to develop cost-effective software for the benefit of the paying customer, and not to impress other developers with the cleverness of his design or his ability to follow a set of arbitrary rules in a dogmatic and pedantic fashion. The dogmatist will follow a set of rules and assume that the results will be satisfactory. The pragmatist will aim to produce the best result possible and will follow those rules which support this endeavour and ignore those which do not. The only rules which a programmer should not break are those which would cause the program to either not run at all, or to run badly.

When you work in a software house building applications for different clients you will quickly realise that the most important factor is the ability to deliver effective solutions as quickly and cheaply as possible. Hitting the target quickly is more important than following rules blindly. Time is money, so the more time you spend in creating a solution the more expensive that solution will be. When you are competing against other software houses then he who finishes first takes the prize. A good set of development standards coupled with a good library of reusable software (or even better, a good framework) will always give you an edge over your competitors.

The idea that there is a single set of "standards", "rules" or "best practices" which every programmer should follow is a load of nonsense. Each group follows the standards which are best for them, and they will strongly resist having some outsider's standards imposed on them. Remember that programming is an Art and not a Science, which means that programming cannot be made subject to scientific rules, it is subject to artistic interpretation.

Some of the rules which I am told I am breaking may actually be based on a genuine idea, but which have been seriously misinterpreted because the original description of the principle was inadequate, vague or wishy-washy. In other cases some people simply do not understand what they read or choose to read what isn't there. In some cases I have come across rules which are simply figments of someone's wild imagination and have never been published by any professional body.

As far as I am concerned I will not accept any rule unless that rule can be justified. I need to know under what circumstances it is supposed to produce benefits, and I need to know what problems will arise if I don't follow the rule. Rules which are a matter of opinion and not fact carry very little weight in my universe. If a rule cannot be justified then as far as I am concerned it has no right to exist and I should be able to ignore it with impunity.

When I write software I, like every other developer, am constrained by certain limitations:

I refuse to be constrained by the limitations of your intellect as it would be the equivalent of going back in time and living with neanderthals.

As I have said in a previous article:

Progress comes from innovation, not imitation. Innovation is not possible unless you do things differently, unless you rewrite the rules.

If I were to do everything the same as you (which includes following the same interpretation of the rules as you and implementing them in the same way as you) then I would be no better than you, and I'm afraid that your best is simply not good enough.

In order to be better I have to start by being different, but all you can do is attack me for being different without noticing that my results are superior to yours.

Your argument is that because I am breaking your precious rules then my code must be crap. What you fail to understand is that if I can produce superior results by ignoring your precious rules then it is your rules which are crap.

If the only bad thing that happens if I ignore one of your precious rules is that I offend your delicate sensibilities, then all I can say is Aw Diddums!

Those programmers who insist on following rules blindly without understanding the origin and purpose of those rules, the circumstances in which those rules are most appropriate, and how to implement them effectively, are in great danger of becoming nothing more than Cargo Cult Programmers whose level of competence needs to be seriously questioned. Follow the teachings of such people at your peril!

I have often been told that I should follow the teachings of my "superiors" as then I would be Standing on the Shoulders of Giants. I disagree completely. As the results that I achieve are superior to theirs those results would be seriously degraded if I followed their "advice". In effect I would be Paddling in the Poo of Pygmies.

When somebody gives you a document labelled "best practices", "project standards", or "rules" and tells you that they are cast in stone and cannot be broken they are actually placing restrictions on what you can do to get the job done. This is like being placed in a box where there is no room for deviation or experimentation with alternative and perhaps better methods. Progress is not made by continuously doing the same old thing in the same old way. You have to think outside the box, push the envelope and expand your horizons.

If you think that I am exaggerating when I claim that my results are superior, that my levels of productivity are higher, then I dare you to take this challenge. If, using your favourite framework, you cannot produce results which are better than mine then how can you possibly claim that your methods are better than mine?

Here endeth the lesson. Don't applaud, just throw money.


Amendment History

10 Jan 2021 Added Who decides when a practice is best?
17 Apr 2020 Updated the table of contents to provide hyperlinks to each item within each section.
Added Your code uses global variables
Added Your code uses singletons