Not-so-SOLID OO Principles

By Tony Marston

8th June 2011
Amended 1st November 2016

Introduction

The premise of the article SOLID OO Principles is that in order to be any "good" at OO programming or design you must follow the SOLID principles otherwise it will be highly unlikely that you will create a system which is maintainable and extensible over time. This to me is another glaring example of the Snake Oil pattern, or "OO programming according to the church of <enter your favourite religion here>". These principles are nothing but fake medicine being presented to the gullible as the universal cure-all. In particular the words "highly unlikely" lead me to the following observations:

Some of these principles may have merit in the minds of their authors, but to the rest of us they may be totally worthless. For example, some egotistical zealot invents the rule "Thou shalt not eat cheese on a Monday!" What happens if I ignore this rule? Does the world come to an end? Does the sun stop shining? Do the birds stop singing? Does the grass stop growing? Does my house burn down? Does my dog run away? Does my hair fall out? If I ignore this rule and nothing bad happens, then why am I "wrong"? If I follow this rule and nothing good happens, then why am I "right"?

It is possible to write programs which are maintainable and extensible WITHOUT following these principles, so following them is no guarantee of success, just as not following them is no guarantee of failure. Rather than being "solid" these principles are vague, wishy-washy, airy-fairy, and have very little substance at all. They are open to interpretation and therefore mis-interpretation and over-interpretation. Any competent programmer will be able to see through the smoke with very little effort.

My own development infrastructure is based on the 3 Tier Architecture, which means that it contains the following:

- a Presentation layer which is responsible for the user interface;
- a Business layer which is responsible for the business rules and data validation;
- a Data Access layer which is responsible for all communication with the database.

It also contains an implementation of the Model-View-Controller (MVC) design pattern, which means that it contains the following:

- a Model which holds the business rules and data validation for a database table;
- a View which transforms the data supplied by a Model into the output which is sent to the user;
- a Controller which handles the flow of a user transaction by calling methods on one or more Models and then a View.

These two patterns, and the way they overlap, are shown in Figure 1:

Figure 1 - MVC plus 3 Tier Architecture


Please note that the 3 Tier Architecture and Model-View-Controller (MVC) design pattern are not the same thing.

The results of my approach clearly show the following:

Yet in spite of all this my critics (of which there are many) still insist that my implementation is wrong simply because it breaks the rules (or their interpretation of their chosen rules).

My primary criticism of each of the SOLID principles is that, like the whole idea of Object Oriented Programming, the basic rules may be complete in the mind of the original authors, but when examined by others they are open to vast amounts of interpretation and therefore mis-interpretation. These "others" fall roughly into one of two camps - the moderates and the extremists.

My secondary criticism of these principles is that they do not come with a "solid" reason for using them. If they are supposed to be a solution to a problem then I want to see the following:

- a description of the problem which the principle is supposed to solve;
- an explanation of how following the principle actually solves that problem;
- a description of what bad things will happen if I ignore the principle, and what good things will happen if I follow it.

If the only bad thing that happens if I choose to ignore any of these principles is that I offend someone's delicate sensibilities (aah, diddums!), then I'm afraid that the principle does not have enough substance for me to bother with it, in which case I can consign it to the dustbin and not waste any of my valuable time on it.

It is also worth noting here that some of the problem/solution combinations I have come across on the interweb thingy are restricted to a particular language or a particular group of languages. PHP is a dynamically-typed scripting language and therefore does not have the problems encountered in a strongly-typed or compiled language. PHP may also achieve certain things in a different manner from other languages, therefore something which may be a problem in one of those other languages simply doesn't exist as a problem in PHP.

There is an old axiom in the engineering world which states "If it ain't broke then don't fix it". If I have code that works why should I change it so that it satisfies your idea of how it should be written? Refactoring code unnecessarily is often a good way to introduce new bugs into the system.

A similar saying is "If I don't have your problem then I don't need your solution". Too many so-called "experts" see an idea or design pattern that has benefits in a limited set of circumstances, then instantly come up with a blanket rule that it should be implemented in all circumstances without further thought. If you are not prepared to think about what you are doing, and why, then how can you be sure that you are not introducing a problem instead of a solution?

Another old saying is "prevention is better than cure". Sometimes a proposed solution does nothing more than mask the symptoms of the problem instead of actually curing it. For example, if your software structure is different from your database structure then the popular solution is to implement an Object Relational Mapper to deal with the differences. My solution would be totally different - eliminate the problem by not having incompatible structures in the first place!

S - Single Responsibility Principle

The Single Responsibility Principle (SRP), also known as Separation of Concerns (SoC), states that an object should have only a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility. But what is this thing called "responsibility" or "concern"? How do you know when a class has too many and should be split? When you start the splitting process, how do you know when to stop? In his article Test Induced Design Damage? Robert C. Martin (Uncle Bob) provides this description:

How do you separate concerns? You separate behaviors that change at different times for different reasons. Things that change together you keep together. Things that change apart you keep apart.

GUIs change at a very different rate, and for very different reasons, than business rules. Database schemas change for very different reasons, and at very different rates than business rules. Keeping these concerns (GUI, business rules, database) separate is good design.

If you take a look at Figure 1 you will see that the GUI is handled in the Presentation layer, business rules are handled in the Business layer, and database access is handled in the Data Access layer. This conforms to Uncle Bob's description, so how can it possibly be wrong?

In a later article called The Single Responsibility Principle, Uncle Bob also wrote the following:

This is the reason we do not put SQL in JSPs. This is the reason we do not generate HTML in the modules that compute results. This is the reason that business rules should not know the database schema. This is the reason we separate concerns.

What Uncle Bob is describing here is the 3-Tier Architecture which has three separate layers - the Presentation layer, the Business layer and the Data Access layer - and which I have implemented in my framework. This architecture also has its own set of rules which I have followed to the letter. So if I have split my application into the three separate layers which were identified by Uncle Bob then who are you to tell me that I am wrong?

Later on in the same article Uncle Bob also says the following:

Another wording for the Single Responsibility Principle is:
Gather together the things that change for the same reasons. Separate those things that change for different reasons.

If you think about this you'll realize that this is just another way to define cohesion and coupling. We want to increase the cohesion between things that change for the same reasons, and we want to decrease the coupling between those things that change for different reasons.

Unfortunately there are some people out there who skip over the bit where Uncle Bob identifies the three areas of responsibility which should be separated - GUI, business rules and database - and instead focus on the term "reason to change". By focusing on the wrong term and then applying over-enthusiastic or even perverse interpretations of it, they go far, far beyond the three areas identified by Uncle Bob, and I'm afraid that this is where they and I part company.

In Robert C. Martin's book Agile Principles, Patterns, and Practices in C# there is a chapter on the Single Responsibility Principle in which he states the following:

A better design is to separate the two responsibilities (computation and GUI) into two completely different classes as shown in Figure 8-2.
...
Figure 8-4 shows a common violation of the SRP. The Employee class contains business rules and persistence control. These two responsibilities should almost never be mixed. Business rules tend to change frequently, and though persistence may not change as frequently, it changes for completely different reasons. Binding business rules to the persistence subsystem is asking for trouble.

It is quite clear to me that he is saying that GUI logic, business logic and database (persistence) logic are separate responsibilities which have different reasons to change therefore should each be in their own class.

Martin Fowler also describes this separation into three layers in his article PresentationDomainDataLayering where he refers to the "Business" layer as the "Domain" layer, but then he goes and screws it up by introducing a Service layer and a Data Mapper layer which I consider to be both useless and unnecessary. My code works perfectly well without them, so they can't be that important. However, in his article AnemicDomainModel Martin Fowler says the following:

It's also worth emphasizing that putting behavior into the domain objects should not contradict the solid approach of using layering to separate domain logic from such things as persistence and presentation responsibilities. The logic that should be in a domain object is domain logic - validations, calculations, business rules - whatever you like to call it.

Here you can see that splitting an application into three areas of responsibility - presentation logic, domain logic and persistence logic - is a perfectly acceptable approach. All I have done is extend it slightly by splitting the Presentation layer into two, thus providing me with a View and Controller which then conforms to the MVC design pattern.

Using the 3-Tier Architecture has been of great benefit in the development of my framework and the enterprise applications which I have created using it. That is why if I want to switch the DBMS from MySQL to something else like PostgreSQL, Oracle or SQL Server I need only change the component in the Data Access layer. If I want to change the output from HTML to something else like PDF, CSV or XML I need only change the component in the Presentation layer. Each component in the Business layer handles the data for a single business entity (database table), so this only changes if the table's structure, data validation rules or business rules change. The Business layer is not affected by a change in the DBMS engine, nor a change in the way its data is presented to the user.
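
As a rough sketch of what this means in practice (the class and method names below are invented purely for illustration and are not taken from my framework), two interchangeable View components can share the same method signature so that swapping the output format touches nothing outside the Presentation layer:

class HtmlView
{
    public function render(array $data)
    {
        $html = "<table>\n";
        foreach ($data as $field => $value) {
            $html .= "  <tr><td>$field</td><td>" . htmlspecialchars((string)$value) . "</td></tr>\n";
        }
        return $html . "</table>\n";
    }
}

class CsvView
{
    public function render(array $data)
    {
        return implode(',', array_keys($data)) . "\n"
             . implode(',', array_map('strval', array_values($data))) . "\n";
    }
}

// the Model supplies the same data regardless of how it will be presented
$data = array('id' => 123, 'name' => 'Acme Ltd');

// switching from HTML to CSV output means changing only this one line
$view = new HtmlView();   // or: new CsvView();
echo $view->render($data);

The Model which supplies the data neither knows nor cares which of these View classes will receive it.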

So if my implementation follows the descriptions provided by Uncle Bob, who invented the term, and Martin Fowler, who is the author of Patterns of Enterprise Application Architecture, then who the hell are you to tell me that I am wrong?

If you read what Uncle Bob wrote you would also see the following caveat:

If, on the other hand, the application is not changing in ways that cause the two responsibilities to change at different times, then there is no need to separate them. Indeed, separating them would smell of Needless Complexity.

There is a corollary here. An axis of change is only an axis of change if the changes actually occur. It is not wise to apply the SRP, or any other principle for that matter, if there is no symptom.

What he is saying here is that you should not go too far by putting every perceived responsibility in its own class otherwise you may end up with code that is more complex than it need be. You have to strike a balance between coupling and cohesion, and this requires intelligent thought and not the blind application of an academic theory. For example, the domain object (the Model in MVC, or the Business layer in 3TA) should contain only business rules (note the plural) and not any presentation or database logic, but some people go too far and treat each individual business rule as a separate "reason for change" and demand that each one be put in its own class. This has the effect of replacing cohesion with fragmentation.

How to separate responsibilities and concerns

The idea behind this principle is that you can take a huge monolithic piece of code and break it down into smaller self-contained modules. This means that you must start with something big and then split it into smaller units. What is the biggest thing in an application? Why, the whole application, of course. Nobody who is sane would suggest that you put the entire application into a single class, just as you would not put all of your data into a single table in your database, so the application should be split into smaller units - but what are these "smaller units"?

An application is comprised of a number of different components - known as "units of work", user transactions or use cases - which can be selected from a menu to allow a user to complete a task. These components are usually developed one at a time, and different developers should be able to work on different components at the same time without treading on each other's toes. In time an application can grow to have hundreds or even thousands of user transactions, so the starting point for this splitting process should be the user transaction.

Every user transaction has code which deals with the user interface, business rules and database access, and it is considered "good design" if the code for each of these areas is separated out into its own module. A beneficial consequence of this process could be that some of these modules end up being sharable among several or even many user transactions, so you could end up with more reusable code yet less code overall. For example, in my own development environment I have implemented a combination of the 3-Tier Architecture and MVC design pattern which allows me to have the following reusable components:

- Controllers which can perform their operations on any Model (database table) they are given;
- Views which can transform the data from any Model into the required output;
- Data Access Objects, one per supported DBMS, which can generate and execute the SQL for any database table.

Although the splitting process may appear to do nothing but increase the number of components, if you can make some of those components sharable and reusable you may actually be able to reduce the total number of components overall.

Be aware that if you take this splitting process too far then instead of a modular system comprised of cohesive units you will end up with a fragmented system where all unity is destroyed. A system which contains a huge number of tiny fragments will have increased coupling and decreased readability when compared with a system which contains a smaller number of cohesive and unified modules.

Some more thoughts on this topic can be found at:

Not enough separation

If you build a user transaction where all the code is contained within a single script you have an example of the architecture shown in Figure 2:

Figure 2 - The 1-Tier Architecture


This is a classic example of a monolithic program which does everything, and which is difficult to change. In the OO world a class which tries to do everything is known as a "God" class. Because all the code is in one place you cannot make a change to the user interface, the business rules or the database access without having to change the whole component. It is only by splitting that code into separate components, where each component is responsible for just one of those areas, that you will have a modular system with reusable and interchangeable modules. This satisfies Robert C. Martin's description as it allows you to make a change in any one of those layers without affecting any of the others. You can take this separation a step further by splitting the Presentation layer into separate components for the View and Controller, as shown in Figure 1.
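
To make the contrast concrete, here is a deliberately monolithic sketch of Figure 2 (the table, column and connection details are invented for illustration):

// data access: hard-coded SQL and DBMS API calls
$link   = mysqli_connect('localhost', 'user', 'secret', 'mydb');
$result = mysqli_query($link, "SELECT * FROM customer WHERE customer_id = 123");
$row    = mysqli_fetch_assoc($result);

// business rule: classify the customer according to their credit limit
$row['status'] = ($row['credit_limit'] > 10000) ? 'premium' : 'standard';

// presentation: generate the HTML output
echo "<p>Customer: " . htmlspecialchars($row['customer_name'])
   . " (" . $row['status'] . ")</p>";

Splitting this into the structure shown in Figure 1 would move each of those commented sections into its own component - a Data Access Object, a Model and a View - with a Controller to tie them together.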

Another problem I have encountered quite often in other people's designs is deciding on the size and scope of objects in the Business layer (or Model in the MVC design pattern). As I design and build nothing but database applications my natural inclination is to create a separate class for each database table, but I have been told on more than one occasion that this is not good OO. This means that the "proper" approach, according to those people who consider themselves to be experts in such matters, is to create compound objects which deal with multiple database tables. The structure shown in Figure 3 identifies a single Sales Order object which has data spread across several tables in the database:

Figure 3 - Single object dealing with multiple tables


Far too many OO programmers seem to think that the concept of an "order" requires a single class which encompasses all of the data even when that data is split across several database tables. What they totally fail to take into consideration is that it will be necessary, within the application, to write to or read from tables individually rather than collectively. Each database table should really be considered as a separate entity in its own right by virtue of the fact that it has its own properties, its own methods and its own validation rules. The compound class will therefore require separate methods for each table within the collection, and these method names must contain the name of the table on which they operate and the operation which is to be performed. This in turn means that there must be special controllers which reference these unique method names, which in turn means that the controller(s) are tightly coupled to this single compound class. As tight coupling is supposed to be a bad thing, how can this structure be justified?
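
The difference can be illustrated with a hypothetical sketch (the class, method and column names are invented):

// The compound approach needs a uniquely-named method for every
// table/operation combination, and ties every Controller to those names.
class SalesOrder
{
    public function insertOrderHeader(array $data) { /* SQL for ORDER_HEADER */ return $data; }
    public function insertOrderLine(array $data)   { /* SQL for ORDER_LINE   */ return $data; }
    public function updateOrderHeader(array $data) { /* SQL for ORDER_HEADER */ return $data; }
    // ...and so on for every other table in the collection
}
$order = new SalesOrder();
$order->insertOrderHeader(array('customer_id' => 123));
$order->insertOrderLine(array('product_id' => 456, 'quantity' => 2));

// The one-class-per-table approach gives every table class the same
// generic method names (in my framework these are inherited from a single
// abstract table class), so the same Controller works with ANY table object.
class OrderHeader
{
    public function insertRecord(array $data) { /* SQL for ORDER_HEADER */ return $data; }
}
class OrderLine
{
    public function insertRecord(array $data) { /* SQL for ORDER_LINE */ return $data; }
}
$table = new OrderLine();                        // could be any table class
$table->insertRecord(array('product_id' => 456, 'quantity' => 2));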

Too much separation

When splitting a large piece of code into smaller units you are supposed to strike a balance between cohesion and coupling. Too much of one could lead to too little of the other. This is what Tom DeMarco wrote in his book Structured Analysis and System Specification:

Cohesion is a measure of the strength of association of the elements inside a module. A highly cohesive module is a collection of statements and data items that should be treated as a whole because they are so closely related. Any attempt to divide them would only result in increased coupling and decreased readability.

Too many people take the idea that a "single responsibility" means "to do one thing, and only one thing" instead of "have a single reason to change", so instead of creating a modular system (containing highly cohesive units) they end up with a totally fragmented system (with all unity destroyed) in which each class has only one method (well, if it has more than one method then it must be doing more than one thing, right?) and each method has only one line of code (well, if it has more than one line of code then it must be doing more than one thing, right?). This, in my humble opinion, is a totally perverse interpretation of SRP and results in ravioli code - a mass of tiny classes which end up being less readable, less usable, less efficient, less testable and less maintainable. This is like having an ant colony with a huge number of workers where each worker does something different. When you look at this mass of ants, how do you decide who does what? Where do you look to find the source of a bug, or where to make a change?

A prime example of this is a certain open source email library which uses 100 classes, as shown in Figure 4:

Figure 4 - Too much separation


This appalling design is made possible by constructing some classes which contain a single method, and having some methods which contain a single line of code. But who in their right minds would create 100 classes just to send an email? WTF!!!

This particular problem is discussed further by Brandon Savage in his article Avoiding Object Oriented Overkill which contains the following statement:

The concept that large tasks are broken into smaller, more manageable objects is fundamental. Breaking large tasks into smaller objects works, because it improves maintainability and modularity. It encourages reuse. When done correctly, it reduces code duplication. But once it's learned, do we ever learn when and where to stop abstracting? Too often, I come across code that is so abstracted it's nearly impossible to follow or understand.

The confusion over the idea that "responsibility" should be treated as "reason for change" is discussed in I don't love the single responsibility principle, in which Marco Cecconi says the following:

The purpose of classes is to organize code as to minimize complexity. Therefore, classes should be:
  1. small enough to lower coupling, but
  2. large enough to maximize cohesion.
By default, choose to group by functionality.

He also points out that an over-enthusiastic implementation of SRP can result in large numbers of anemic micro-classes that do little and complicate the organisation of the code base.

Uncle Bob also wrote an article called One Thing: Extract till you Drop in which he advocated that you should extract all the different actions out of a function or method until it is physically impossible to extract anything else. In theory the end result could be a large number of methods each containing a single line of code. While this may sound good as an academic exercise, is it worthwhile in the real world? Some of the people who commented on this article raised the following objections:

Some people try to justify this excessive proliferation of classes by inventing a totally artificial rule which says "No method should have more than X lines, and no class should have more than Y methods". What these dunderheads fail to realise is that such a rule completely violates the principle of encapsulation which states that ALL the properties and ALL the methods for an object should be assembled into a SINGLE class. This means that splitting off an object's properties into separate classes, or an object's methods into separate classes, is a clear violation of this fundamental principle. It also has the effect of breaking a highly cohesive module which contains a set of closely related functions into a collection of small and less cohesive fragments. This results in increased coupling and decreased readability, and therefore should be avoided.

In my own development framework, the basic structure of which is shown in Figure 1, when I create the Model components I go as far as creating a separate class for each database table. Anything less would be not enough, and anything more would be too much.

A balanced amount of separation

In my own development infrastructure, which is shown in Figure 1, each component has a separate and distinct responsibility:

- each Controller is responsible for the flow of a particular user transaction, taking data from the user's request, passing it to one or more Models and then passing the results to a View;
- each View is responsible for transforming the data supplied by a Model into the output (such as HTML) which is sent back to the user;
- each Model is responsible for the business rules and data validation of a single database table;
- each Data Access Object is responsible for generating and executing the SQL statements for a particular DBMS.

Note that only the Model classes in the Business layer are application-aware. All the Views, Controllers and DAOs are application-agnostic and can work with any database table. This architecture meets the criteria of "reason for change" because of the following:

- if I switch the DBMS from MySQL to something else I need only change the component in the Data Access layer;
- if I change the output from HTML to something else such as PDF, CSV or XML I need only change the component in the Presentation layer;
- a Model only changes if that table's structure, data validation rules or business rules change, and is not affected by a change to either the DBMS or the output format.

Each component has a single responsibility which can be readily identified, either as a data entity or a function which can be performed on that data, so when someone tells me that I have not achieved the "correct" separation of responsibilities please excuse me when I say that they are talking bullshit out of the wrong end of their alimentary canals.

O - Open/Closed Principle

The Open/Closed Principle (OCP) states that "software entities should be open for extension, but closed for modification". This is actually confusing as there are two different descriptions - Meyer's Open/Closed Principle and the Polymorphic Open/Closed Principle. The idea is that once completed, the implementation of a class can only be modified to correct errors; new or changed features will require that a different class be created. That class could reuse coding from the original class through inheritance.

While this may sound a "good" thing in principle, in practice it can quickly become a real PITA. My main application has over 300 database tables with a different class for each table. These classes are actually subclasses which are derived from a single abstract table class, and this abstract class is quite large as it contains all the standard code to deal with any operation that can be performed on any database table. The subclasses merely contain the specifics for an individual database table. Over the years I have had to modify the abstract table class by changing the code within existing methods or adding new methods. If I followed this principle to the letter I would leave the original abstract class untouched and extend it into another abstract class, but then I would have to go through all my subclass definitions and extend them from the new abstract class so that they had access to the latest changes. I would then end up with a collection of abstract classes which had different behaviours, and each subclass would behave differently depending on which superclass it extended.
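
The arrangement I am describing can be sketched as follows (the names and contents are simplified inventions, not the actual framework code):

// one large abstract class holds the standard processing for any table...
abstract class AbstractTable
{
    protected $tablename;
    protected $fieldspec = array();

    public function insertRecord(array $data)
    {
        $data = $this->validateData($data);
        // ...build and execute the INSERT statement via a DAO...
        return $data;
    }

    public function getData($where)
    {
        // ...build and execute the SELECT statement via a DAO...
        return array();
    }

    protected function validateData(array $data)
    {
        // generic validation driven by the contents of $this->fieldspec
        return $data;
    }
}

// ...while each of the 300+ subclasses contains nothing but the
// specifics of a single database table.
class Customer extends AbstractTable
{
    public function __construct()
    {
        $this->tablename = 'customer';
        $this->fieldspec = array(
            'customer_id'   => array('type' => 'integer', 'required' => true),
            'customer_name' => array('type' => 'string',  'size' => 80),
        );
    }
}

$customer = new Customer();
$customer->insertRecord(array('customer_id' => 1, 'customer_name' => 'Acme Ltd'));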

It may take time and skill to modify my single abstract table class without causing a problem in any of my 300 subclasses, but it is, in my humble opinion, far better than having to manage a collection of different abstract table classes and then to decide for each subclass from which one of those many alternatives it should inherit. I have never written code to follow this principle simply because I have never seen the benefit in doing so. By not following this principle I do not see any maintenance issues in my code, which means that I do not have any issues to solve by following this principle. On the other hand, if I do start to follow this principle I can foresee the appearance of a whole host of new issues. This principle is not the solution to any particular problem, it is simply the creator of a new set of problems. In my humble opinion this principle is totally worthless as the cost of following it would be greater than the cost of not following it.

According to Craig Larman in his article Protected Variation: The Importance of Being Closed (PDF) the OCP principle is essentially equivalent to the Protected Variation (PV) pattern: "Identify points of predicted variation and create a stable interface around them". OCP and PV are two expressions of the same principle - protection against change to the existing code and design at variation and evolution points - with minor differences in emphasis. This makes much more sense to me - identify some processing which may change over time and put that processing behind a stable interface, so that when the processing does actually change all you have to do is change the implementation which exists behind the interface and not all those places which call it. This is exactly what I have done with all my SQL generation as I have a separate class for each of the mysql_*, mysqli_*, PostgreSQL, Oracle and SQL Server extensions. I can change one line in my config file which identifies which SQL class to load at runtime, and I can switch between one DBMS and another without having to change a line of code in any of my model classes.
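
A rough sketch of that arrangement (the class names and config layout are simplified inventions) looks like this:

// each DBMS has its own class, all sharing the same method names
class dml_mysqli
{
    public function getData($tablename, $where)
    {
        // build and execute the SELECT using the mysqli extension
        return array();
    }
}

class dml_pgsql
{
    public function getData($tablename, $where)
    {
        // build and execute the SELECT using the pgsql extension
        return array();
    }
}

// one entry in the config file identifies which class to load at runtime
$config    = array('dbms' => 'mysqli');    // change to 'pgsql' to switch DBMS
$classname = 'dml_' . $config['dbms'];
$dbobject  = new $classname();             // the Model never knows which one it got
$rows = $dbobject->getData('customer', "customer_id = 123");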

In the same article he also makes this interesting observation:

We can prioritize our goals and strategies as follows:
  1. We wish to save time and money, reduce the introduction of new defects, and reduce the pain and suffering inflicted on overworked developers.
  2. To achieve this, we design to minimize the impact of change.
  3. To minimize change impact, we design with the goal of low coupling.
  4. To design for low coupling, we design for PVs.
Low coupling and PV are just one set of mechanisms to achieve the goals of saving time, money, and so forth. Sometimes, the cost of speculative future proofing to achieve these goals outweighs the cost incurred by a simple, highly coupled "brittle" design that is reworked as necessary in response to true change pressures. That is, the cost of engineering protection at evolution points can be higher than reworking a simple design.

If the need for flexibility and PV is immediately applicable, then applying PV is justified. However, if you're using PV for speculative future proofing or reuse, then deciding which strategy to use is not as clear-cut. Novice developers tend toward brittle designs, and intermediates tend toward overly fancy and flexible generalized ones (in ways that never get used). Experts choose with insight - perhaps choosing a simple and brittle design whose cost of change is balanced against its likelihood. The journey is analogous to the well-known stanza from the Diamond Sutra:


 Before practicing Zen, mountains were mountains and rivers were rivers.
 While practicing Zen, mountains are no longer mountains and rivers are no longer rivers.
 After realization, mountains are mountains and rivers are rivers again.

Even if I do create a core class and modify it directly instead of creating a new subclass, what problem does it cause (apart from offending someone's sensibilities)? If the effort of following this principle has enormous costs but little (or no) pay back, then would it really be worth it?

Like everything else associated with OO, this principle uses definitions which are extremely vague and open to enormous amounts of misinterpretation, as discussed in the following:

OCP - Revised description

I recently came across a pair of articles written by Robert C. Martin (Uncle Bob) which aimed to offer different, and possibly better, interpretations of this principle. In this 2013 article he wrote:

What it means is that you should strive to get your code into a position such that, when behavior changes in expected ways, you don't have to make sweeping changes to all the modules of the system. Ideally, you will be able to add the new behavior by adding new code, and changing little or no old code.

I think that what he is saying here is that if you have developed a proper modular system you should be able to add new functionality by adding a new module and not by modifying an existing module. In this case my implementation of the 3-Tier Architecture achieves this as I can change or add components to the Presentation layer without having to make changes to the Business layer, and I can also change or add components to the Data Access layer without having to make changes to the Business layer. In my implementation of the MVC design pattern I can take the data from a Model and give it to a new View without having to change the Model. I can add new user transactions (use cases) without having to change any existing code. I can add new Model (database table) classes without having to change any existing code.

In this 2014 article he wrote:

You should be able to extend the behavior of a system without having to modify that system.

Think about that very carefully. If the behaviors of all the modules in your system could be extended, without modifying them, then you could add new features to that system without modifying any old code. The features would be added solely by writing new code.

There is a vast plethora of tools that can be easily extended without modifying or redeploying them. We extend them by writing plugins.

While it may be possible to write some software tools which can be extended by the use of plugins, it may not be possible or even practical for other types of software, such as large enterprise applications. It may be possible in some small areas, but not for the entire application. For example, my main enterprise application is an ERP package used by many different customers, and any package developer will know that although the core package will do most of what they want, each customer will want their own customisations. I have managed to deal with these customisations by turning them into plugins. Each customer has his own plugin directory, and each plugin can contain code which is either run instead of or as well as the core code. At runtime the core code looks for the existence of a plugin, and if one is found it is loaded and executed. This means that I can add, change or remove any plugin without touching the core code.
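
A much simplified sketch of that mechanism (the directory layout, file and function names are invented for illustration) would look something like this:

function applyDiscount(array $order, $customer_code)
{
    // standard (core) behaviour
    $order['discount'] = 0.0;

    // if this customer has supplied a plugin, let it override or extend the result
    $plugin = __DIR__ . "/plugins/$customer_code/discount_plugin.php";
    if (file_exists($plugin)) {
        $order = include $plugin;   // the plugin sees $order and returns its own version
    }
    return $order;
}

// adding, changing or removing a plugin file never touches this core code
$order = applyDiscount(array('order_total' => 500.00), 'ACME');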

These later explanations, which were brought about by the author's realisation that what he had written earlier had been completely misunderstood, are more understandable because they identify a particular set of circumstances where the principle can be applied and can provide genuine benefits. This is much better than trying to apply the principle blindly in all circumstances, especially those circumstances where it would create more problems than it would actually solve.

L - Liskov Substitution Principle

The Liskov Substitution Principle (LSP) states that "objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program". In geek-speak this is expressed as: "If S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program". This is supposed to prevent the problem where a subtype overrides methods in the supertype with different signatures, and these new signatures will not work in the supertype.

This may be difficult to comprehend unless you have an example which violates this rule, and such an example can be found in the Circle-ellipse problem. This can be summarised as follows:

- mathematically a circle is simply an ellipse whose two axes are always equal, so it seems natural to define class Circle as a subtype of class Ellipse;
- class Ellipse has methods which allow each of its axes to be changed independently of the other;
- class Circle cannot honour those methods, as changing one axis independently of the other would produce something which is no longer a circle;
- any code which was written to work with an Ellipse can therefore produce unexpected results when it is given a Circle, which means that the subtype cannot be freely substituted for its supertype.

One possible solution would be for the subclass to implement the inapplicable method, but to either return a result of FALSE or to throw an exception. Even though this would circumvent the problem in practice, it would still technically be a violation of LSP, so you will always find someone somewhere who will argue against such a practical and pragmatic approach and insist that the software be rewritten so that it conforms to the principle in the "proper" manner.
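
For those who prefer to see it in code, here is a minimal sketch of the violation (not taken from any real library):

class Ellipse
{
    protected $width;
    protected $height;

    public function setWidth($width)   { $this->width  = $width;  }
    public function setHeight($height) { $this->height = $height; }
    public function getArea()          { return M_PI * ($this->width / 2) * ($this->height / 2); }
}

class Circle extends Ellipse
{
    // a circle's axes cannot be changed independently, so the subclass
    // has to break the behaviour promised by its parent
    public function setWidth($width)   { $this->width = $this->height = $width; }
    public function setHeight($height) { $this->width = $this->height = $height; }
}

function stretch(Ellipse $shape)
{
    $shape->setWidth(10);
    $shape->setHeight(20);
    return $shape->getArea();    // the caller expects the area of a 10 x 20 ellipse
}

echo stretch(new Ellipse());     // about 157, as expected
echo "\n";
echo stretch(new Circle());      // about 314 - substituting the subtype changed the result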

Even if your code actually violates this principle, what would be the effect in the real world? If you try to invoke a non-existent method on an object then the program will abort, but as this error would make itself known during system/QA testing it would be fixed before being released to the outside world. So this type of error would be easily detected and fixed, thus making it a non-problem.

But how does this rule fit in with my application where I have 300 table classes which all inherit from the same abstract class? Should I be expected to substitute the Product subclass with my Customer subclass and still have the application perform as expected?

The Liskov Substitution Principle is also closely related to the concept of Design by Contract (DbC) as it shares the following rules:

- preconditions cannot be strengthened in a subtype;
- postconditions cannot be weakened in a subtype;
- the invariants of the supertype must be preserved in a subtype.

Unfortunately not all programming languages have the ability to support DbC, and PHP is one of those which does not, due to the fact that it is neither statically typed nor compiled. This is discussed in Programming by contracts in PHP. Certain languages, like Eiffel, have direct support for preconditions and postconditions. You can actually declare them, and have the runtime system verify them for you.
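
Although PHP cannot declare contracts in the way that Eiffel can, preconditions and postconditions can be approximated manually, for example with assert(). A rough sketch with an invented class:

class BankAccount
{
    private $balance = 100.0;

    public function withdraw($amount)
    {
        // precondition: the amount must be positive and covered by the balance
        assert($amount > 0 && $amount <= $this->balance);

        $old_balance    = $this->balance;
        $this->balance -= $amount;

        // postcondition: the balance must have decreased by exactly $amount
        assert($this->balance == $old_balance - $amount);

        return $this->balance;
    }
}

$account = new BankAccount();
$account->withdraw(25.0);     // satisfies both the precondition and the postcondition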

There is also one other point to consider - if you only ever inherit from an abstract class then this principle does not apply as you cannot instantiate an abstract class into an object, therefore it does not have any runnable methods. In my framework all my concrete table classes inherit from an abstract table class, so this principle is irrelevant.

I - Interface Segregation Principle

The Interface Segregation Principle (ISP) states that "no client should be forced to depend on interfaces it does not use". This means that very large interfaces should be split into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them.

Note that this principle only applies if you use the keywords "interface" and "implements" in your code as in the following example:

interface iFoobar
{
    public function method1();
    public function method2();
    public function method3();
    public function method4();
}

class snafu implements iFoobar
{
    private $vars = array();

    public function method1()
    {
        // implementation of method1
    }

    public function method2()
    {
        // implementation of method2
    }

    public function method3()
    {
        // implementation of method3
    }

    public function method4()
    {
        // implementation of method4
    }
}

Here you can see that the interface iFoobar identifies 4 method signatures, and as class snafu implements this interface it must also contain implementations for all 4 of those methods. A problem arises if a class only needs to implement a subset of those methods. For example, if it only needs to implement methods #1, #2 and #3 it still must implement #4 simply because it is defined in the interface.

The solution to this problem is to create smaller interfaces which contain combinations of methods which will always be used. Using the above example this would mean having one interface containing methods #1, #2 and #3 and another containing method #4.
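
Continuing the (purely hypothetical) example above, the segregated version might look like this:

interface iFoobarCore
{
    public function method1();
    public function method2();
    public function method3();
}

interface iFoobarExtra
{
    public function method4();
}

// this class only needs the first three methods, so it is no longer
// forced to provide a do-nothing implementation of method4()
class snafu2 implements iFoobarCore
{
    public function method1() { /* ... */ }
    public function method2() { /* ... */ }
    public function method3() { /* ... */ }
}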

In the PHP language the use of the keywords "interface" and "implements" is entirely optional and unnecessary, and as I do not use them in my framework this principle is irrelevant.

D - Dependency Inversion Principle

The Dependency Inversion Principle (DIP) states that "the programmer should depend upon abstractions and not depend upon concretions". This is pure gobbledygook to me! Another way of saying this is:

- high-level modules should not depend on low-level modules; both should depend on abstractions;
- abstractions should not depend on details; details should depend on abstractions.

This to me is meaningless garbage and is totally confusing as it uses the terms "abstract" and "concrete" in ways which contradict how OO languages have been implemented. My understanding is as follows:

- an abstract class is a class which cannot be instantiated into an object, which means that none of its methods can actually be executed;
- a concrete class is a class which can be instantiated into an object, which means that its methods can be executed at runtime;
- sooner or later your code has to call a method on an object which was instantiated from a concrete class, otherwise nothing would ever get done, so it cannot possibly depend on nothing but abstractions.

This principle is actually concerned with a specific form of de-coupling of software modules. According to the definition of coupling the aim is to reduce coupling from high or tight to low or loose and not to eliminate it altogether as an application whose modules are completely de-coupled simply will not work. Coupling is also associated with Dependency. The modules in my application are as loosely coupled as it is possible to be, and any further reduction would make the code more complex, less readable and therefore less maintainable.

A more accurate description would be as follows:

Where an object C calls a method in object D then C is dependent on D as it cannot perform its function without consuming a service from object D.

Without Dependency Inversion then object C will identify and instantiate object D within itself.

With Dependency Inversion object D will be instantiated outside of object C and "injected" into it.
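
As a bare illustration of that difference (the class names are invented):

// object D provides a service which object C needs to consume
class objectD
{
    public function send($to, $message)
    {
        // deliver the message somehow
        return true;
    }
}

// WITHOUT dependency inversion: object C identifies and instantiates
// object D within itself, so it is forever tied to that one class.
class objectC_without_DI
{
    public function confirm($to)
    {
        $objectD = new objectD();
        return $objectD->send($to, 'Your order has been confirmed');
    }
}

// WITH dependency inversion: object D is instantiated outside of object C
// and injected into it, so C will work with anything that has a send() method.
class objectC_with_DI
{
    public function confirm($objectD, $to)
    {
        return $objectD->send($to, 'Your order has been confirmed');
    }
}

$objectC = new objectC_with_DI();
$objectC->confirm(new objectD(), 'customer@example.com');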

The only workable example of this principle which makes any sense to me is the "Copy" program which can be found in Robert C. Martin's The Dependency Inversion Principle (PDF). In this the Copy module is hard-coded to call the readKeyboard module and the writePrinter module. While the readKeyboard and writePrinter modules are both reusable in that they can be used by other modules which need to gain access to the keyboard and the printer, the Copy module is not reusable as it is tied to, or dependent upon, those other two modules. It would not be easy to change the Copy module to read from a different input device or write to a different output device. One method would be to code in a dependency to each new device as it became available, and have some sort of runtime switch which told the Copy module which devices to use, but this would eventually make it bloated and fragile.

The proposed solution is to make the high level Copy module independent of its two low level reader and writer modules. This is done using Dependency Injection (DI) where the reader and writer modules are instantiated outside the Copy module and then injected into it just before it is told to perform the copy operation. In this way the Copy module does not know which devices it is dealing with, nor does it care. Provided that they each have the relevant read and write methods then it will work. This means that new devices can be created and used with the Copy module without requiring any code changes or even any recompilations of that module.

If you bothered to read the details of Robert C Martin's "Copy" program you would see that this program has two dependencies - the "from" device and the "to" device. There can be any number of different device classes from which a device object can be instantiated. At runtime the program does not need to have any knowledge of which devices are being accessed as it uses whatever device objects it is given. Here is an example of how the refactored Copy module could be used:

$objCopy   = new copy;                // the high-level Copy module
$objInput  = new deviceX(.....);      // source device, chosen at runtime
$objOutput = new deviceY(.....);      // target device, chosen at runtime
$result    = $objCopy->copy($objInput, $objOutput);

The "copy" program example clearly shows that Dependency Injection (DI) only has benefits in a particular set of circumstances:

Dependency Injection is only appropriate when a consuming object has a dependency which can be switched at runtime between a number of possible alternatives, and where the list of alternatives can be modified without having to make any corresponding changes to the consuming object.

Notice also that the copy() method in the copy object has arguments for both the source (input) and target (output) objects, which means that the objects are both injected and consumed in a single operation. There is no need to perform the inject() and consume() operations separately, so the idea that Robert C Martin's article promotes such a thing is completely wrong. This is a prime example of where a relatively simple idea has been corrupted beyond recognition and made much more complicated. The idea that I must manage every one of my dependencies using some complicated Dependency Injection (DI) or Inversion of Control (IoC) mechanism does not fly in my universe as the costs outweigh the benefits.

The "copy" program uses input and output objects which can be supplied from multiple sources, so as far as I am concerned DI is not appropriate in those circumstances where a dependency can only ever be supplied from a single source. As I have explained in my article Dependency Injection is Evil I have found some places in my framework where I make use of DI, but I do not use it everywhere.

Those people who say that DI should be used for every dependency are wrong. There are some circumstances in which it would not be appropriate and in these circumstances an intelligent programmer would not code for a possibility that would never exist.

Composite Reuse Principle (CRP)

Although not included in SOLID, the Composite Reuse Principle, which is often phrased as "favour composition over inheritance", is commonly mentioned as another important OO principle which should be followed. It states that classes should achieve polymorphic behavior and code reuse by their composition (by containing instances of other classes that implement the desired functionality) rather than inheritance from a base or parent class. But what problem is this principle supposed to solve? In Object Composition vs. Inheritance I found the following explanation:

Most designers overuse inheritance, resulting in large inheritance hierarchies that can become hard to deal with. Object composition is a different method of reusing functionality. Objects are composed to achieve more complex functionality. The disadvantage of object composition is that the behavior of the system may be harder to understand just by looking at the source code. A system using object composition may be very dynamic in nature so it may require running the system to get a deeper understanding of how the different objects cooperate.
.....
However, inheritance is still necessary. You cannot always get all the necessary functionality by assembling existing components.
.....
The disadvantage of class inheritance is that the subclass becomes dependent on the parent class implementation. This makes it harder to reuse the subclass, especially if part of the inherited implementation is no longer desirable. ... One way around this problem is to only inherit from abstract classes.

I choose to ignore this principle for two very good reasons:

Polymorphism allows the same method name to be used on different objects, and is obtained by each of the classes from which those objects are instantiated having the same method name with the same signature. The method name may be defined manually within each class, or it can be defined in an abstract class which is then inherited. Each of my Model/Table classes inherits from my single abstract table class, which means that my inheritance hierarchy is only one level deep. As it is obvious that I don't have the problem which this principle was designed to solve, it should also be obvious that it provides a solution which I do not need.

Conclusion

Like any other design pattern each of these principles has been formulated to offer a solution to a specific problem, and this leads me to make the following observations:

If you like to follow rules in the belief that they will make you a better programmer, then perhaps you might like to look at these:

Following these principles with blind obedience and implementing them without question is no guarantee that your software will be perfect. As I said in the introduction different OO "experts" have different opinions as to what is the "right" way and what is the "wrong" way depending on whether they are moderates or extremists. It is simply not possible to follow one person's opinion without offending someone else. If you don't follow the SOLID principles someone will be offended. Even if you do attempt to follow them someone else will jump up and say "Your implementation is wrong!", or "Your implementation goes too far!" or "Your implementation does not go far enough!" or "Don't do it like that, do it like this!" It is simply not possible to find a solution which satisfies everyone and offends no one, and if you attempt to do so you may end up in the situation described in The Man, The Boy, and The Donkey where the punch line is "if you try to please everyone you may as well kiss your ass goodbye!"

If it is not possible to please everyone then what can you do? The simple answer is to please yourself - ignore everyone else and do what you think is best for your particular circumstances. After all, it is you building the software, not them. You are the one who is going to deploy and maintain it, not them. It is your ass on the line, not theirs.

In the article 10 Modern Software Over-Engineering Mistakes the author makes this observation with point #5:

Blindly applying Quality concepts (like changing all variables to "private final", writing an interface for all classes, etc) is NOT going to make code magically better.

Check Hello World. It has a gazillion code. In the micro-level each class follows SOLID principles, uses all sorts of great Design patterns (factory, builder, strategy, etc) and coding techniques (generics, enums, etc). It gets high Code quality ratings from CQM tools.

But if we take a step back, this prints "Hello World"!

Later he says the following:

Concepts like SOLID came up in response to abuse of Inheritance and other OOP concepts. Most engineers are unaware of where/why these concepts came from, but just end up following the memo.

When I was a junior programmer I had to follow the lead set by my so-called "superiors", but I kept hitting obstacles that their methodologies created. When I proposed solutions to these obstacles I was constantly put down with "You can't do that as it is against the rules!" or sometimes "How dare you have the audacity to question the rules! Don't think about them, just do as you're told!" When I became senior enough to create my own methodology I concentrated on what I needed to do to get the job done with as few of these known obstacles as possible, and in the process I found myself throwing more and more of these silly rules into the waste bin. When others see that I am not following "their" set of rules they instantly accuse me of being "wrong", "impure" and a heretic, but why should I care? I am results-oriented, not rules-oriented, I am a pragmatist, not a dogmatist, so the fact that I have created software which is powerful, flexible, extensible and maintainable is all the justification that I need. If it works, and if the effort required to keep it working and update it is minimal, then how can it possibly be "wrong"? I have seen projects fail because too much attention was focussed on the rules instead of the results, so if something fails how can it possibly be "right"?

Here endeth the lesson. Don't applaud, just throw money.

References


© Tony Marston
8th June 2011

http://www.tonymarston.net
http://www.radicore.org

Amendment history:

01 Nov 2016 Amended Single Responsibility Principle to include a quote from AnemicDomainModel by Martin Fowler.
01 Oct 2016 Amended The "copy" program to show that the dependent objects are not injected and consumed in separate operations.
01 Apr 2016 Updated Single Responsibility Principle (SRP) to show that if it is applied in an over-zealous way this can actually be counter-productive.
Updated Open/Closed Principle (OCP) to show that the cost of following the principle may be greater than not following the principle.
Added OCP - Revised description.
Updated Liskov Substitution Principle (LSP) to show that it is irrelevant if you inherit from an abstract class.
Updated Interface Segregation Principle (ISP) to show that it is irrelevant if you don't use the keywords "interface" and "implements".
Updated Dependency Inversion Principle (DIP) to show that its use is only appropriate in certain circumstances.
Added Composite Reuse Principle (CRP)
02 Jul 2015 Added How to separate responsibilities and concerns to indicate how SRP/SoC can be applied.
