Do frameworks always work? Do they mostly work? Do they sometimes work? Do we even really look to see if they work?
Or are they just a lot of work?
There is an assumption amongst software developers that adding a framework will automatically save time and reduce complexity; that if a framework follows standards and reduces the amount of code that has to be written, the project will automatically become “best practice”.
It is safe to say that developers are generally pretty good at determining what works and what is broken. When something is broken, they are pretty good at isolating the issue and finding solutions. Most developers, if given half a chance, will do a thorough job and will be able to identify when things are poorly done.
In contrast, when it comes to deciding on tools and frameworks to use, these methodical analytical skills seem to evaporate. It is as though a fog descends, and all that can be seen is “will I need to write less code?”. This is reinforced by soft, ethereal, reassuring words like, “don’t reinvent the wheel”, “we don’t want to build a …” and “this problem has been solved, let’s do what other people are doing”. As a consequence, hearsay and gut feel take the stage, and logical analysis is left to play the understudy.
Well today, the lead is going to ‘break a leg’.
Before continuing, I want to make it clear that I am not questioning any particular technology, but rather the process of assessing technologies. This includes the choice of technology, the assessment of the quality of that choice, and the way we learn from previously made choices.
Let us start by looking at the steps that make up the development of a product:
- Requirements are gathered.
- Tools / APIs / Frameworks are chosen.
- There is a learning process for third party code.
- Things are installed.
- Things are integrated.
- Things are customised.
- Product implementation code is written.
- Features are completed and then tested.
- Bugs are fixed.
- Product is deployed.
- Users get involved.
- Customer service gets involved.
- Requirements change.
- New requirements are gathered.
- And so on….
Now let’s look at some of the things which an individual developer needs to do during the life cycle of the product:
- Download and install software and plugins.
- Edit configuration files, and change preferences and settings.
- Read third party documentation.
- Google for answers to problems where the documentation is lacking.
- Read third party source code, where documentation and Google are lacking.
- Fix bugs in third party code.
- Write extensions to tailor third party code.
- Read logs and source for the third party code, to fix integration issues.
- Google, read documentation, and read source code, to work out how to enable useful logging.
- Write some “Hello World” code to test the installation and integration with the third party code.
- Step through the source code with a debugger to try and explain the exceptions the logging is reporting.
- Google/Logs/Debugger, when “Hello World” does not appear to be activated.
- Write extra annotations, and update configuration files.
- Document the process, so the next developer can achieve “Hello World” faster.
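That “Hello World” smoke test can be as small as a single class. Here is a minimal sketch; the dependency name `com.example.acme.AcmeBootstrap` is invented for illustration, standing in for whichever third party library was installed. The point is only to separate “not installed” from “misconfigured” before deeper debugging starts:

```java
// Minimal smoke test: verify that a third-party dependency is actually on the
// classpath before blaming configuration. "com.example.acme.AcmeBootstrap" is
// a hypothetical class name, not a real library.
public class HelloWorldSmokeTest {

    // Returns true if the named class can be loaded from the current classpath.
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String dependency = "com.example.acme.AcmeBootstrap"; // hypothetical
        if (isOnClasspath(dependency)) {
            System.out.println("Hello World: " + dependency + " is reachable.");
        } else {
            System.out.println("Installation problem: " + dependency
                    + " is not on the classpath.");
        }
    }
}
```

Trivial as it is, a check like this is often the fastest way to rule out an entire category of the problems in the list above.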
As time-consuming as this is, though, the pay-off can be expected in the form of productivity increases for subsequent developers. All they need to do is:
- Read the documentation written by the first developer.
- Repeat almost all of the steps in the first developer’s list above.
- Update the documentation with things that were missed.
- Repeat part of the overall process due to environment or operating system differences.
After this, the core implementation stage can now start:
- As exceptions occur, repeat most of the second half of the first developer’s list.
- Intermittently apply the same process to debugging third party development tools.
- Struggle to test code in the absence of a fully running system.
- Learn some third party testing tools, APIs or frameworks.
- Discover that new testing tools interfere with existing development tools.
- Read third party documentation, and consult Google.
- Discover that new testing tools break automated build process.
- Read build infrastructure documentation, and consult Google.
- Deploy to testing environment.
Up until this point, the amount of time spent actually writing code in the core language is quite small compared to the time spent on all of the other things on these lists. Yet for some reason this is still accepted as “good practice”!
Sadly, this is not where things start to get better. Although investment has been made in the chosen technologies, both in skills development and in completing the required integration, the framework reveals something that was not written ‘on the box’: Gestalt Complexity. The resulting complexity is greater than the sum of the complexity of its parts. All is well if you want the solution that was advertised; if you need to stray from it, all bets are off.
This Gestalt Complexity becomes very evident at three points: when the product is handed over to production support, when requirements change or new features are added, and when parts of the framework need to be swapped out. It is at these times that developers are once again drawn back to working their way through lists of painful tasks that do not include coding in any of the core languages.
It should be noted that frameworks are really not that special. They usually just bundle a set of standards-based APIs with a bootstrap class to tie them all together and assume control. Frameworks generally differ in which APIs their target users need, and are configured to suit the biggest subgroup of those users. The larger frameworks will support more APIs, and will have a richer set of configuration DSLs, but they are still just a collection of APIs.
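A toy sketch makes the point. `TinyFramework` below is an invented caricature, not any real product: one bootstrap class that reads configuration, wires in a standard API (`java.util.logging`), and assumes control of what actually runs:

```java
import java.util.Properties;
import java.util.logging.Logger;

// A caricature of a framework: one bootstrap class that reads configuration,
// ties a standard API into the flow, and decides what runs. The names here
// (TinyFramework, app.handler) are invented for illustration.
public class TinyFramework {
    private static final Logger LOG = Logger.getLogger(TinyFramework.class.getName());

    private final Properties config;

    public TinyFramework(Properties config) {
        this.config = config;
    }

    // The "inversion of control" moment: the framework, not your code,
    // decides what gets executed, based on configuration.
    public String run() {
        String handler = config.getProperty("app.handler");
        if (handler == null) {
            // The kind of failure the rest of this article is about: is this
            // a missing option, a typo, or the wrong configuration file?
            throw new IllegalStateException("No 'app.handler' configured");
        }
        LOG.info("Dispatching to handler: " + handler);
        return "handled by " + handler;
    }

    public static void main(String[] args) {
        Properties config = new Properties();
        config.setProperty("app.handler", "HelloWorldHandler");
        System.out.println(new TinyFramework(config).run()); // prints "handled by HelloWorldHandler"
    }
}
```

Real frameworks are vastly bigger, but the shape is the same, and so is the failure mode: remove the `app.handler` property and the useful work stops, while the diagnosis moves out of your code and into configuration.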
If you are building an application where a framework has been explicitly built to achieve the same goal as you, and the product requirements are not going to stray, there may be advantages in adding a framework at the top of the food chain. But in most cases, with regards to product development, it is better to remain at the top of the food chain yourself, and deal directly with the standards-based APIs. If you want to be lean and agile, you need good tools and APIs, and most importantly you need to be in complete control.
I would like to propose a list which can be used to analyse the cost of a framework or API in a more methodical way. I will concentrate on the perspectives that seem to be overlooked when assessments are done, rather than the ones that people already consider.
When something goes wrong with a framework, how hard is it to identify whether:
- A configuration option is missing?
- Configuration is invalid?
- Configuration is syntactically wrong?
- An annotation is missing?
- There is a conflict on the class path?
- A required resource is not available: database, file system, etc.?
- The database schema is invalid?
- Something is causing runtime exceptions, like out of memory, a full file system, or an unavailable network?
For the above list of common mistakes or misunderstandings, how much effort is required to identify the cause of the problem?
- A log message is informative (enough).
- Documentation is required.
- Previous experience in the framework is required.
- Another person is required.
- A specialist is required.
- Google is required.
- Reading third party source code is required.
- Luck is required.
- Determination and a substantial amount of resources are required.
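One way to push answers toward the top of that second list is to make your own integration code fail fast with an informative message, rather than letting a misconfiguration surface later as an opaque stack trace. A minimal sketch, with invented property keys and file names:

```java
import java.io.File;
import java.util.Properties;

// A sketch of fail-fast startup checks that turn several of the failure modes
// above into informative messages. The keys ("db.url") and resources checked
// here are examples, not taken from any real framework.
public class StartupChecks {

    // Returns a human-readable diagnosis, or null if everything looks fine.
    static String diagnose(Properties config, File dataDir) {
        String dbUrl = config.getProperty("db.url");
        if (dbUrl == null) {
            return "Configuration option missing: db.url";
        }
        if (!dbUrl.startsWith("jdbc:")) {
            return "Configuration invalid: db.url must start with 'jdbc:'";
        }
        if (!dataDir.isDirectory()) {
            return "Required resource not available: data directory " + dataDir;
        }
        return null;
    }

    public static void main(String[] args) {
        Properties config = new Properties();
        config.setProperty("db.url", "jdbc:h2:mem:test");
        String problem = diagnose(config, new File("."));
        System.out.println(problem == null ? "Startup checks passed" : problem);
    }
}
```

A few lines like these, run at startup, move a whole class of problems from “Google, luck, and determination required” up to “a log message is informative enough”.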
To conclude, I would like to pose a question to which I would genuinely like an answer.
Why do Java developers so eagerly hand everything over to frameworks, only to spend their time configuring and controlling third party code, with third party DSLs, instead of writing Java?
I don’t know about you, but I quite like Java.