Testing is a big part of developing any software. Today, we’ll talk about a specific type of testing that developers employ to ensure source code can scale and evolve reliably.
Unit testing is an overhead on top of software development: developers write and run automated tests for each ‘unit’, i.e. each function. Why is this done? Why can’t we just ‘manually’ test the system to see that the “features work”?
Imagine this simple example: you have a website with Login and Chat functions. Let’s also say that some time later, the developer who created the website returns to add a Map function. For that, they may have to change the Chat function and update the Login function; otherwise, the website will crash when the Map function is added. The three functions exist on the same platform, and they have to fit together in the code.
Now imagine an application with hundreds or thousands of functions. How would Developer Joe know the dependencies of functions that Developer Alice has created? How would Joe even remember all the dependencies between functions he himself has programmed over the past months?
Unit tests reveal the dependencies (and their hierarchy) between functions, as they check both the functions themselves and the compound behavior between them. And they do this in an automated and ‘independent’ way.
Each test case runs independently to identify possible issues. In this process, developers often use assistance in the form of method stubs (pieces of code standing in for other programming functionality), mock objects (simulated objects mimicking the behavior of real ones in controlled ways), and test harnesses (test data and drivers configured to exercise a unit under varying conditions while monitoring its behavior).
The developer may include desirable criteria in the test code, so that the testing framework reports any test that fails such a criterion.
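To make this concrete, here is a minimal sketch of what such a unit test can look like in Python. The `greet_user` function and the auth service are hypothetical examples invented for illustration; the mock object stands in for a real login service so the unit can be tested in isolation, and the `assert` statements are the pass/fail criteria the framework reports on.

```python
from unittest.mock import Mock

def greet_user(auth_service, username):
    """Hypothetical unit under test: greet a user if they are logged in."""
    if not auth_service.is_logged_in(username):
        raise PermissionError("user not logged in")
    return f"Welcome back, {username}!"

def test_logged_in_user_is_greeted():
    # The real auth service is replaced by a mock object,
    # so this test runs without a live login system.
    auth = Mock()
    auth.is_logged_in.return_value = True  # stubbed behavior
    assert greet_user(auth, "alice") == "Welcome back, alice!"
    auth.is_logged_in.assert_called_once_with("alice")

def test_logged_out_user_is_rejected():
    auth = Mock()
    auth.is_logged_in.return_value = False
    try:
        greet_user(auth, "alice")
        assert False, "expected PermissionError"
    except PermissionError:
        pass  # the unit behaves as required

test_logged_in_user_is_greeted()
test_logged_out_user_is_rejected()
```

In practice these tests would live in their own file and be discovered and run automatically by a framework such as `pytest` or `unittest`, which then reports every failing criterion.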
Is it necessary?
Unit testing is an expensive overhead. At Slash we assume it adds around 30% of time to a unit of development. So if a function takes 1 day to code, the unit testing would take 0.3 days, and the total delivery of this function would be 1.3 days.
This overhead time has to be costed into the project timeline and project budget. Since it is additional work and expense, clients sometimes ask if it can be skipped. This is a tricky question, and the answer depends on the mindset and strategic objective of the client. What is the client building?
There are 2 key dimensions to consider:
- How complex is the system?
- What is the expected organic evolution (and evolution rate) of the system in the future?
If a system is simple and one-off (i.e. it won’t need much updating anymore), then perhaps the risk of skipping unit testing is smaller. If, however, a system is complex, unit testing becomes more important – especially if the client expects the system to evolve continuously in the future.
It should be noted that the exact amount of overhead (say 30%) depends to an extent on unit testing ‘coverage’. For example: are you covering 100% of the written functions with your unit tests, or only 75%? This is an IT policy decision based on the cost-benefit analysis and risk appetite of the IT organization.
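The idea of function coverage can be illustrated with a toy sketch. This is not how real coverage tools work internally – projects typically rely on dedicated tooling such as `coverage.py` – and the three functions here are hypothetical, but it shows what the percentage is measuring: the share of units actually exercised by the test suite.

```python
# Toy illustration of function-level coverage (real projects would
# use a dedicated tool such as coverage.py).
called = set()

def tracked(fn):
    """Decorator that records when a function is exercised by a test."""
    def wrapper(*args, **kwargs):
        called.add(fn.__name__)
        return fn(*args, **kwargs)
    return wrapper

@tracked
def login(user):
    return f"{user} logged in"

@tracked
def chat(user, msg):
    return f"{user}: {msg}"

@tracked
def show_map(region):
    return f"map of {region}"

# Suppose the test suite only exercises Login and Chat:
assert login("alice") == "alice logged in"
assert chat("alice", "hi") == "alice: hi"

all_units = {"login", "chat", "show_map"}
coverage = len(called) / len(all_units)
print(f"function coverage: {coverage:.0%}")  # 2 of 3 units covered: 67%
```

Whether 67%, 75%, or 100% is acceptable is exactly the policy decision described above.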
If required unit testing is skipped, there is a high risk of bugs and regression issues. In practice, this means that if minor faults in the code go unnoticed, the whole development process may go three steps forward and then two steps back.
Another aspect is that without unit tests, overall testing time stretches longer: when you introduce a new feature, all existing features may have to be retested manually.
And retroactively adding unit tests to existing code can surface many functions that require “refactoring” (re-writing/re-work). That can be a painful and costly exercise, and not something many developers enjoy doing.
The system does not have to be live for the developers to execute unit tests. Moreover, the software does not even have to be completely built.
Unit tests discover bugs and compatibility issues beforehand, reducing the overall time spent on testing. Compatibility is very important in this context, as mentioned in the example with three functions – you want to be sure all functions work together, instead of finding out when the system goes live and rushing to refactor or fix code.
In addition, the results of unit tests provide valuable data on which module provides which functionality. The use of software components is better documented, and that documentation can be referred to for information about the correct way to use each component.
Is it worth it?
Ultimately, what unit testing does is ensure that the entire codebase complies with the required quality standard before the system or new functions go live.
The cost-benefit analysis for unit testing is driven by many factors, including the complexity and expectation of future evolution of the system. Is preventing failure now more beneficial than correcting errors in the future?