Efficiency of improving testability versus adding unit tests
My team is going to add unit tests to old code without modifying it, to pave the way for future design improvements. One of the short-term goals is also to reduce the number of newly introduced bugs, because the new unit tests will detect errors in new code by failing. While I fully support this project, I am sceptical about its efficiency as a strategy for reducing the introduction of bugs. I believe the main mechanism by which unit tests reduce bugs is that writing them nudges the code toward becoming more testable, and testability implies many other code qualities, such as adherence to the SOLID principles. I argue that this indirect improvement of code quality is the main reason why unit testing reduces bugs.

Therefore, my question is: does the reduction in bugs in unit-tested code come mainly from the unit tests detecting bugs, or mainly from the indirect code quality improvements induced by improving testability before the unit tests are written?