Patricia Shanahan wrote:
Peter Duniho wrote:
Sheer quantity of code isn't really a justification for leaving
errors in it.
Kevin McMurtrie wrote:
Tell your boss you want to spend a full year fixing Java 1.5 warnings
because there's a 1 in 600 chance that each is a bug. Next convince
your boss that the risk in fixing them is lower than the original 1
in 600 odds.
Oh, yeah, leave the bugs in the code. That's a *real* professional attitude.
I agree that it is a really professional attitude. *Every* change to
code carries a risk of creating a bug. Clean existing code and a good
development process reduce the risk, but it cannot be reduced to zero.
Every change also has a cost in programmer time and effort that could
have been spent on something else.
I've seen important releases, with real bug fixes and features the users
wanted, delayed because someone decided to clean up a harmless departure
from technical desirability, and introduced a new bug in the process.
A professional programmer should step beyond purely technical issues to
think about whether the risk and cost associated with a change are
justified by the benefit.
I do not mean to imply one should never clean up warnings etc. I
strongly prefer warning-free code. Neither the cost nor the benefit of
clean-up is zero, so it is a non-trivial decision.
What you suggest is valid and certainly not using "sheer quantity of code" as
a justification. However, the attitude that "mere" warnings are acceptable
does lead to real bugs that cause trouble in production. Where did Kevin's "1
in 600 chance" assessment come from? Statistical analysis? I feel certain
not; rather, it was pulled from a particular posterior region.
99% (another posteriorly postulated probability) of the time someone
recommends simply not fixing warnings, they are ignoring real bugs.
I've seen this in my current large-team project and in past such projects,
where a request for customer A's data returned data from customer B's
records because people ignored "trivial" warnings, or skipped the simple
code analysis that would have revealed trouble that didn't even raise a
compiler warning.
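A minimal sketch of that failure mode (names and data are hypothetical, not
from the project): a raw-type collection compiles with only an "unchecked"
warning, yet happily admits a record of the wrong type, and the resulting
heap pollution doesn't surface until a distant call site iterates the list.

```java
import java.util.ArrayList;
import java.util.List;

public class UncheckedWarningDemo {

    // Legacy helper still written against the raw type. javac flags the
    // add() call with an unchecked warning; the annotation here stands in
    // for the team deciding to ignore it.
    @SuppressWarnings({"unchecked", "rawtypes"})
    static void appendRecord(List results, Object record) {
        results.add(record); // unchecked call -- nothing stops a bad record
    }

    public static void main(String[] args) {
        List<String> customerA = new ArrayList<>();
        appendRecord(customerA, "A-1001"); // correct record
        appendRecord(customerA, 42);       // wrong record, wrong type

        // The bug surfaces far from its cause: the compiler-inserted cast
        // in the loop throws ClassCastException only when iteration
        // reaches the bad element.
        try {
            for (String rec : customerA) {
                System.out.println(rec);
            }
        } catch (ClassCastException e) {
            System.out.println("heap pollution detected");
        }
    }
}
```

The warning was the only diagnostic the compiler could offer; once ignored,
the type system provides no further protection until runtime.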
A professional programmer should step beyond purely arbitrary (and specious)
characterizations of risk and cost associated with a change to understand that
*every* failure to change code that has warnings carries a risk of
perpetuating a bug. I agree that risk cannot be reduced to zero, but leaving
known warnings in code increases that risk.
Code should not be released into the code base in the first place with
warnings showing. If that's done, then the question of the risk of change
becomes moot, because there's nothing to change. When that principle is
ignored, the technical debt mounts until people can aver that the "risk of
cleaning up" is too high to shoulder. In practice, I've seen far more
incorrect behavior arise from leaving known bugs in than from cleaning
them up.
Once they're in, the cost of repair is high, but there are mitigation
strategies even so. Again using my current project as an example: where we
have multiple warnings (from the compiler and from FindBugs, a /sine qua non/
for robust code), the reviewers recommend to management a strategy that cleans
up the most critical bugs right away, but defers others to a refactoring
timetable that also improves fundamental algorithms or updates code to newer
technologies (e.g., JPA over old-style ORM code). Combination strategies like
fixing the algorithm often reap much larger benefits than simple patches for
similar cost, including simplification of the code to reduce the risk of
introducing new bugs.
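One way the "defer to the refactoring timetable" decision can be recorded in
code (a sketch under assumed names; the class, DAO, and ticket number are
hypothetical, not from the project): scope the suppression to the single
declaration it covers and cite the tracking ticket, so the deferred debt
stays visible rather than silently accumulating.

```java
import java.util.List;

public class LegacyOrmAdapter {

    // Stand-in for the old-style ORM layer slated for the JPA migration.
    interface LegacyDao {
        @SuppressWarnings("rawtypes")
        List rawQuery(String query);
    }

    static class CustomerRecord { }

    List<CustomerRecord> loadAll(LegacyDao dao) {
        // Deferred warning: raw-type result from the legacy DAO.
        // PROJ-1234 (hypothetical ticket): remove after JPA migration.
        // Suppress on the smallest scope -- this one variable, never
        // the whole method or class.
        @SuppressWarnings("unchecked")
        List<CustomerRecord> rows =
                (List<CustomerRecord>) dao.rawQuery("select * from customer");
        return rows;
    }
}
```

Narrow suppression plus a ticket reference keeps the build warning-clean for
new code while preserving an auditable list of what was consciously deferred.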
Just because one maintainer was careless with a fix doesn't mean that the fix
itself was a bad idea. Zero defects should be the goal rather than
resignation to poor quality. You can't use "it's impossible" as an excuse not
to aim for it. And you most certainly can't use arbitrary and unsupported
fake statistics to support a cost/benefit analysis.