Re: Unit Testing Frameworks (was Re: Singletons)
On 1/2/2013 6:34 AM, Richard wrote:
> In my experience, the main problem that singletons create for unit
> testing in C++ is that singletons are just another name for global
> variable. I don't think anyone questions the idea that having your
> code coupled to global variables makes it harder to unit test. I
> don't see why the assertion that having your code coupled to
> singletons makes it hard to test is any more difficult to understand.
Then I guess you similarly don't see how functions create an abstraction
layer. Do you make all the data in your classes public as well? Why not?
You suggest there is no difference...
> The other problem with singletons
Hm, what was the FIRST problem again? I mean with singletons, not your
inability to see a difference.
> is that people just assume those
> resources will be cleaned up when the process exits.
That sounds like a problem with the implied people.
> This means that
> if you have two unit tests that both interact with the same singleton,
> then executing one test can perturb the results of the other test. It
> becomes more difficult, if not impossible, to reset the environment
> between tests without launching each test as its own executable.
Did you read at least the starting post of this subthread? The objects
used in the test cases were created on the stack frame, so they could not
possibly escape and cause crosstalk. More importantly, it was shown that
you can provide the exact same object through the singleton access
function that you would otherwise pass as a function argument.
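Roughly like this, as a minimal sketch (Logger, GetLogger and the Work*
functions are made-up stand-ins for the upstream example, not quoted
from it):

  #include <cassert>

  // Made-up stand-in for the object under discussion.
  struct Logger {
      int calls = 0;
      void Log() { ++calls; }
  };

  // Singleton access function with a test seam: a test can install its
  // own (even stack-allocated) instance before exercising the code.
  Logger& GetLogger(Logger* replacement = nullptr) {
      static Logger defaultInstance;
      static Logger* current = &defaultInstance;
      if (replacement) current = replacement;
      return *current;
  }

  void WorkViaSingleton()         { GetLogger().Log(); }
  void WorkViaArgument(Logger& l) { l.Log(); }

  int main() {
      Logger testLogger;            // lives on the test's stack frame
      GetLogger(&testLogger);       // the very same object, reachable both ways
      WorkViaSingleton();
      WorkViaArgument(testLogger);
      assert(testLogger.calls == 2);   // no crosstalk with other tests' objects
  }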
So any problems related to lifetime or crosstalk would be the same with
the suggested cool alternatives, and so would the absence of such problems.
Granted, the method required using the access function instead of a
direct global variable. Which rather proves that there is a significant
difference, and that your claim at the front is largely mistaken.
> You may be writing your singletons in such a manner as to sidestep all
> these problems and therefore you don't understand why someone would
> say that singletons are a PITA when it comes to unit testing. However,
> if this is the case I would assert that you are in a very small
> minority when it comes to people who write singletons.
Possibly so, but that should lead us to abandon bashing singletons and
focus on the REAL issue: the problems dragged into these discussions are
normally manifestations of the "Idiot at the wheel" or the "Set, Fire,
Aim" antipattern, in which messing up singletons or globals is just one
tiny spot.
And the cure is hardly bashing singletons, and certainly not suggesting
we replace them with potentially even more painful tactics like DI; it is
to either weed out the incompetent or train people to have UNDERSTANDING.
Of methods and their consequences. Of how to look at the alternatives and
evaluate their actual pros and cons, instead of relying on empty claims.
> In fact, I can't think of a single case of trying to cover legacy code
> with unit tests where the code was easy to test if it interacted with
> a singleton.
Huh, I think most people with industrial experience would state the same
leaving out the "with a singleton" part. Let's try you: how many legacy
systems did you find where the code was easy to test? (Well, there's no
real consensus on the meaning of "legacy code", but one currently pretty
popular book, Feathers' "Working Effectively with Legacy Code", *defines*
it as code without tests. ;-)
IME all the legacy code I have looked at was messed up in a zillion ways,
including both accidental and deliberate things. The presence of
singletons hardly made the top-20 problem list. Probably not even plain
old globals. And the problems with writing tests started not at the
technical level but with missing information about what the desired
behavior even is. Then came the general spaghetti effect and especially
UI coupling. Anyone solving those problems would just laugh at globals.
> Invariably, the easiest way to decouple such legacy code
> is to extract an interface around the singleton and then have the SUT
> interact with the interface, which it receives via dependency injection.
Which is pretty much like the famous picture of the waterfall whose water
conveniently flows from its bottom back up to the top. To start
refactoring you'd need a system that is correct, AND tests too to make it
safe. In reality you start with neither. Any modification to the code is
an unmitigated risk. Also, with DI you dig a serious hole into the system.
Whereas in C++ we can use non-intrusive methods of testing, or intrusion
limited to the packaging, or limited to the test environment.
No, DI is absolutely not something you NEED to start testing.
DI may have its uses, but I surely would not trust anyone without a very
good understanding of all the alternatives to make modifications to a
carefully BALANCED legacy system. Especially for the sake of some theory.
> That doesn't eliminate the singleton, but it does effectively make it
> not look like a singleton from the perspective of the consuming class.
Which is a serious change of behavior: the knowledge of "same instance"
is lost, and the system must start dealing with possible ALIASING, on top
of any unmanaged complexity it already has.
Or just take the lazy approach of assuming that all the pointers really
carry the same value, until someone starts believing that all those
pointers surely have some purpose other than obfuscation and passes in a
different instance. In the live system, not in some stripped and isolated
test case. Guess how happy the next maintainer of the system (with my
luck probably ME, as such inventors have a tendency toward the "take the
money and run" approach, or just fold after some time) will be?
> Once all the consumers are treating the singleton just like any other
> class, you're left asking yourself the question of why you bothered to
> go through the trouble to enforce the singleton nature of it in the
> first place.
Ok, unlike the other DI proponents, please be the first one to explain
why aliasing is not a problem. The example is a little upstream; let's
look at it again: will foo(A* a, B* b), which formerly called
GetLogger().Log(), now call through a, through b, through both, or start
some discovery logic?
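A rough rendering of the question (A, B and their Logger members are
stand-ins, not the upstream poster's exact code):

  #include <cstdio>

  struct Logger { void Log(const char* m) { std::puts(m); } };
  Logger& GetLogger() { static Logger g; return g; }

  // Hypothetical collaborators that each received an injected Logger.
  struct A { Logger* logger; };
  struct B { Logger* logger; };

  // Before: the call site names the one well-known instance directly.
  void foo(A*, B*) { GetLogger().Log("work done"); }

  // After interface extraction + injection the "same instance" knowledge
  // is gone: a->logger? b->logger? both? or discovery logic to find out
  // whether the two pointers even alias?
  void foo_injected(A* a, B* b) {
      a->logger->Log("work done");   // or b->logger->Log(...), or both?
  }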
> Every time a singleton is suggested, it isn't suggested because there
> is a NEED to enforce that there be one and only one instance of the
> object; the argument is always made because someone observes that at
> this time we only need a single instance. Why it can't just be a
> single instance of an ordinary object and has to be forced into a
> requirement that there only ever be a single instance of this object
> is something that people can't seem to explain to me when they are
> proposing singletons.
Guess they don't start by explaining how a one-to-one relation differs
from one-to-many, and that from many-to-many. Or why we don't use a
vector<A> with a single element instead of just A, when the former can
always cover the latter's job.
DRY? SPOT, anyone? If the application design actually states that there
is one and exactly one item, the singleton is what creates the single
point of truth, while the zillion copies of its address passed around
violate it. How about simplicity? Economy?
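A minimal sketch of that single point of truth (Clock is a made-up
example of something the design says exists exactly once):

  class Clock {
  public:
      static Clock& Instance() {      // the one statement of "exactly one"
          static Clock theClock;      // constructed once, on first use
          return theClock;
      }
      long Now() const { return ticks_; }
      void Tick()      { ++ticks_; }
  private:
      Clock() = default;              // nobody else can create another
      Clock(const Clock&) = delete;
      Clock& operator=(const Clock&) = delete;
      long ticks_ = 0;
  };

Every caller that writes Clock::Instance() states the design fact in
place, instead of hoping that every pointer handed down the call chain
still refers to the same object.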
> I don't know why this design pattern has this
> kind of seduction to it, but some people are certainly in love with
> it. These same people would shudder if I suggested adding global
> variables to the application, but I have yet to be convinced that
> singleton is anything other than just another name for global
> variable.
You know, most people who can do the work are normally engaged in
actually doing it. Educating the ignorant is usually not part of that;
time is better spent on those who actually WANT to know and who approach
with a ton of WHY questions and open ears.
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]