Re: Run-time efficiency: Branch prediction
Rune Allnor wrote:
So the simple answer is that one should write naive
code and let the compiler earn its salary?
The question is: what do you mean by "naive" code? Your original
problem description does not contain much that could be interpreted
as a simple-minded decision. If we look at your loop construct

  for (n = 0; n < N; n++)

and assume for a moment that n were a user-defined type emulating an
integral type, e.g. an arbitrary-precision integer, one could argue
about the cost of the above post-increment compared to a
pre-increment. Under such circumstances this could be an
unnecessarily naive decision. Other, genuinely naive decisions are
often related to wrongly chosen algorithms or containers for a given
problem (concerning the complexity of their operations).
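As a minimal sketch of that point (the BigInt type below is a
hypothetical illustration, not part of the original problem): the
post-increment of such a type has to copy the whole object just to
return its old value, whereas the pre-increment modifies it in place.

  #include <cstddef>
  #include <vector>

  // Hypothetical arbitrary-precision counter, base-2^32 digits.
  class BigInt {
      std::vector<unsigned> digits_;
  public:
      BigInt() : digits_(1, 0) {}

      // Pre-increment: modifies in place, returns a reference,
      // no temporary object is created.
      BigInt& operator++() {
          for (std::size_t i = 0; i < digits_.size(); ++i)
              if (++digits_[i] != 0)     // no carry -> done
                  return *this;
          digits_.push_back(1);          // carry out of last digit
          return *this;
      }

      // Post-increment: must copy the old value before incrementing.
      BigInt operator++(int) {
          BigInt old(*this);             // potentially expensive copy
          ++(*this);
          return old;
      }
  };

  int main() {
      BigInt n;
      for (int i = 0; i < 1000; ++i)
          ++n;   // pre-increment avoids 1000 temporary copies
  }

With a plain built-in int a compiler will typically remove any such
difference, which is why the choice only starts to matter for
user-defined types.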
Some years ago we stumbled across a convincing example which clearly
demonstrated that a wrongly chosen algorithm can "freeze" an
otherwise correctly written program: we had decided to use a special
third-party list control in our program. That control also had the
nice feature of providing tooltips for every list cell. According to
our user requirements we were to use those tooltips to show the full
content of every truncated cell. A few days into the testing phase
one of our testers came to demonstrate to us that our program would
suddenly "freeze" and that the corresponding process had to be
"killed". Fortunately we also had access to the source code of the
list control, and a short analysis led to the following result:
The code which prepared the tooltip text used a special algorithm
that rendered the tooltip text at different font sizes in order to
choose the size that best satisfied some text and/or display
constraints (I forget the details). The algorithm *always* started
with the smallest (or largest) font size, rendered the text at that
size, did some calculations, and then used the *next* size in the
following iteration step, until all sizes had been tried and the
best one (according to some criterion of those calculations) was
chosen. Because there were many possible sizes for that font, this
iteration could take *several minutes* for a reasonably large text
until the correct font size was found. The very simple fix was to
replace this linear walk through the font sizes with a *binary
search* over them (using exactly the same API and maybe one more
line of code), and the problem never occurred again!
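For illustration, here is a minimal sketch of the difference. The
fits() predicate, the size table and the pixel budget are made-up
stand-ins for the real list-control API, and the sketch assumes the
criterion is monotone in the font size (if a size fits, every
smaller size fits too), which is what makes a binary search valid.

  #include <cstddef>
  #include <string>
  #include <vector>

  // Stand-in for the real constraint check: the tooltip "fits" if
  // its estimated width stays inside a fixed pixel budget.
  bool fits(int fontSize, const std::string& text) {
      return static_cast<std::size_t>(fontSize) * text.size() <= 4000;
  }

  // Original approach: render the text at every size in turn,
  // i.e. O(N) expensive rendering steps.
  int largestFittingSizeLinear(const std::vector<int>& sizes, // sorted,
                               const std::string& text)       // non-empty
  {
      int best = sizes.front();
      for (int s : sizes)
          if (fits(s, text))
              best = s;
      return best;
  }

  // Fixed approach (sketch): binary search over the sorted sizes,
  // i.e. only O(log N) rendering steps.
  int largestFittingSizeBinary(const std::vector<int>& sizes, // sorted,
                               const std::string& text)       // non-empty
  {
      std::size_t lo = 0, hi = sizes.size() - 1;
      while (lo < hi) {
          std::size_t mid = (lo + hi + 1) / 2;  // round up -> progress
          if (fits(sizes[mid], text))
              lo = mid;                         // fits: try a larger size
          else
              hi = mid - 1;                     // too large: go smaller
      }
      return sizes[lo];
  }

Each call to fits() stands for a full rendering pass in the real
control, so cutting the number of calls from N to about log N is
exactly the kind of change that removed the multi-minute "freeze".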
Greetings from Bremen,
Daniel Krügler