Re: STL objects and binary compatibility
James Kanze <james.kanze@gmail.com> wrote in
news:af6d7bb2-944e-4e8d-87eb-a74731b9f7ad@27g2000hsf.googlegroups.com:
> On 12 May, 08:15, Paavo Helde <nob...@ebi.ee> wrote:
> > James Kanze <james.ka...@gmail.com> wrote in
> > news:1ca237c6-8137-4915-a076-a06dd96c2...@w7g2000hsa.googlegroups.com:
> > [concerning the _SECURE_SCL option in VC++...]
> > IMHO it should be switched off in optimized builds (what
> > would be the point of optimization otherwise?).
> Well, obviously, if the profiler shows you have performance
> problems due to the extra checks, then you'll have to turn it
> > I'm developing a kind of library which will be used and reused
> > in yet unknown situations. Performance is often the issue. So
> > if I can gain any speed in a way as simple as turning off a
> > nearly useless (at least for me - this feature has produced
> > only false alarms for me so far) compiler feature, I will
> > certainly want to do that.
> Can you point out an example of when it produces a false alarm?
> I've almost no experience with it (since I develop almost
> exclusively under Unix), but I know that the various checking
> options in g++ have turned up a few errors in my own code
> (mostly the concept checking), and have made others (which would
> have been caught by the unit tests) far easier to localize and
> fix. And I've never seen a false alarm from it.
Maybe I should have been more precise. The alarms are not "false" in the
sense that the code indeed has UB according to the standard. On the other
hand, the code appears to have a defined meaning and guaranteed behavior on
the very implementation which raises the alarms. Example 1:
#include <vector>
#include <iostream>
#include <ostream>

double summate(const double* from, const double* to) {
    double sum = 0.0;
    for (const double* p = from; p != to; ++p) {
        sum += *p;
    }
    return sum;
}

int main() {
    std::vector<double> v;
    v.push_back(3.1415926);
    v.push_back(2.7182818);
    size_t n = v.size();
    std::cout << summate(&v[0], &v[n]) << "\n";
}
In real life, summate() is some legacy function accepting double* pointers,
which has to be interfaced with a new std::vector used for better memory
management. The expression &v[n] is formally UB. With _SECURE_SCL enabled,
VC++ on XP terminates the application with the message "test.exe has
encountered a problem and needs to close. We are sorry for the
inconvenience." If _SECURE_SCL is defined to 0, the program runs nicely and
produces the expected results.
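
For what it's worth, the call can be rewritten so that the past-the-end
pointer is formed by plain pointer arithmetic instead of &v[n]; once a raw
pointer has been obtained, the checked iterators are out of the picture and
the code stays within the standard. A minimal sketch, reusing the summate()
above (and relying on v being non-empty at that point):

#include <vector>
#include <iostream>
#include <ostream>

double summate(const double* from, const double* to); // as defined above

int main() {
    std::vector<double> v;
    v.push_back(3.1415926);
    v.push_back(2.7182818);
    const double* first = &v[0];  // fine: v is known to be non-empty here
    // one-past-the-end pointer obtained by arithmetic, not by indexing v[n]
    std::cout << summate(first, first + v.size()) << "\n";
}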
Example 2:
#include <vector>
#include <iostream>
#include <ostream>

double summate(const double* arr, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i) {
        sum += arr[i];
    }
    return sum;
}

int main() {
    std::vector<double> v;
    // ...
    std::cout << summate(&v[0], v.size()) << "\n";
}
This fails in a similar fashion, but only during an important customer
demo, where the size of the array happens to be zero.
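
A guarded call along these lines avoids the abort, at the cost of
cluttering every call site; a sketch, again reusing the summate() above
(the null pointer is never dereferenced because the count is zero):

#include <vector>
#include <iostream>
#include <ostream>
#include <cstddef>

double summate(const double* arr, size_t n); // as defined above

int main() {
    std::vector<double> v;
    // ...
    // never evaluate &v[0] on an empty vector - that is exactly what the
    // checked build aborts on when the data set happens to be empty
    const double* arr = v.empty() ? NULL : &v[0];
    std::cout << summate(arr, v.size()) << "\n";
}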
It appears that VC++ is warning me against potential problems which might
happen on exotic hardware (or on common hardware in exotic modes), but it
does this in the most unpleasant way possible: crashing, at run time, an
application which would otherwise have run nicely.
> off. I'm not familiar enough with VC++ to be sure, but I rather
> suspect that you have to compile everything with the same value
> for it; in other words, that it affects binary compatibility.
Yes.
> Which means that you'll probably have to either deliver the
> sources, and let the user compile it with whatever options he
> uses, or provide several versions of it---how many I don't know.
Providing the sources is out of the question by company rules. Providing
several versions is out of the question because of lack of resources.
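
For completeness, one workaround that fits those constraints is to keep STL
types out of the exported interface altogether, so that _SECURE_SCL (and
the choice of run-time library) only matters inside the library and never
crosses the binary boundary. A rough sketch of such a boundary, with
made-up names:

/* mylib.h - hypothetical exported interface: plain C types only, so the
   client's compiler settings cannot disagree with the library's about the
   layout of std::vector and friends. */
#include <stddef.h>

#ifdef __cplusplus
extern "C" {
#endif

double mylib_summate(const double* data, size_t n);

#ifdef __cplusplus
}
#endif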
> > > From what I've been led to believe, if you use the default
> > > settings for release and debug builds in Visual Studios, you
> > > don't have binary compatibility between the two. I'll admit
> > In this case, the Checked Iterators are enabled by default in
> > both builds,
> Which is simply false. Checked iterators are enabled if you ask
> for them; you can enable them in all of the builds you do,
> disable them in all, or use any combination you find
> appropriate. (I enable them in all my builds.) You choose the
> options you compile with; there's nothing which forces you one
> way or the other.
Quoting from http://msdn.microsoft.com/en-us/library/aa985965.aspx:
"Checked iterators apply to release builds and debug builds.
....
The default value for _SECURE_SCL is 1, meaning checked iterators are
enabled by default."
> > thus probably being binary compatible (never tried).
> There are a number of different things which can affect binary
> compatibility. This is just one. As I mentioned earlier, the
> only sure answer is to compile everything with exactly the same
> options (which rather argues for delivering sources).
> > I would say this would be the only justification of this
> > feature (to be included in the Release build). OTOH, in order
> > to mix Debug and Release builds in such a way one has to solve
> > the conflicts appearing from multiple runtime libraries first,
> > which is not an easy task (and not solvable by command-line
> > switches AFAIK).
> What libraries you link with very definitely is controlled by
> the command line. How else could they be controlled, since the
> only way to invoke the compiler is by the command line? (All
> Visual Studios does is create the equivalent of a command line
> invocation.) Look at the /M options.
Yes, that's true in theory, but not in practice. It is possible to link
with the run-time libraries meant for another build, but it is not simple.
For starters, the VC++ wizards generate code (which can be deleted, of
course) to redefine the "new" keyword in Debug builds, and that code causes
link errors when attempting to link against the Release-build run-time
library. All modules exchanging dynamically allocated objects had better be
linked to the same run-time library - if the choice is not the default one,
this is very hard to achieve. And by abandoning the debug library one also
abandons the debugging features it offers.
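
For reference, the wizard-generated lines I mean look roughly like this in
MFC-style projects (quoted from memory, so the exact form may differ):

#ifdef _DEBUG
#define new DEBUG_NEW  // DEBUG_NEW expands to a placement form of operator
                       // new recording __FILE__/__LINE__; that overload is
                       // tied to the debug libraries, which is where the
                       // link errors against the Release run-time come from
#endif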
Compared with the simplicity and flexibility of the Linux/ELF dynamic
linking process (cf. libefence), it's a shame. I know your answer - use
static linking and monolithic applications. Unfortunately, that is not
always an option.
[...]
> Of course, both _GLIBCXX_DEBUG and _SECURE_SCL are squarely in
> the implementation namespace. Any code which does anything with
> them is invoking undefined behavior. So the answer to that is:
> don't do it (and don't use any third party code which is so
> poorly written as to do it).
> > This is UB only formally, only from the POV of the C++ standard. These
> > macros are documented in the compilers' documentation, so any
> > code which uses them has implementation-defined behavior.
> Yes, but doesn't the documentation more or less say (or at least
> imply) that they should be set in the command line, and not by
> means of #define's/#undef's in the code. I can imagine that
Quoting from http://msdn.microsoft.com/en-us/library/aa985896.aspx:
"To enable checked iterators, set _SECURE_SCL to 1:
#define _SECURE_SCL 1
To disable checked iterators, set _SECURE_SCL to 0:
#define _SECURE_SCL 0"
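
A small addition on the command-line point: passing the macro with /D keeps
every translation unit (and every client object file) consistent without
touching the sources, which is presumably what the documentation has in
mind. A sketch, assuming the cl.exe command-line driver:

// Preferably set on the compiler command line, so that every translation
// unit sees the same value before any standard header is included, e.g.:
//
//   cl /O2 /EHsc /MD /D_SECURE_SCL=0 mylib.cpp
//
// If it must live in code, it has to come before the first standard header:
#define _SECURE_SCL 0
#include <vector>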
--
Paavo