Re: Preventing Denial of Service Attack In IPC Serialization
On Jun 13, 8:51 am, "Nevin :-] Liber" <n...@eviloverlord.com> wrote:
> For the sake of argument, let's talk about sending a non-simple
> structure, such as a vector<string>.
Ok, so in this case, we would probably serialize a vector template by
sending the count of elements in the vector first, followed by the
serialization of each individual element.
At the source end of the connection:
socket << v1.size();
// Now export "count" strings.
for (std::vector<std::string>::size_type i = 0; i < v1.size(); ++i)
    socket << v1[i];
At the target end of the connection:
// Import the element count, reserve vector space for "count"
// strings, then import each of the "count" strings one-by-one.
A serialization constructor for vector<> is probably the best way to
do this, but this is the general idea.
> Even if you determine that it would be a DoS attack in requesting too
> much memory, how exactly do you reject a message?
Well...that's just it. If one takes generic pre-written serialization
code for class File and tries to use it against Socket, it will not be
known that there is a DoS attack. And whether there is an attack or
not, excessive memory allocation, accidental or intentional, will go
undetected. There will be no point in the code where, just before the
invocation of operator new(), one will be able to say...
5MB!!!!...that's too much...something must be wrong.
> What if it is a different DoS attack, such as a bad count of elements
> (either in a given string and/or in the vector itself)?
Correct observation. Here checking of the imported data makes sense.
Notice the distinction between making space for too much data using
operator new(), and checking the data after it has been received. In
your vector<string> example above, the vector might contain the names
of people that are suspected of terrorist activity and should be
placed under surveillance. But if one of the names is George W. Bush,
then someone probably made a gross mistake or is pulling a prank, so
the object containing the vector would throw an exception due to
faulty data. It is important to note that such checking would
normally occur after space has already been allocated for the entire
vector.
> W/o framing, checksums, etc., you are pretty much hosed, whether or not
> you use serialization. How do you plan on syncing up with the next
> message?
One could take a C++ object and implant it on the wire using any of a
number of formats. FYI, in my own scheme, if I have to send an
object, I serialize its elements one-by-one to a buffer. I follow
this principle recursively until the field of an object is a vector or
a scalar, at which point it is trivial to write the elements with
appropriate counts of total elements, elements in this fragment of the
vector, etc. It should be evident that the format on the wire does
not change the problem.
> And if you add framing and checksums, you are talking about packets, not
> just raw sockets...
I am puzzled why some think that the intermediate state of a
serialized object in transit has any effect on the problem outlined in
my original post. No matter what the intermediate state, no matter if
it is sent over packets that allow only 16 bytes total at a time, the
problem still persists. In the end, the source is sending a
vector<string>. The target is receiving a vector<string>. The source
determines how many elements are in the vector<>. And the target,
without any arbitrary thresholds, imports each string into the vector
one-by-one. If the transmission rate of the link is 1 bit every 15
years, that does not matter. After a long time, if the source
declares that there are 1,000,000 strings, each of length 1,000; then
1GB of data will be sent. This will happen no matter what the format
of the data is on the wire.
-Le Chaud Lapin-
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]