Re: Preventing Denial of Service Attack In IPC Serialization
On Jun 20, 10:35 am, jlind...@hotmail.com wrote:
I can at least say that, thanks to your clear explanation of your
point of view, I am now sure that you were thinking what I thought you
were thinking, whereas before, I was not 100% sure. I still think,
however, that it does not address the fundamental issues.
If the sender is in a situation where it needs to send a message to
Here's an example. Let's say the sender needs to be able to send really
long std::strings. You could have the sender request a handle from
the receiver, and then send the string in several substrings, each
time accompanied by the handle. The receiver uses the handle to
identify previously sent substrings, joins them together, and
eventually the whole string will have been transferred to the
receiver. All this takes place in the _application_, far removed from
the serialization code.
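The protocol above can be sketched as follows. This is a minimal illustration, not anyone's actual implementation; the names (Receiver, open_handle, append_chunk, finish) and the 1 MB cap are my own assumptions. The point is that the size policy lives entirely in application code, outside any serialization framework.

```cpp
// Hypothetical sketch of application-level chunked transfer.
// Receiver, open_handle, append_chunk, finish, and kMaxTotal are
// invented names; the per-handle cap is an assumed policy choice.
#include <cstddef>
#include <map>
#include <stdexcept>
#include <string>

class Receiver {
public:
    // Hands out a handle; the receiver tracks one buffer per handle.
    int open_handle() { buffers_[next_] = ""; return next_++; }

    // Appends one substring. The cap is enforced here, in application
    // code, so a hostile sender cannot force unbounded allocation.
    void append_chunk(int handle, const std::string& chunk) {
        std::string& buf = buffers_.at(handle);  // throws if handle unknown
        if (buf.size() + chunk.size() > kMaxTotal)
            throw std::length_error("per-handle size cap exceeded");
        buf += chunk;
    }

    // Returns the reassembled string and releases the handle.
    std::string finish(int handle) {
        std::string s = buffers_.at(handle);
        buffers_.erase(handle);
        return s;
    }

private:
    static const std::size_t kMaxTotal = 1u << 20;  // assumed 1 MB policy
    std::map<int, std::string> buffers_;
    int next_ = 0;
};
```

A sender would call open_handle once, append_chunk repeatedly with the handle, and the receiver's finish would yield the whole string; none of this touches the serialization layer.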
Excellent example. Let us say that, in a non-adversarial situation, the
typical length of a string is 64 bytes. Let us say that, in the
adversarial situation, the length of a string jumps to 5 MB.
Now before you jump in and say "Aha! DOS vulnerability!", just
remember that the protocol above has absolutely nothing to do with the
serialization code. The DOS vulnerability is the same as that of _any_
network server, regardless of whether it uses C++ serialization
frameworks or not. The DOS vulnerability can be addressed in any of a
number of ways, without touching the serialization code _at all_. Not
even a single line of it.
That is the whole point of my original post: to point out that blind
use of serialization invites the vulnerability. Also, I state again:
there are many, many programmers who not only want, but expect, to be
able to use serialization code written for a File "archive" against a
Socket "archive".
At the very least...
...the importance of this thread is to let them know that they should
not, because right now, that is exactly what they are doing. So if
anything, this thread exposes a behavior that might be best avoided.
However, personally, I am not entirely convinced that one cannot have
his cake and eat it too, after discussions with Jeff Koftinoff.
First, surely you will admit that there are many programmers who think
that they can take their serialization code, write it once for a base
class (let's call it Archive), and then use it later against any class
that derives from Archive, including a Socket.
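The pattern being described might look like the following sketch. The names (Archive, FileArchive, serialize_record) are placeholders of my own, not code from any real framework; the sketch only shows why the routine itself cannot distinguish a trusted file from a hostile peer.

```cpp
// Hypothetical sketch of the "write once against a base class" pattern.
// Archive, FileArchive, and serialize_record are invented names.
#include <string>

struct Archive {                          // base class the routine targets
    virtual void write(const std::string& s) = 0;
    virtual std::string read() = 0;
    virtual ~Archive() {}
};

struct FileArchive : Archive {            // backed by trusted, local data
    std::string data;
    void write(const std::string& s) { data += s; }
    std::string read() { return data; }
};

// A SocketArchive deriving from Archive would plug into the very same
// routine, but its read() would hand back attacker-controlled bytes.

// Written once against the base class; never sees the concrete type.
void serialize_record(Archive& a, const std::string& value) {
    a.write(value);
}
```

Because serialize_record sees only Archive&, substituting a socket-backed derivative changes the trust model without changing a single line of the serialization code, which is exactly the hazard under discussion.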
There are also many programmers who think it's ok to sprinkle explicit
deletes all through their code. So what?
We should tell them that they should not use serialization code
written for an "archive" class for a "socket". A good place to start
is to tell the author(s) of Boost.Asio, for example. All novice
programmers who look up to Boost programmers might then take note.
That said, it is not clear to me how you would define where one 1MB
message begins and the other ends.
I'm not following you here. A message begins wherever the application
wants it to begin. It isn't until the application actually has a
complete message in its hands that it invokes the deserialization
code on the body of the message.
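One common way the application can mark where a message begins and ends is a length prefix, checked against a policy limit before any body bytes are accepted. This is a minimal sketch under my own assumptions (the helper names frame/unframe and the 1 MB limit are invented); only after unframe returns a complete body would the deserialization code be invoked.

```cpp
// Hypothetical sketch of application-level message framing.
// frame, unframe, and kMaxFrame are invented names; the 1 MB limit
// is an assumed policy, enforced before any allocation for the body.
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <string>

const std::uint32_t kMaxFrame = 1u << 20;  // assumed 1 MB policy limit

// Encode: 4-byte big-endian length prefix followed by the body.
std::string frame(const std::string& body) {
    std::uint32_t n = static_cast<std::uint32_t>(body.size());
    std::string out;
    for (int shift = 24; shift >= 0; shift -= 8)
        out.push_back(static_cast<char>((n >> shift) & 0xff));
    return out + body;
}

// Decode one frame from `wire` starting at `pos`; advances `pos`.
// Rejects oversized frames before allocating anything for the body.
std::string unframe(const std::string& wire, std::size_t& pos) {
    if (wire.size() - pos < 4) throw std::runtime_error("short header");
    std::uint32_t n = 0;
    for (int i = 0; i < 4; ++i)
        n = (n << 8) | static_cast<unsigned char>(wire[pos++]);
    if (n > kMaxFrame) throw std::length_error("frame exceeds policy");
    if (wire.size() - pos < n) throw std::runtime_error("short body");
    std::string body = wire.substr(pos, n);
    pos += n;
    return body;  // only now would the application deserialize this body
}
```

The deserialization code never sees a partial or oversized message: the length check happens in the application's framing layer, which is consistent with the point that the message boundary is the application's business.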
So I guess that is your solution: do not deserialize from an
archive. Or, if you do, do not do it the way you might do it from a
locally stored database, for example?
-Le Chaud Lapin-
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]