Re: Preventing Denial of Service Attack In IPC Serialization
On Jun 25, 3:58 am, jlind...@hotmail.com wrote:
> Why do you think that everyone must use a 1Mb buffer?
You are the one who picked 1MB.
Why would "I" pick a maximum buffer size for you?
In your buffer scheme, a size has to be picked.
> Do you understand that the maximum buffer size can be varied, even at
Yes, and as I and others have stated, once you start "picking the
maximum buffer sizes", you still have an issue: if the sender
specifies the buffer size, it could force the receiver to allocate a
huge buffer.
> Do you understand that this has nothing to do with C++ serialization?
I disagree with this. As I mentioned in one of my posts, the most
obvious of several problems is where the sender sends a large
unsigned integer to trick the receiver into invoking operator new()
on that large value:
// At sender
s << SET_BUFFER_SIZE;
s << ~0UL;                          // maximum unsigned long

// At receiver
unsigned long int buffer_size;
s >> buffer_size;
char *p = new char[buffer_size];    // attacker-controlled allocation
Your scheme called for allocation of a buffer against which
serialization would be performed. You said that the buffer size
should be limited. I asked what the limit was, and you said that it
should be specified by the "application programmer". The problem with
your scheme, no matter who chooses this limit, is that it will have to
be "big enough". You picked 1MB as an example. That is too much for
the PDAs on which our code will run. It should be evident that any
value chosen will be inappropriate...unless the values chosen are
interwoven with the serialization code itself.
> Perhaps you could enlighten the rest of us as to your solution,
> since it "works well enough".
My solution involves augmenting the "Archives" of the serialization
framework with a stack<unsigned long int>:
stack<unsigned long int> limits;
void push_limit (unsigned long int limit);
unsigned long int pop_limit ();
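The post does not show the Archive internals, so the following is only
a minimal sketch of how that limit stack might behave; the class name
LimitedArchive and the consume() hook are my own inventions, not part
of the actual framework. Charging every active limit at once inside
consume() has the same net effect as decrementing the enclosing limit
by whatever was subtracted from the inner one when the inner limit is
eventually popped:

```cpp
#include <stdexcept>
#include <vector>

// Hypothetical sketch of an archive augmented with a stack of limits.
class LimitedArchive {
    std::vector<unsigned long> limits_;   // used as a stack
public:
    void push_limit(unsigned long limit) { limits_.push_back(limit); }

    // Pop the current limit and return its unconsumed remnant.
    unsigned long pop_limit() {
        unsigned long remnant = limits_.back();
        limits_.pop_back();
        return remnant;
    }

    // Charge n bytes of incoming data against every active limit;
    // throw if any level would be depleted.
    void consume(unsigned long n) {
        for (unsigned long &limit : limits_) {
            if (n > limit)
                throw std::runtime_error("serialization limit exceeded");
            limit -= n;
        }
        // Empty stack: no limit at all, the legitimate unlimited case.
    }
};
```

Every extraction operator would call consume() with the number of bytes
it is about to read before actually reading them, so the exception
fires before any oversized allocation takes place.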
The premise of my solution is that, in many cases, the serialized
object itself knows best how big it would be under "reasonable"
circumstances. For example, I have an object that consists of four
smaller objects: two 64-bit structs and two list<string>:
struct Foo
{
    DA rda;   // 64 bits
    DA cda;   // 64 bits
    TA rta;   // list<string>
    TA cta;   // list<string>
};
A reasonable size for a Foo is 8 bytes, plus 8 bytes, plus...whatever,
so long as it is not huge. The 16 bytes are easily calculated. The
"whatever, so long as it is not huge" is the critical part. I know
the nature of a Foo, and I know that, normally, a list<string> should
be allowed to be "as big as it needs to be."
However, in this particular context, unlimited is not appropriate. I
know that each of these lists should not consume much more than 2KB
each. So 2KB * 2 = 4KB, plus the 16 bytes...but since we are
estimating anyway, 4KB should be sufficient for the entire Foo.
In that case, just before a Foo is about to serialize itself from a
Socket, it declares its own limit:
Socket & operator >> (Socket &s, Foo &foo)
{
    s.push_limit(4096);   // a Foo should fit in 4KB
    s >> foo.rda;
    s >> foo.cda;
    s >> foo.rta;
    s >> foo.cta;
    s.pop_limit();
    return s;
}
The Foo structure will build itself by serializing from the socket,
depleting the limit that it specified in push_limit() piece by
piece. If the limit is completely depleted before the Foo is fully
constructed, an exception is thrown. If no exception is thrown, then
the Foo was successfully read from the socket. At that point, the next
top-of-stack is decremented by the amount that was subtracted from the
current top of stack, and the remnant that is the current top of stack
is popped. If the stack ever becomes empty, we are back to the
original situation: there is no limit at all, which is a legitimate
case in some circumstances.
There are some points to note about this scheme:
1. The serialization framework itself determines limits because only
it knows what the limits should be.
2. The "application programmer" is most relieved of the burden/tedium
of choosing "maximum buffer sizes"
3. There are no "maximum buffers". If anything, there is only the,
say, 1500-byte Ethernet payload.
#3 is important, especially on a PDA with only 64MB of RAM.
However, there are some flaws with this scheme that might be apparent
to anyone who has ever developed a large-scale serialization
framework. Naturally, it is often optimal for an object to be
deserialized from an archive by construction only, not by assign-
after-construct. Some objects have heavyweight default construction,
and if one uses this scheme to deserialize, say, a 1-million-element
list<Heavyweight_Class_With_Massive_Constructor>, the performance
penalty will be interesting indeed.
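To make that penalty concrete, here is a small hypothetical
illustration; the Heavy type and its counter are mine, standing in
for Heavyweight_Class_With_Massive_Constructor, and the raw-array
"archive" is just a stand-in for the real stream:

```cpp
#include <list>

// Element type that counts default constructions to make the cost
// visible; a real heavyweight class would do expensive work instead.
struct Heavy {
    static int default_ctors;
    int value;
    Heavy() : value(0) { ++default_ctors; }   // the costly path
    explicit Heavy(int v) : value(v) {}       // "construct from archive"
};
int Heavy::default_ctors = 0;

// Assign-after-construct: one default construction per element.
std::list<Heavy> read_assign(const int *data, int n) {
    std::list<Heavy> out;
    for (int i = 0; i < n; ++i) {
        Heavy h;            // default-construct...
        h.value = data[i];  // ...then overwrite with the real state
        out.push_back(h);
    }
    return out;
}

// By-construction: build each element directly from the data,
// paying no default constructions at all.
std::list<Heavy> read_construct(const int *data, int n) {
    std::list<Heavy> out;
    for (int i = 0; i < n; ++i)
        out.emplace_back(data[i]);
    return out;
}
```

With a million elements, read_assign pays for a million heavyweight
default constructions that read_construct avoids entirely.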
There are other problems, which I do not care to mention, but the
solution works "well enough".
-Le Chaud Lapin-
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]