Re: Preventing Denial of Service Attack In IPC Serialization
On May 30, 8:28 am, "Sergey P. Derevyago" <non-exist...@iobox.com>
wrote:
> Le Chaud Lapin wrote:
> > My gut feeling is that I will eventually discover that no solution
> > feels right, but thought I would ask before giving up.
> IMHO you have to use some kind of digital signature. Corrupted
> sequences will also be filtered out.
We have no problems with our secure links, which we already have in
place. With such links, there is nothing a perpetrator can do to alter
packets or inject bogus packets into the communication stream to trick
the recipient into doing a massive new[], because the security
mechanisms, which include digital signatures, will cause the packet to
be dropped.
The problem is when the link is insecure. It ruins the entire
serialization framework. Note that the ruin happens not just for
strings, but for any situation where there is a vector of elements and
the source of an object is about to convey to the target the size of
that vector before serializing the vector's individual elements.
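To make the failure mode concrete, here is a minimal sketch of the
usual size-prefixed deserialization pattern (the names Socket,
read_u32, and read_raw are invented for illustration, not the actual
framework API), with the wire count standing in for what an attacker on
an insecure link would forge:

#include <cstdint>
#include <cstring>
#include <vector>

// Stand-in for the real archive/socket; here it simply plays the attacker.
struct Socket {
    std::uint32_t read_u32() { return 0xFFFFFFFFu; }            // forged count
    void read_raw(void* dst, std::size_t n) { std::memset(dst, 0, n); }
};

struct Foo { double a, b; };

void deserialize(Socket& s, std::vector<Foo>& v)
{
    std::uint32_t count = s.read_u32();   // attacker-controlled value
    v.resize(count);                      // tens of gigabytes requested before
                                          // a single element has been validated
    for (std::uint32_t i = 0; i < count; ++i)
        s.read_raw(&v[i], sizeof(Foo));
}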
Note again that this is a framework, not a specific application, so I
cannot, for example, in the context of each serializable class, specify
an arbitrary limit on the number of elements involved, because it would
be, well...arbitrary. This is especially true if the class contains a
vector template, as the size of each element in the vector would not be
known; even if some arbitrary limit were set on the size of the array,
say 65,536, each element of the array could be an object with multiple
members, and it is conceivable that one of those members would itself
be an array. This problem presents itself recursively: if N is the
limit on the number of elements allowed to be serialized for any one
vector, then across L levels of nesting there is an exponential
explosion in the memory that new[] can be asked to supply, on the order
of N^L, so that even for L = 4 and N = 65,536, N^L is 2^64, and we're
back where we started.
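To illustrate the recursion argument with concrete (invented) types,
even a per-vector cap of N buys nothing once vectors nest:

#include <vector>

// Hypothetical nesting, L = 4 levels deep. If every individual vector is
// capped at N = 65,536 elements, a sender who maxes out each level can
// still demand on the order of N^4 = 2^64 leaf objects; the cap on any
// single vector says nothing about the aggregate allocation.
struct Level4 { char payload[16]; };
struct Level3 { std::vector<Level4> v; };   // up to N Level4 objects
struct Level2 { std::vector<Level3> v; };   // up to N^2 Level4 objects
struct Level1 { std::vector<Level2> v; };   // up to N^3 Level4 objects
// A top-level std::vector<Level1>, itself capped at N, still admits N^4 leaves.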
But aside from the details, it should be intuitively apparent that
trying to impose these artificial limits ruins the regularity of the
entire model, which, again, is a framework and not a specific
application. As we all know, arbitrariness is a red flag in good
design.
Consider defining the serialization function for a List<>:
Socket s;
List<Foo> l;
s << l;
One would not be able to specify a limit on the element count of l
without knowing how much space a Foo will take up. Big Foo, small
limit; small Foo, large limit. And Foo itself could contain members
that contain List<>, and so on, recursively.
The more I think about this problem, the more I am beginning to
believe that it is better to leave the classes themselves alone and
focus on the memory management itself. At least the regularity would
be preserved.
In that case, there are two possible "solutions": one that will not
work, and one that might.
Putting a limit on the "archive" object (Socket in this case) won't
work, because such a limit would be meaningless for a long-lived
application that is meant to acquire and release terabytes of memory
over its natural life.
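For what it's worth, the rejected per-archive quota would look
something like this (names and mechanism are hypothetical, just to show
the problem): total_read has no sensible ceiling for a connection that
legitimately streams terabytes over its lifetime.

#include <cstddef>
#include <cstdint>
#include <stdexcept>

// Hypothetical per-archive byte budget -- the "solution" that doesn't work.
class Socket {
    std::uint64_t total_read = 0;
    std::uint64_t budget;                  // what value could this sensibly be?
public:
    explicit Socket(std::uint64_t b) : budget(b) {}
    void note_read(std::size_t n)
    {
        total_read += n;                   // an honest, long-lived link that
        if (total_read > budget)           // moves terabytes will trip this
            throw std::runtime_error("archive budget exceeded");
    }
};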
That leaves memory allocation against the thread itself. At any given
instant, on a server machine with 4 GB of RAM and 500 client
connections, if one server thread is hogging 2.5 GB for itself, there
is probably a breach in progress. In that case, the memory allocation
should fail with an exception, the server thread will hard-abort, the
evil client's connection will be broken, and the only entity unhappy at
that point will be the evil client.
Unfortunately, memory allocation quotas on most OS's, if I am not
mistaken, are applied on a per-process, not per-thread, basis.
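Lacking OS support, one way to approximate a per-thread quota in user
space is a thread_local byte counter that the framework's own
allocations consult. The following is only a sketch of that idea (the
names and the 256 MB figure are invented), not a claim about any
particular OS or library facility.

#include <cstddef>
#include <new>

namespace per_thread {

// Per-thread allocation budget, maintained entirely in user space.
thread_local std::size_t allocated = 0;
thread_local std::size_t budget    = 256u * 1024u * 1024u;   // per server thread

// The serialization layer would route its allocations through here instead
// of calling raw new[]; exceeding the budget aborts only this thread's work,
// so the evil client's connection dies while everyone else carries on.
inline void* alloc(std::size_t n)
{
    if (n > budget - allocated)
        throw std::bad_alloc();
    allocated += n;
    return ::operator new(n);
}

inline void dealloc(void* p, std::size_t n) noexcept
{
    allocated -= n;
    ::operator delete(p);
}

} // namespace per_thread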
-Le Chaud Lapin-
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]