
Re: Memory leak?



On Tue, Jan 27, 2009 at 01:42:01PM +0000, Dominik Sandjaja wrote:
> I am using a local Tor network (three nodes) for testing purposes. To
> simulate maximum load, I wrote a script that initiated circuits from
> a fourth computer ("penetrator") and closed them as soon as they were
> built. This was done via TorCtl (the Tor control protocol).
> 
> The nodes were fully loaded with the circuit building (as expected on a
> GBit network), but after a while, the middle node (and only that one)
> exits with an out-of-memory error.

Neat.

> Now I wonder whether this is the "normal" behavior or whether it is
> caused by a leak. Although I am pretty aggressively building and closing
> circuits, an OOM shouldn't happen, as all circuits are being closed
> immediately after being "built".
> 
> The problem is reliably reproducible with Tor stable as well as -alpha.
> It only happens on the middle node, independent of the underlying
> hardware.

I assume you're doing this on Linux?

You might try it on, say, FreeBSD to compare. Its malloc is a lot
different from the one in glibc. In many cases, the heap fragments
such that even though we free the memory, the OS never gets it back.
See also
http://archives.seul.org/or/dev/Jun-2008/msg00001.html

If you're good with valgrind, you could also check to see if it's
an actual memory leak. We don't know of any serious memory leaks in
0.2.1.11-alpha (or trunk).

> So, the question still is: is the OOM accepted behavior or an actual error?

Tor dies on out-of-memory. If it's lucky, it notices the problem and
kills itself. Otherwise, the kernel's out-of-memory killer eats it, or
some other thing makes it disappear. We don't try to survive
out-of-memory, because in many cases that's not a decision the process
gets to make.

--Roger