
Re: huge pages, was where are the exit nodes gone?

     On Tue, 13 Apr 2010 19:10:37 +0200 Arjan
<n6bc23cpcduw@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>Scott Bennett wrote:
>>      BTW, I know that there are *lots* of tor relays running on LINUX
>> systems whose operators are subscribed to this list.  Don't leave Olaf and
>> me here swinging in the breeze.  Please jump in with your LINUX expertise
>> and straighten us out.
>I'm not an expert, but I managed to perform some google searches.
>From that website:
>libhugetlbfs is a library which provides easy access to huge pages of
>memory. It is a wrapper for the hugetlbfs file system. Applications can
>use huge pages to fulfill malloc() requests without being recompiled by
>using LD_PRELOAD.

     [Aside to Olaf:  oh.  So forcing the use of OpenBSD's malloc() might
prevent the libhugetlbfs stuff from ever knowing that it was supposed to
do something. :-(  I wonder how hard it would be to fix the malloc() in
libhugetlbfs, which is most likely derived from the buggy LINUX version.
Does libhugetlbfs come as source code?  Or is the use of LD_PRELOAD simply
causing LINUX's libc to appear ahead of the OpenBSD version, in which case
forcing reordering of the libraries might work?  --SB]
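
     [For reference, libhugetlbfs is normally wired in from the command
line; a minimal sketch of typical usage (the pool size and program name
are illustrative, and the exact library path varies by distribution):

```shell
# Reserve a pool of huge pages for the kernel to hand out (needs root;
# 128 pages of 2 MB = 256 MB here, purely an example figure):
echo 128 > /proc/sys/vm/nr_hugepages

# Confirm what the kernel actually reserved:
grep -i huge /proc/meminfo

# Run an unmodified binary with its malloc() heap backed by huge pages;
# HUGETLB_MORECORE tells libhugetlbfs to feed the heap from huge pages:
LD_PRELOAD=libhugetlbfs.so HUGETLB_MORECORE=yes ./some_program
```

Note that LD_PRELOAD only places libhugetlbfs ahead of libc in symbol
resolution; it does not swap out libc wholesale, which bears on the
library-ordering question above.]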
>Someone is working on transparent hugepage support:
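
     [For anyone wanting to check whether a given kernel carries that
transparent hugepage work, the patch series exposes sysfs knobs; a hedged
sketch, assuming the paths as posted in the series (they may differ on
any particular kernel build):

```shell
# Whether transparent hugepages are compiled in and enabled
# ("always", "madvise", or "never" on kernels carrying the patches):
cat /sys/kernel/mm/transparent_hugepage/enabled

# How much anonymous memory is currently backed by huge pages:
grep AnonHugePages /proc/meminfo
```
]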

     I've now had time to get through that entire thread.  I found it
kind of frustrating reading at times.  It seemed to me that in one of
the very first few messages, the author described how they had long
since shot themselves in the feet (i.e., by rejecting the algorithm of
Navarro et al. (2002), which had already been demonstrated to work on an early
FreeBSD 5 system modified by Navarro's team) on emotional grounds (i.e.,
"we didn't feel its [Navarro's method's] heuristics were right").  They
then spent the rest of the thread squabbling over the goals and
individual techniques of Navarro et al. that they had reinvented, while
not admitting to themselves that that was what they had done, and over
the obstacles they were running into because of the parts that they had 
*not* adopted (yet, at least).  At times, it appeared that the author of
the fairly large patch that implemented the improvements to the hugepage
system was arguing directly from Navarro et al. (2002) with one of the
other participants.  Shades of Micro$lop's methods and not-invented-here
attitude.  What a bummer to see LINUX developers thinking in such denial!
So if the guy who had written that early kernel patch for LINUX (the thread
was a year and a half ago) has persisted in his implementation, he may have
the bugs out of it by now.  In the long run, his design (or lack thereof)
should yield a significant improvement for some large processes on LINUX,
but the way it is done won't be at all pretty.
     Navarro et al. had actually implemented their method not on an
x86-type of system, which is the only type so far supported for superpages
in FreeBSD 7 (not sure about 8.0), but on an Alpha machine, using the four
page sizes offered by that hardware.  The method implemented by the OP of
the thread, by contrast, used a "hugepage" size (2 MB) that is not supported
by the hardware, except for pages in instruction (text) segments.  I didn't
see anywhere in the thread an explanation of how their software pages are
made to work with the hardware, but I would imagine they must combine two
software hugepages to make a single 4 MB page as far as the address
translation circuitry is concerned.  It left me wondering much of the time
which processor architecture they were working with, though it eventually
became clear that they were indeed talking about x86 processors.  The
others in the thread also voiced opinions that the method would prove
not to be easily portable to other hardware architectures, unlike the
method of Navarro et al.
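     [On LINUX, the huge-page sizes the kernel believes the hardware
supports can be inspected directly; a hedged sketch (sysfs layout as in
then-recent kernels, and the output varies by CPU and kernel config):

```shell
# Default huge page size the kernel has chosen:
grep Hugepagesize /proc/meminfo

# All sizes the kernel exposes pools for (one directory per size,
# e.g. hugepages-2048kB on x86 with PAE, hugepages-4096kB without):
ls /sys/kernel/mm/hugepages/
```
]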
     Navarro et al. (2002) found that their array of test applications did
not all benefit at the same superpage size.  Which superpage size crossed
the threshold into reduced TLB thrashing varied from application to
application.  Some benefited after the first promotion from 8 KB base pages
to 64 KB superpages.  Others benefited after the further promotion to
512 KB superpages.  Still others' performance characteristics did not
improve much until after the third promotion to 4 MB superpages.  Which
size causes the big improvement for an application depends almost entirely
upon the memory access patterns of that application.  It remains to be seen
whether an application that doesn't speed up in FreeBSD tests until it has
been promoted to 4 MB superpages will speed up at all with LINUX's 2 MB
hugepages.
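     [That threshold is largely a question of TLB reach, i.e., number of
TLB entries times page size; a back-of-envelope sketch (the 64-entry TLB
is an assumed round figure for illustration, not taken from the paper):

```shell
# TLB reach = entries * page size.  64 entries is a made-up round
# number; real TLBs of that era ranged from dozens to hundreds.
for size_kb in 8 64 512 4096; do
    awk -v s="$size_kb" \
        'BEGIN { printf "%4d KB pages: 64-entry TLB covers %6d KB\n", s, 64 * s }'
done
```

With 8 KB base pages such a TLB covers only 512 KB of address space, while
at 4 MB superpages it covers 256 MB, which is why the promotion size that
matters depends on each application's working set and access pattern.]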
     I'm still tired.  I think I might take a short nap again, so I might
not post replies to anyone's followups on this for a few hours.  (Yawn...)

                                  Scott Bennett, Comm. ASMELG, CFIAG
* Internet:       bennett at cs.niu.edu                              *
* "A well regulated and disciplined militia, is at all times a good  *
* objection to the introduction of that bane of all free governments *
* -- a standing army."                                               *
*    -- Gov. John Hancock, New York Journal, 28 January 1790         *