Re: [tor-bugs] #2334 [Torouter]: Torouter breaks with large cached-descriptors[.new] files
#2334: Torouter breaks with large cached-descriptors[.new] files
----------------------+-----------------------------------------------------
Reporter: karsten | Owner: ioerror
Type: defect | Status: new
Priority: blocker | Milestone:
Component: Torouter | Version: Tor: 0.2.1.26
Keywords: | Parent:
----------------------+-----------------------------------------------------
Comment(by nickm):
It would be neat to have a feature that says, "Don't use more than X bytes
of disk space."
Do we already store last-time-mentioned information for descriptors? If
not, we could start recording it, and implement a "remember as many as we
have room for, from most recently mentioned to least recently mentioned"
policy.
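As a purely hypothetical sketch of that policy, assuming we track a
last-listed timestamp per descriptor: sort newest-first and keep
descriptors until a byte budget is exhausted. None of these names
(desc_t, last_listed, choose_descs_to_keep) are real Tor identifiers.

    #include <stdlib.h>
    #include <time.h>

    typedef struct desc_t {
      time_t last_listed;  /* last time a network status mentioned this */
      size_t len;          /* size of the descriptor on disk */
      /* ... digest, body offset, etc. ... */
    } desc_t;

    /* qsort comparator: most recently mentioned first. */
    static int
    compare_last_listed_(const void *a, const void *b)
    {
      const desc_t *da = *(const desc_t *const *)a;
      const desc_t *db = *(const desc_t *const *)b;
      if (da->last_listed > db->last_listed) return -1;
      if (da->last_listed < db->last_listed) return 1;
      return 0;
    }

    /* Flag the newest descriptors for retention until keeping one more
     * would exceed max_bytes; the caller drops the unflagged ones at
     * the next rebuild.  Returns the number of bytes retained. */
    static size_t
    choose_descs_to_keep(desc_t **descs, size_t n_descs,
                         size_t max_bytes, int *keep_out)
    {
      size_t used = 0, i;
      qsort(descs, n_descs, sizeof(desc_t *), compare_last_listed_);
      for (i = 0; i < n_descs; ++i) {
        keep_out[i] = (used + descs[i]->len <= max_bytes);
        if (keep_out[i])
          used += descs[i]->len;
      }
      return used;
    }

(keep_out[i] refers to the i'th descriptor in sorted order.)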
The problem with eliminating the .tmp file is that the regular cached-
descriptors file is pulled into RAM via mmap, and that's where the older
descriptors live in RAM. If we started replacing cached-descriptors, it
wouldn't actually get removed from the disk until we closed the file...
and if we closed the file, we wouldn't have the older descriptors in RAM
any more. So if we want to avoid using the .tmp file, we'll need to be
able to fit all of the descriptors into RAM as we rebuild cached-
descriptors.
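To make that constraint concrete, here's a minimal standalone sketch
(not Tor code) of why the .tmp dance exists: rename() only replaces the
directory entry, and the old inode's blocks stay allocated as long as
our mapping references them.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    main(void)
    {
      struct stat st;
      int fd = open("cached-descriptors", O_RDONLY);
      if (fd < 0 || fstat(fd, &st) < 0)
        return 1;
      char *descs = mmap(NULL, (size_t)st.st_size, PROT_READ,
                         MAP_PRIVATE, fd, 0);
      if (descs == MAP_FAILED)
        return 1;

      /* Atomically swap in the rebuilt store.  The old file's blocks
       * are NOT freed yet: our mapping still pins the old inode. */
      rename("cached-descriptors.tmp", "cached-descriptors");

      /* The older descriptors remain readable through 'descs' here... */

      munmap(descs, (size_t)st.st_size);  /* ...and only once we drop   */
      close(fd);                          /* the mapping can the kernel */
      return 0;                           /* reclaim the old blocks.    */
    }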
OTOH, we might be able to save disk space while rebuilding if we started
by deleting the cached-descriptors.new file: all of _those_ descriptors
are in heap memory.
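In code form the suggested ordering is trivial; a sketch, assuming the
journal's contents really are fully mirrored on the heap as described:

    /* Free the journal's disk space first: its descriptors are all in
     * heap memory, so nothing is lost by unlinking it up front. */
    unlink("cached-descriptors.new");
    /* ... then write every retained descriptor to
     *     cached-descriptors.tmp and rename() it over
     *     cached-descriptors as usual ... */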
Also, if we felt particularly nutty, we could split cached-descriptors
into a few separate files (say, cached-descs/0 .. cached-descs/F) so that
rebuilding any particular file wouldn't need much temporary space.
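A sketch of how that split might route descriptors, assuming we bucket
on the high nibble of the descriptor's SHA-1 digest (the bucket choice
is my assumption; any uniform 16-way hash would do):

    #include <stdio.h>

    #define DIGEST_LEN 20  /* SHA-1, as Tor uses for descriptor digests */

    /* Write "cached-descs/<0..F>" into fname_out, picking the bucket
     * from the first hex digit of the digest. */
    static void
    bucket_filename(const char digest[DIGEST_LEN],
                    char *fname_out, size_t fname_len)
    {
      static const char hex[] = "0123456789ABCDEF";
      snprintf(fname_out, fname_len, "cached-descs/%c",
               hex[((unsigned char)digest[0]) >> 4]);
    }

Since digests are uniformly distributed, each bucket holds about 1/16 of
the store, so rebuilding any one bucket needs only about 1/16 of the
temporary space.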
---
A danger to consider: flash memory wears out faster under heavy write
load, so we do not want to rebuild more often than necessary.
---
Another way to approach this is to look at our current logic for
rebuilding the store. If the store is big enough (over 64K), we rebuild
it whenever the journal length is more than half the store length, OR
the number of bytes we know have been dropped from the store is at least
half the store length.
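That condition, as described (field and function names are modeled
loosely on the descriptor-store code, simplified for illustration):

    #include <stddef.h>

    typedef struct desc_store_t {
      size_t store_len;      /* bytes in cached-descriptors */
      size_t journal_len;    /* bytes in cached-descriptors.new */
      size_t bytes_dropped;  /* bytes believed obsolete in the store */
    } desc_store_t;

    static int
    should_rebuild_store(const desc_store_t *store)
    {
      if (store->store_len > (1 << 16))  /* "big enough": over 64K */
        return (store->journal_len > store->store_len / 2 ||
                store->bytes_dropped > store->store_len / 2);
      return 0;  /* small stores use a separate threshold, omitted here */
    }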
First off, I don't know whether the "bytes dropped" count is accurate,
or whether it includes bytes dropped from the journal. I think the
answer is "yes" on both counts, but if it isn't, we should fix that.
Assuming that the totals are accurate, we might do well to have the logic
take into account our maximum disk space.
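For instance (purely a sketch; max_disk_bytes is an imagined option, and
the trade-off against flash wear noted above still applies):

    /* Extend the should_rebuild_store() sketch above with a hard cap:
     * once store + journal exceed the configured budget, rebuild and
     * shed the least recently mentioned descriptors. */
    static int
    should_rebuild_store_capped(const desc_store_t *store,
                                size_t max_disk_bytes)
    {
      if (store->store_len + store->journal_len > max_disk_bytes)
        return 1;
      return should_rebuild_store(store);
    }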
--
Ticket URL: <https://trac.torproject.org/projects/tor/ticket/2334#comment:4>
Tor Bug Tracker & Wiki <https://trac.torproject.org/>
The Tor Project: anonymity online
_______________________________________________
tor-bugs mailing list
tor-bugs@xxxxxxxxxxxxxxxxxxxx
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-bugs