On 07.05.2011 20:26, Mark Ellzey wrote:
> On Sat, May 07, 2011 at 01:07:40AM -0400, Nick Mathewson wrote:
> > *lots of text here*
>
> One of the ways nginx deals with very large streams of data is to
> spool the data to a file. By default this is turned off, but it may be
> a simple solution to all of these problems. The spooling is transparent
> to the user; with libevent we could expose a bit more information (since
> we are dealing with applications using the library directly, rather
> than scripts serving static content), such as where the spool file is
> located.
While this might be nice for full-blown web services, it does not work for embedded systems that have little or no disk storage.
This pattern also rules out effective stream handling, e.g. cases where you don't want to store-then-process, but instead want to perform some operation such as decompression and further post-processing in "real time".
Instead, my suggestion is to process data chunk-wise, and optionally provide a chunk handler that streams to a file; a rough sketch follows below.
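For illustration, here is a minimal sketch of what chunk-wise processing already looks like on the client side with libevent 2.0: evhttp_request_set_chunked_cb() fires as data arrives, so each chunk can either be handled in memory (e.g. fed straight into a decompressor) or appended to a spool file. The chunk_ctx structure, the spool-to-file option, and the host/path used are my own illustrative assumptions, not an existing or proposed libevent interface:

/* Minimal sketch: chunk-wise HTTP processing with libevent 2.0's
 * client API.  The chunked callback runs as each piece of the body
 * arrives, so data is handled immediately instead of buffered whole.
 * NOTE: chunk_ctx and the spool-to-file option are illustrative
 * assumptions, not an existing libevent interface. */
#include <stdio.h>
#include <event2/event.h>
#include <event2/buffer.h>
#include <event2/http.h>

struct chunk_ctx {
    struct event_base *base;
    FILE *spool_fp;            /* NULL => pure in-memory streaming */
};

/* Called once per arriving chunk of the response body. */
static void chunk_cb(struct evhttp_request *req, void *arg)
{
    struct chunk_ctx *ctx = arg;
    struct evbuffer *in = evhttp_request_get_input_buffer(req);
    char buf[4096];
    int n;

    /* Drain whatever arrived in this chunk. */
    while ((n = evbuffer_remove(in, buf, sizeof(buf))) > 0) {
        if (ctx->spool_fp != NULL)
            fwrite(buf, 1, (size_t)n, ctx->spool_fp); /* optional spooling */
        else
            fwrite(buf, 1, (size_t)n, stdout); /* "real-time" path, e.g.
                                                  hand off to a decompressor */
    }
}

/* Called when the whole response has been received. */
static void done_cb(struct evhttp_request *req, void *arg)
{
    struct chunk_ctx *ctx = arg;
    (void)req;
    if (ctx->spool_fp != NULL)
        fclose(ctx->spool_fp);
    event_base_loopexit(ctx->base, NULL);
}

int main(void)
{
    struct event_base *base = event_base_new();
    struct chunk_ctx ctx = { base, NULL };   /* in-memory mode */
    struct evhttp_connection *conn =
        evhttp_connection_base_new(base, NULL, "example.com", 80);
    struct evhttp_request *req = evhttp_request_new(done_cb, &ctx);

    evhttp_request_set_chunked_cb(req, chunk_cb);
    evhttp_add_header(evhttp_request_get_output_headers(req),
                      "Host", "example.com");
    evhttp_make_request(conn, req, EVHTTP_REQ_GET, "/large-stream");

    event_base_dispatch(base);
    evhttp_connection_free(conn);
    event_base_free(base);
    return 0;
}

An analogous per-chunk callback on the server side would let the application decide whether to spool, rather than libevent imposing file I/O on embedded targets that may have no disk at all.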
Cheers,
Roman