
Re: Wiki about usage of vservers still up to date? / What are descriptors?



On Mon, Apr 24, 2006 at 06:50:28PM +0200, Eugen Leitl wrote:
> I see no problem with a customer running Tor as long as they
> pay for the traffic. If the CPU load is considerable, it may
> ask for a beefier host, which would put it into a slightly more
> expensive class (say, 10$ instead of 5$). Of course you can
> get a flat-rate physical server for some 30 EUR/month, but these
> are definitely scams.

Everyone knows that CPU load, memory usage, and network throughput are
naturally limited on a per-host basis.  After all, a computer only has
so much of these resources to allocate.

However, our concern is program-specific restrictions: the number of
file descriptors, network sockets, locks, and so on.  These resources
too are provisioned, and server operators often do not know what their
actual needs will be.  Since these limits exist mainly to ensure that
processes play nicely with the operating system, rather than to keep
processes from hogging critical resources, it is easy to see why people
might view them as "artificial."
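To make this concrete, here is a small sketch (mine, not from the
original mail) of how a process can ask the kernel what its own
per-process limits are, using Python's standard `resource` module on a
Unix-like system:

```python
import resource

# Query this process's soft and hard limits for a few of the
# per-process resources discussed above (Unix/Linux only).
# RLIM_INFINITY means "no limit" for that resource.
for name, which in [
    ("open file descriptors", resource.RLIMIT_NOFILE),
    ("processes/threads", resource.RLIMIT_NPROC),
    ("locked memory (bytes)", resource.RLIMIT_MEMLOCK),
]:
    soft, hard = resource.getrlimit(which)
    print(f"{name}: soft={soft} hard={hard}")
```

On a vserver these numbers are exactly where the "artificial" ceilings
show up: a host with plenty of free RAM and CPU can still refuse to
open a 1025th descriptor if RLIMIT_NOFILE is 1024.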

I think that the only long-term solution to this is to encourage the
development and use of tools that monitor use of these resources, not
just CPU, memory, and bandwidth, on a per-process level.  Perhaps then
people will have a better idea of what their limitations actually are,
and how to request hosting more suitable for their individual
requirements.
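A monitoring tool of the kind suggested above could be as simple as
comparing a process's current consumption against its limit.  The
sketch below (my illustration; the helper name `fd_usage` is made up)
counts open descriptors via Linux's /proc filesystem, so it is
Linux-specific:

```python
import os
import resource

def fd_usage(pid="self"):
    """Count open file descriptors of a process by listing
    /proc/<pid>/fd (Linux only)."""
    return len(os.listdir(f"/proc/{pid}/fd"))

# Compare current usage against the soft limit for this process.
soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
used = fd_usage()
print(f"file descriptors: {used}/{soft} in use")
```

Run periodically against a Tor process's pid, something like this would
tell an operator whether they are anywhere near the ceiling before the
server starts failing to accept connections.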

Geoff
