Re: SV: Progress Update
Bjarke Hammersholt Roune wrote:
> > Note that Unix ran for more than 20 years without trees for directories.
> > :-)
> Yes, but I don't think that should be seen as an advantage.
Of course, I'm not seeing the 20 years as an advantage, but the reality
is that many big servers run slow and bloated commercial versions of
Unix, like Solaris, that use the badly-performing ufs filesystem, and
they do the job with complete satisfaction.
Improving the filesystem performance would surely be welcome, but since
the important bottlenecks are elsewhere, they're not very focused on it,
but rather on what is important. Compared to the network link, the
filesystem is instantaneous, so they are instead focusing on improving
network performance, with things like sendfile() and others.
> > I'm not saying that you shouldn't use your HashTable for directories.
> > I'm just saying that putting so much effort into something so easily
> > fixable later on (I'm assuming good modularity here)
> There are at least a couple of issues I couldn't have foreseen if I had just
> implemented HashTable as a linked list. It's not just as simple as "find this
> entry for me". HashTable also knows how to serialize itself and its members,
> and its DynHashTable alteration technique surely wouldn't have found its way
> into a linked list.
Very few things are not serializable. Sometimes the serialization code
can be awful, but it is doable most of the time. I assume the
serialization interface is generic (something akin to
WriteToStream(Stream& out) and InitFromStream(Stream& in), just a
guess), so there is nothing to plan there; you have no choice, as the
interface is probably already laid out. Serializing a linked list is
quite easy with most stream architectures (I do not know yours, so maybe
this is not the case with it).
The *algorithms* of the DynHashTable alteration technique wouldn't have
found their way into a linked list, but that's exactly the point: *not*
having to write them. A linked list is already "dynamic", so there would
be no need for an alteration mechanism at all.
> If I had used a looong time to sit down and plan how I was going to do this,
> I might have been able to foresee the problems and design my interface
> accordingly, even if I was only fooling around with a linked list. But then
> I suspect that this would have taken me longer than implementing the
> HashTable in the first place.
Hey, a "hashtable-like container" normally has three methods (get, set
and remove), and serialization normally has two (to stream and from
stream). Those are quite generic things that have been thought of over
and over. This gives you a nice "black box" interface that can be backed
by a sophisticated hash table algorithm or by a very dumb resizable-array
or linked-list algorithm without any effect on user code (apart from
performance, of course).
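As a sketch of what I mean (the DirTable name and the string-to-string mapping are made up for illustration; PenguinFile's real interface surely differs), the same three-method black box can hide a plain linked list today and a hash table tomorrow:

```cpp
#include <cassert>
#include <list>
#include <string>
#include <utility>

// Hypothetical "black box" directory container with the three generic
// methods. Backed here by the dumbest possible structure, a linked list
// of key/value pairs; swapping in a real hash table later changes
// nothing in user code, only the private section.
class DirTable {
    std::list<std::pair<std::string, std::string>> entries_;
public:
    // Insert or overwrite an entry.
    void Set(const std::string& key, const std::string& value) {
        for (auto& e : entries_)
            if (e.first == key) { e.second = value; return; }
        entries_.emplace_back(key, value);
    }
    // Look up an entry; nullptr if absent.
    const std::string* Get(const std::string& key) const {
        for (const auto& e : entries_)
            if (e.first == key) return &e.second;
        return nullptr;
    }
    // Drop an entry if present.
    void Remove(const std::string& key) {
        entries_.remove_if(
            [&](const std::pair<std::string, std::string>& e) {
                return e.first == key;
            });
    }
};
```

User code only ever sees Set/Get/Remove, which is exactly what lets you start dumb and optimize later.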
I estimate implementing this with a cheap linked list at around 15
minutes, and with a hashtable maybe 20-30 minutes (the crux of a
hashtable is in the hashing algorithm; I'd try a few different ones for
speed).
> > is *not* helping PenguinPlay as a project. It isn't the first time that
> > PenguinPlay "dies out", maybe this is a reason.
> No. The reason PenguinPlay died out was that there was a lot of talk, but no
> action. That is not the current situation: we have a lot of talk, but we
> *also* have action. That's how it IMHO works best.
We need a *lot* of action, not just a little. :-)
> In other words, it doesn't slow down other parts of PenguinPlay, and I
> actually think PenguinFile is well ahead.
Where is PenguinDraw, PenguinSound, PenguinEverythingElse?
> > and fix it. Heck, it could prove to be totally sufficient and I'd just
> > spare me the whole trouble!
> We knew that wouldn't be the case.
You'd KNOW only if you had built a game using PenguinFile, wielded a
profiler at it and looked at actual profiler output. Right now you
actually THINK it wouldn't be sufficient, which is a *lot* different
from KNOWING it.
Of *course*, benchmarking my simple system against PenguinFile would
have PenguinFile win, that is obvious, because such a benchmark would be
aimed toward measuring file system accesses.
> > Yes, but I gave you the perennial example of a system that "did it
> > properly" and went straight to the garbage dump.
> It obviously must have lacked something that it didn't do properly. Maybe it
> didn't do enough, maybe it was too hard to use, or maybe it just wasn't
> marketed as well, I have no clue (really), but it must have been inferior in
> some way.
The way in which they were inferior is that doing it "properly" was
complicated. Complicated in that they required big machines to run, or
specially-made machines (the Lisp Machines had garbage collection done
in hardware), and they were hard to port to other machines because of
this complexity.
The other choice was Unix, where you could port it and the C compiler
needed to build it to a new machine within a week. Guess which one
spread the most.
For PenguinPlay, the danger isn't in requiring powerful machines or
being a problem to port (because you're doing the ports yourself), but
it is in being overtaken by other projects.
I think PenguinPlay started before ClanLib and SDL (as far as I
remember), but however "properly done" PenguinPlay might be when it is
finished, it will be IRRELEVANT, as one (or both) of ClanLib and SDL
(or others) will have become widely available on Linux distributions and
Windows, as DirectX is now. Nobody will even be interested in having it
by then.
> > Unix is mostly fixable, *that* is what took it from the 70's right up to
> > now. The old filesystem was too slow, they improved on it without
> > changes to the applications, and this is about to happen again (with
> > ext3/reiserfs/xfs). Note that every spot where we have problems is
> > mostly because of compatibility/encapsulation issues, like going from
> > /etc/passwd to /etc/shadow to LDAP authentication for example.
> And that is my point. It is sometimes very hard to predict exactly what your
> interface must be able to do, before you need it to do that as an extension
> of your implementation.
So then, don't fret with it, leave your crystal ball aside and just do
what you can for now. We'll see later.
> > Good, but think about *how* you'll get there. Think about how to make a
> > "perfect computer virus", such as Unix and C were.
> Well, it's quite simple: PenguinFile offers a lot of benefits (which will
> improve with time), almost without any drawbacks. All you have to do is
> replace the standard fopen() with the PFile fopen(), fwrite() with the PFile
> fwrite() and so on. Very simple. The only real alteration is the mount/unmount
> process, but that can, in some cases, be handled by ONE line of code (ok, a
> little more perhaps, but not much).
> Something so simple gives you a whole range of benefits. If you are creating
> the right kind of game, PFile will be a given. In many other cases too. I
> think even programs that are not games, but use a lot of static data, could
> benefit from PFile.
No doubt about that. Why is Quake3 not using PenguinFile? Or Myth 2?
There is no such facility in SDL, they could have used PenguinFile, no?
I guess that if you looked at Quake3 profiling information, you'd see
those file handling functions about 8/10 of the way down the list of CPU
time usage, at 0.0% or 0.01%. What you'd see at the top would be much
more interesting to fix.
Heck, I think they're using regular ZIP files for their archives.
*That's* a good idea: archivers are readily available, it's already
portable, already debugged; just grab the InfoZIP distribution and use
it.
> > The idea isn't to get a perfect design, it is to get the best possible
> > design that will *succeed*. Such are engineering compromises.
> I doubt a lot of people would use PFile if it actually degraded their game's
> performance.
How can you possibly *degrade* the performance over
fopen/fread/fwrite/fclose? And a little overhead in a little-frequented
code path won't make anybody lose sleep.