
Re: Re: Memory allocators

>> Perhaps it would be a good idea to find out exactly who does what. I'm
>> finished with the ppDelPtr class (documenting it now).
>> You are going to do the hashtable and the URLs? Anything else that needs
>> doing?
>How much time do you have? Mine is *very* limited right now.
I don't know exactly how much time I have. I haven't got any plans for the
weekend. But of course, I don't think I would enjoy doing nothing but
program all weekend. I have a fair amount of time, I'd say.

>The URL stuff is implemented and the API functions are already using it.
API functions? Didn't you mean most of the API functions are already using
it?

>Now the internal code has to be modified to do the same (that's
>primarily the SearchXXX () methods in the new directory container (the
>hash) plus some methods in ppf_TGlobalData - oh, and the specialized API
>code (ppf_Open.cpp, ppf_OpenDir.cpp etc).
Technically, the method that finds an entry in the container is an accessor
method, and therefore should be prefixed with Get instead of Search. Of
course, Search does make it more clear that some kind of processing is going
on to come up with the result. I'd personally go for GetXXX().

>I wanted to do at least the hash myself, because AFAIK you don't have
>the same level of "insight" of the various dependencies with that yet
>(PakFile reading/writing (writing needs to be changed anyway, at least
>partly) and the overall dir management).
I get a hint that you have considered the possibility that I might get
offended by your saying that I don't have the same level of insight into the
various dependencies as you do (or I'm just way off and talking nonsense to
myself). I certainly would (consider the possibility, that is) if I said
something like that to someone. Not that it is an offensive remark, and it
is certainly true, but some people just are that way. I'm not that easy to
offend, though (just thought I'd clear that up). After having run other
projects (non-programming related), I know how one has to worry about these
kinds of things, as it's hard to really get to know people over the net.

Anyways, ideally the container itself should only know how to serialize
itself, and how to serialize the components it is made up of (or how to ask
them to serialize themselves). I.e., this should all ideally be done without
any dependencies of any kind (i.e., keep things on a need-to-know basis).

I suspect that the dependency that cannot be removed is where in the
pak-file other data is stored (like the actual files and directories the
directory is made up of). Completely off?

Anyways, my point is that I think the directory container, and everything
else that goes on the harddisk, should have a method that returns the number
of bytes its serialized data will take up on the harddisk. The result is
that the complete pak-file can be written to the harddisk in one consecutive
stream of data, without going back to write in position data that otherwise
could not be obtained before writing most of the data to disk.

The result is that all information is available when it is needed. The
directory container can shed all its dependencies, as the information it
needs can be passed in as an argument (an array of the positions of the
things it needs to know the position of), or perhaps in some other way (each
entry in the container figures out itself where everything goes, but I
suspect that's a little tricky and completely unnecessary).

I have taken it that the way objects are written to disk, except for very
simple objects that can simply be dumped wholesale, is by asking them to
"stream yourself to this destination". That's IMHO the ideologically best
solution, and the only solution that keeps everything on a need-to-know
basis and keeps changes in one place from cascading through the code.

What do you think of this?

>But if you plan to stay with
>PFile you'll have to gain that insight sooner or later anyway ;)
I certainly do plan to stay with PFile.

> And I'm
>really short on time. And the dependencies have to be cleaned up anyway
>(there's too much tree-specific stuff outside of the tree code) ;)
Well, the tree-specific code we can just delete, since there isn't any tree
anymore.

>If you want I can send you my current code plus a rundown of
>what-does-what and what-to-care-about when I'm back home.
"I want", or, to be it a little more civilized, "I'd appreciate that, thank
you" :)

>PFile's C API has to be usable both from C and C++ for example.

One possibility is wrapping the include itself:

namespace pp {
#include "CHeader.h"   // the C declarations end up in namespace pp
}

There's also the possibility of

#ifdef __cplusplus
// we can shorten these to something like PP_BEG and PP_END
#define PP_NAMESPACE_BEGIN namespace pp {
#define PP_NAMESPACE_END }
#else
#define PP_NAMESPACE_BEGIN
#define PP_NAMESPACE_END
#endif

PP_NAMESPACE_BEGIN
// the code...
PP_NAMESPACE_END

We don't waste any vertical screen space in either case, since we aren't
placing bare braces directly in the header file. I like the second solution
best.

>In any case we have the "pp" prefix whih makes the namespace for API
>stuff pretty unneccessary.
Well, actually, putting everything in a namespace removes the need for the
"pp" prefix, doing a lot for ease of use. So "pp" would just be replaced by
"pp::" in the cases where "using namespace pp" has not been applied.
We get it all: maximum usability in several senses and still no chance of
name clashes.
>The namespace for internals on the other hand is useful because it
>allows us to e.g. override operator new without interfering with some
>custom operator new defined by the user.
I think it'd be annoying and unnecessarily confusing to add "pp::" to some
things and not to others, solely based on what is internal and what is not.
If we change something from internal to external, or vice versa, the change
would cascade through the code.

Both of the solutions above can be applied incrementally without breaking
any code (it's as simple as "using namespace pp")...

This is the same solution used by the standard library: put C++ and C stuff
in a namespace when compiling C++, and put C stuff in the global namespace
(the only one there is in that case) when compiling C.