
Re: Serialization et al

Bjarke Hammersholt Roune wrote:

>> Well, no. A core assumption behind that serialization scheme was "the data
>> structures should be able to serialize themselves" - and that's a good
>> thing.
>Well, yes, when it makes sense. It makes sense for HashTable to know how to
>serialize itself in a format that is ideally suited to itself, without
>caring too much about anything else.
>It doesn't make so much sense for it to know how to serialize itself so that
>it meets the requirements of a multitude of different formats. That really
>has nothing to do with any of its main responsibilities.

Hmmm, I hadn't considered that. You're right. That would mean sacrificing a
little consistency (for Paks, HashTable serializes itself, while for other
formats the serialization is done by some external entity (Directory
derivatives)) in exchange for simpler code. Sounds good.

>I do, however, think that what should count the most is how easy the format
>is to handle programmatically, not manually. It is, after all, a binary

Well, yes. Actually, it should be a compromise between the two ;)
A format that's very simple to write is fine, but quite problematic if you
can't tell what it actually looks like. Imagine some image manipulation
software saying: "Well, the image files are written by this code, and that one
can read them again. At least for the cases we tested. You want to write a
viewer for them? Uh, you could reverse-engineer it..." ;)

>If you don't have too deep directory structures, I don't think it's *that*
>bad compared to the old scheme. You'd probably know that better than I,
>though... ;)

Well, if there's only one directory level (only the Pak's root dir), both
formats are almost exactly the same. Once more dirs are added, my format
makes it (much) easier to see which directory you're actually looking at,
but it adds some complexity because some links/cross-references have to be
maintained. That's a pretty minor issue, however.

>> It reminds me of purely functional LISP code - and that makes my mind jump
>> in loops when I try to read and understand it.
>Well, figuring out exactly how to best do this made my mind jump in loops
>too :)
>It's really hard to imagine how something like this will look in its
>serialized form, because we have two distinct datatypes where one gets
>expanded recursively and contains both more of its own kind, as well as some
>of the other kind. At least, it was kind of hard for me.

Wait, <searching> Ok, here is an example of the LISP code I was thinking of:

(defun fak1 (n)
   (fak_iter 1 1 n))

(defun fak_iter (product counter n)
   (if (> counter n)
       product
       (fak_iter (* counter product) (1+ counter) n)))

It calculates the factorial of n.
Our Prof presented this as an example of an "iterative" process (meaning that
the recursion is a tail call, so a compiler can optimize it into an iterative
loop) ;)
The recursive version is:

(defun fak (n)
   (if (= n 1)
       1
       (* n (fak (1- n)))))

Just as a little anecdote...
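For comparison, here is a sketch of the same tail-recursive shape in C++
(the function names just mirror the Lisp above; whether the compiler actually
performs the optimization depends on compiler and flags):

```cpp
// Tail-recursive factorial, mirroring the Lisp fak_iter above.
// The recursive call is the very last thing the function does, so an
// optimizing compiler can reuse the current stack frame (i.e. emit a
// jump) instead of pushing a new frame for every call.
unsigned long fak_iter(unsigned long product, unsigned long counter,
                       unsigned long n) {
    if (counter > n)
        return product;
    return fak_iter(counter * product, counter + 1, n); // tail call
}

unsigned long fak(unsigned long n) { return fak_iter(1, 1, n); }
```

The plain recursive version (`n * fak(n - 1)`) is not a tail call, because
the multiplication still has to happen after the recursive call returns.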

>It is my understanding that when a function calls itself recursively, it
>gets its code loaded to the instruction-stack (what's the proper name?)
>twice. Have I misunderstood this? Doesn't doing it this way save a lot of
>pushing/popping on the stack?

Calling a function pushes the instruction pointer onto the stack (usually
32 bits) and saves some of the processor registers. Then space on the stack
is allocated for the function's local variables (by simply adjusting the
stack pointer). No code is pushed - the function's code exists only once in
memory, no matter how often it calls itself.
That amounts to about, say, 6-10 pushes on an x86, plus an add (or sub)
operation on the stack pointer.

>Anyway, I think this works quite well (even if those 10 lines might look a
>bit arcane the first few seconds), so instead of polishing off code that
>already works, I'd recommend getting on with more important matters.

Agreed. It was only a side note anyway ;)


Drive A: not responding...Formatting C: instead