Re: On state and sorting.
Adam D. Moss wrote:
> Steve Baker wrote:
>>"Adam D. Moss" wrote:
>>>even *redundant* changes
>>>are very expensive, like binding the same texture twice
>>>in a row or glEnable()ing GL_TEXTURE_2D when it's
>>>already enabled. Ugh.
>>Yes - it's frighteningly bad.
>>It's a hard call for an OpenGL implementor. If he
>>checks for redundant state changes then programs that
>>are careful to avoid them actually run more slowly.
> I understand and even support this approach in an API. I
> really wanted to expand upon something I read in an OpenGL FAQ
> (paraphrased: "your OpenGL implementation will probably
> optimize away redundant state changes, but you should try
> to avoid such changes anyway because they're never good") as
> a word of warning.
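The filtering discussed above can live on either side of the API boundary. A minimal sketch of the app-side version, assuming nothing about any real GL binding: `raw_bind_texture` below is a stand-in for the actual `glBindTexture` call, used purely for illustration.

```python
# Sketch of app-side state caching that skips redundant binds.
# `raw_bind_texture` is a placeholder standing in for the real
# glBindTexture call; it just records what reached the "driver".

calls_issued = []  # calls that actually got through the cache

def raw_bind_texture(tex_id):
    calls_issued.append(tex_id)

class StateCache:
    def __init__(self):
        self._bound_texture = None  # nothing bound yet

    def bind_texture(self, tex_id):
        # Forward the call only if it would actually change state.
        if tex_id != self._bound_texture:
            raw_bind_texture(tex_id)
            self._bound_texture = tex_id

cache = StateCache()
for tex in [7, 7, 3, 3, 3, 7]:
    cache.bind_texture(tex)

print(calls_issued)  # [7, 3, 7] -- the redundant binds were filtered
```

The tradeoff Steve describes falls straight out of this: if the implementation does this check too, a careful application pays for the comparison twice.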
> I've also been looking into the problem of context grouping
> (rather than sorting). It's really an interesting problem,
> and I'm amazed that I can't find any online literature
> documenting algorithms for grouping identical values while
> disregarding the relative positions of the groups. I can hash
> the values, which will coarsely group them (O(1) per
> insertion), but then I'd still need to sort the bucket
> chains. The chains will likely be short, so the 'n' will be
> low (which matters if I'm sorting them with a superlinear
> algorithm), but then I may as well just use an O(n) bucket
> sort to begin with. Either way I'm back to using an
> algorithm that does grouping as a side-effect of sorting,
> rather than just getting the grouping effect, which logically
> has to be cheaper.
> Bleh, I say.
> To re-state the problem: in context sorting (texture-ID
> sorting, or whatever), the numerical position of a group
> of equivalent contexts is unimportant, but their clumping
> together is.
> So, I'd like an algorithm that would know that this is
> a good and finished state:
>   B B A A C C
> whereas an actual sorting algorithm would spend extra
> effort re-arranging this already-good ordering to:
>   A A B B C C
> Well, I'm beginning to intuitively suspect that grouping
> of an arbitrarily-sized range /has/ to be gotten as
> a side-effect of sorting, or the algorithm won't know
> how to find the group to put the value in, if you see
> what I mean. Unless someone has some great ideas I'll
> stick with actual sorting. But it IS an interesting
> problem. Or maybe it isn't!
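The intuition above can be made concrete by counting what the renderer actually pays for: transitions between adjacent differing contexts. A small sketch, using made-up context labels, showing that a clumped-but-unsorted ordering incurs exactly as many state changes as the fully sorted one, so the sort's extra rearrangement buys nothing:

```python
def state_changes(seq):
    # Count transitions between adjacent differing contexts --
    # the quantity a grouped ordering is trying to minimize.
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

clumped = ["B", "B", "A", "A", "C", "C"]  # grouped, but not sorted
fully_sorted = sorted(clumped)            # ['A','A','B','B','C','C']

print(state_changes(clumped), state_changes(fully_sorted))  # 2 2
```

Both orderings cost two context switches; the sorted one is no cheaper, it is merely canonical.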
That looks a lot like hard drive defragmentation. Maybe a look at the
code of hard drive defragmenters would help.