
Re: [pygame] Native PyGame method for automatically scaling inputs to a surface resolution?



On Thu, Sep 29, 2011 at 5:50 PM, Greg Ewing <greg.ewing@xxxxxxxxxxxxxxxx> wrote:
> Christopher Night wrote:
>
>> I actually think the correct behavior is to truncate. This makes sense if you consider the pixel "at" (0,0) to actually occupy a 1x1 rectangle, extending from (0,0) to (1,1). So the point (0.7, 0.7) should actually be considered to be within the pixel at (0,0).

> If your intention is to draw a 1x1 rectangle at some location
> on the screen, the correct approach would be to calculate the
> transformed coordinates of all four sides of the rectangle,
> round them to ints, and then fill the resulting rect.
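
(For concreteness, that approach might look something like the following minimal sketch; the world_to_screen helper and the uniform scale factor are my own assumptions for illustration.)

def world_to_screen(x, y, scale):
    # Assumed transform: a simple uniform scale from world
    # coordinates to screen coordinates.
    return x * scale, y * scale

def fill_unit_rect(surface, color, x, y, scale=1.0):
    # Transform both corners of the world-space unit square,
    # round each edge to an int, and fill the resulting rect.
    x0, y0 = world_to_screen(x, y, scale)
    x1, y1 = world_to_screen(x + 1, y + 1, scale)
    left, top = int(round(x0)), int(round(y0))
    right, bottom = int(round(x1)), int(round(y1))
    surface.fill(color, (left, top, right - left, bottom - top))

Rounding the transformed edges like this means adjacent unit rects tile the screen with no gaps or double-covered pixels at any scale factor, which is presumably part of the appeal.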

Okay, thanks for the response. I understand that you're saying that's correct, but I don't understand why. If I have a 100x100 pixel window, and I want to put a dot at a position (x,x), it seems to me like the dot should appear in the window if 0 <= x < 100. You're saying it should appear in the window if -0.5 <= x < 99.5. So if I request to put a point at (-0.4, -0.4), it will appear in the window, but if I put one at (99.6, 99.6), it won't. I disagree that this is the correct behavior. Intuitively, the point (99.6, 99.6) should be within a 100x100 canvas.
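
To make the difference concrete, here is a minimal sketch (plain Python rather than any pygame API; the helper names are mine) of where a few points land under each rule on a 100x100 canvas:

import math

SIZE = 100

def visible_floor(x):
    # Truncating/flooring: pixel (0,0) covers [0, 1), so a point
    # is on the canvas exactly when 0 <= x < 100.
    return 0 <= math.floor(x) < SIZE

def visible_round(x):
    # Rounding: pixel (0,0) covers roughly [-0.5, 0.5), so a point
    # is on the canvas roughly when -0.5 <= x < 99.5.
    return 0 <= round(x) < SIZE

for x in (-0.4, 0.0, 0.7, 99.4, 99.6):
    print(x, visible_floor(x), visible_round(x))
# -0.4: flooring says off-canvas, rounding says on-canvas
# 99.6: flooring says on-canvas, rounding says off-canvas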

I realize that it's a matter of preference, and either way would be logically consistent, so it just comes down to which is more intuitive and comfortable.
 
> In my experience, rounding is almost always the right thing
> to do, and if it seems not to be, then you're not thinking
> about the problem the right way.

Well, that's certainly not true. Rounding is often the correct way to get from a float to an int, but truncation is correct at times too. I can provide examples if you want. But even so, I think this decision should be based on what's the right answer for this particular problem, not what's the right answer in a general sense.
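
One such example, as a quick sketch: snapping a continuous position to a grid cell, where flooring (not rounding) gives the cell the point is actually inside. The 16-pixel tile size is just for illustration.

import math

TILE = 16

def tile_index(x):
    # A position anywhere in [32.0, 48.0) is inside tile 2.
    # round(47.9 / 16) would give 3, the wrong tile.
    # Note math.floor rather than int(): int() truncates toward
    # zero, so it would also get negative positions wrong.
    return math.floor(x / TILE)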

-Christopher