# Re: [pygame] Native PyGame method for automatically scaling inputs to a surface resolution?

```
On Thu, 29 Sep 2011 18:18:48 -0400
Christopher Night <cosmologicon@xxxxxxxxx> wrote:

> If I have a 100x100 pixel window, and I want to put a dot at a
> position (x,x), it seems to me like the dot should appear in the
> window if 0 <= x < 100. You're saying it should appear in the window
> if -0.5 <= x < 99.5. So if I request to put a point at (-0.4, -0.4),
> it will appear in the window, but if I put one at (99.6, 99.6), it
> won't. I disagree that this is the correct behavior. Intuitively, the
> point (99.6, 99.6) should be within a 100x100 canvas.

I have a different take on this: if you have float rectangles, you are
effectively treating **your rectangle as part of your model, not part of
your representation** (see my previous mail). This means that a point,
which is a dimensionless entity, shouldn't be displayed regardless of
its coordinates, given that your screen's pixels have a dimension (and
are therefore infinitely larger than a point).

I realise that this is absolutely counter-intuitive (you would be
obliged to draw points as circles or rectangles that scale to 1 px, or
to internally convert calls to draw pixels into calls to draw
rectangles), but I think that is the only mathematically correct
solution to the ambiguity.

In fact:

pixel(-0.4, -0.4) = 1-px-rect((-0.9, -0.9), (+0.1, +0.1)) =
rect-through-scaling-routine((-1, -1), (0, 0)) = not lit.

and

pixel(99.6, 99.6) = 1-px-rect((99.1, 99.1), (100.1, 100.1)) =
rect-through-scaling-routine((99, 99), (100, 100)) = lit.

[This assumes that the scaling routine - as proposed in a previous mail
- uses rounding, not truncation.]

/mac

BTW: This is part of the reason why I think that rectangles should
remain int-based / part of the representation logic.

```
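The two worked examples in the mail can be checked with a short sketch. The function names here are illustrative, not pygame API, and the "scaling routine" is assumed to be the identity apart from the rounding step (model units == pixels):

```python
import math

def rnd(v):
    # Conventional round-half-up. Python's built-in round() uses
    # banker's rounding (half-to-even), which would give a point at
    # exactly (0, 0) a zero-area rect; floor(v + 0.5) avoids that.
    return math.floor(v + 0.5)

def point_to_rect(x, y):
    """A dimensionless point becomes the 1-px rect centered on it."""
    return (x - 0.5, y - 0.5), (x + 0.5, y + 0.5)

def scale_rect(rect):
    """The scaling routine: round the corners to integers (rounding,
    not truncation, as proposed in the mail)."""
    (x0, y0), (x1, y1) = rect
    return (rnd(x0), rnd(y0)), (rnd(x1), rnd(y1))

def is_lit(x, y, width=100, height=100):
    """True if drawing a 'pixel' at float (x, y) lights anything on a
    width x height screen: the rounded rect must cover at least one
    on-screen pixel (it covers columns x0..x1-1 and rows y0..y1-1)."""
    (x0, y0), (x1, y1) = scale_rect(point_to_rect(x, y))
    return (max(x0, 0) <= min(x1 - 1, width - 1) and
            max(y0, 0) <= min(y1 - 1, height - 1))

print(is_lit(-0.4, -0.4))  # False: rect (-1,-1)-(0,0) covers only the off-screen pixel (-1,-1)
print(is_lit(99.6, 99.6))  # True:  rect (99,99)-(100,100) covers pixel (99,99)
```

This reproduces the asymmetry the mail argues for: (-0.4, -0.4) is not lit while (99.6, 99.6) is, because each point is first widened into a 1-px rect before the rounding step ever runs.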
