
[pygame] Re: I'm getting BSOD while using PyGame.



Thanks for all the replies. This is a somewhat long response, trying to
debug everything. ( By accident the email replied only to Claudio, so I
don't think it made it to the list. If it's a duplicate, sorry. )

On Nov 15, 2007 2:07 AM, Claudio <clau1san@xxxxxxxxxxxx> wrote:
> To me the map draw code is ok, but I spotted two
> problems:
>
> 1. The display surface is created without explicit
> flags, but later the code to update the display is:
> self.screen.flip()
> which assumes you have a double-buffered surface, which
> itself assumes HWSURFACE

The documentation sounded like, if the surface was not created with the
HWSURFACE | DOUBLEBUF flags, .flip() falls back to a regular .update(),
but I'm not sure that's what it's saying.

Either way, I tested with explicit flags set: (1) SWSURFACE +
.update() , and (2) HWSURFACE | DOUBLEBUF + .flip()

I tested (1) and (2) for a crash with .render(), .render_test(), and
.render_test2().

.render_test() and .render_test2() always work in modes (1) and (2).

.render() always crashes in modes (1) and (2).

Looking at the code, I don't see anything obvious that would BSOD.
.render_test2() uses the same src_rect that .render() uses. .render()
appears to work ( the blits look visually correct ) for about 10 seconds,
then it BSODs. Which seems strange, since it doesn't BSOD on the
first frame, and every frame should be looping with the same values
as the previous one.

# snippet of file: Map.py
    def render_test(self):
        """test: works. no loop, since .render() is crashing"""
        dest_rect = pygame.Rect(0,0,0,0) # blit ignores the w,h of a dest rect
        self.screen.blit( self.tileset, dest_rect )

    def render_test2(self):
        """test: works. adds the src rect"""
        dest_rect = pygame.Rect(0,0,0,0) # blit ignores the w,h of a dest rect

        # just to make sure: using the exact src rect that .render() uses
        src_rect = pygame.Rect( 1*self.tile_w, 0,
                    self.tile_w, self.tile_h )

        self.screen.blit( self.tileset, dest_rect, src_rect )

    def render(self):
        """crashes (BSOD) after about 10 seconds.
        renders map to screen"""
        for y in range( 1, self.tiles_y - 1 ):
            for x in range( 1, self.tiles_x - 1 ):

                dest_rect = pygame.Rect( x*self.tile_w, y*self.tile_h,
                    self.tile_w, self.tile_h )

                src_rect = pygame.Rect( 1*self.tile_w, 0,
                    self.tile_w, self.tile_h )

                self.screen.blit( self.tileset, dest_rect, src_rect )

>
> You can try:
>   a. be explicit
>         self.screen = pygame.display.set_mode((
> self.width, self.height ),HWSURFACE |DOUBLEBUF)
>      but be warned that the flags are 'suggestions' to
> pygame. It can return a SWSURFACE.
>
>   b. go safer, at least for testing purposes:
>         self.screen = pygame.display.set_mode((
> self.width, self.height ),SWSURFACE)
>      and update with
>         self.screen.update()
>      not with flip

Used this code for the above and below tests.

>
> 2. the other problem can be the capping
>    Let's see:
>
>    In fps.py
>    self.max_fps = 60
>    ...
>       return self.clock.tick( self.max_fps )
>
>    so you tell pygame : 'try to call me 60 times in a
> second'
>
>    In zombiehunt.py
>         self.bLimitCPU = True
>         self.limitCPU_time = 15
>    ...
>             if self.bLimitCPU: pygame.time.wait(
> self.limitCPU_time )
>
>    so in each frame pygame is told to give 15ms to
> other apps, that is
>    60*15 = 900ms for each second.
>    Then the game must run using 1000-900=100ms of each
> second. A 10% CPU load as seen by task manager. This
> is way too low.

Way too low for actual gameplay? Yeah, I agree. The delay was from when
I was programming on my laptop, so I could save battery power.

> To further murk the waters, when pygame is required
> to maintain a relatively high framerate, internally it
> goes into a do-nothing loop that gives near 100% cpu
> usage. The threshold fps was something near 40 fps.
>
> So, certainly limitCPU_time=15 is way too high. Max
> 5 seems to be standard, maybe lower can work.

I see from some performance testing that limitCPU_time = 15
definitely lowers CPU usage, but can reduce FPS below fps_max. I tried
5, and at least on this CPU, the FPS does not go below my fps_max
anymore. I'll stick with 5 for now and mess with it more later.

The reason I added limitCPU in the first place was so I could also
test/use this on my laptop without draining the battery as fast.
( Something that was useful to me in previous C++ SDL programs. ) But
during these tests I found I could retain max FPS and still lower CPU
usage, using .wait().

Is there a better way to say max_fps = 60, but also call .wait() when
you don't need all that CPU to retain the max_fps? ( Would you have to
predict and dynamically adjust the .wait() values? Is there a better way? )
It's not important, just curious.
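One way to answer my own question, as a sketch: instead of a fixed
.wait() value, measure how long the frame's work took and sleep only
the leftover of the per-frame budget. ( This is just the "predict and
dynamically adjust" idea in plain stdlib Python; the FrameLimiter class
and names are mine, not a pygame API. )

```python
import time

class FrameLimiter:
    """Cap the frame rate by sleeping away each frame's unused budget."""

    def __init__(self, max_fps=60):
        self.frame_budget = 1.0 / max_fps  # seconds allotted per frame
        self.last = time.perf_counter()

    def tick(self):
        """Call once per frame, after the frame's work is done."""
        work_time = time.perf_counter() - self.last
        leftover = self.frame_budget - work_time
        if leftover > 0:
            time.sleep(leftover)  # yields the CPU, like pygame.time.wait()
        self.last = time.perf_counter()
        return work_time
```

If a frame's work already used the whole budget, no sleep happens, so
the cap never costs FPS; a light frame sleeps most of its budget and
the CPU drops accordingly.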

> Maybe you need to lower the fps cap to avoid the
> internal 100% CPU usage.

I thought that if I used clock.tick( 60 ), and it's already drawing
60fps, it would reduce CPU usage, like calling .wait(). From my tests,
it doesn't seem to.

Note: In the following tests I made sure nothing was running in the
background to skew the results, and took the average after about
10 seconds. Also, this is a dual-core machine, so 50% total actually
means 100% on one core.

In my tests I got:

fps: true; max_fps = 60
cpu: false; limit=5
        fps ~= 62
        cpu ~= 47%

Then I toggled limitCPU on: it saves CPU and retains FPS. I didn't expect that.
fps: true; max_fps = 60
cpu: true; limit=5
        fps ~= 62
        cpu ~= 27%

( I know that it could lose FPS on different hardware. The point is
that using max_fps alone isn't saving CPU when the CPU isn't needed,
which I thought it would. )

Then I found a weird result. Setting a max_fps, versus no cap,
increases CPU usage by quite a bit. ( While limitCPU is kept off
for both tests. ) Because with both off:
fps: false; max_fps = 60
cpu: false; limit=5
        fps ~= 220
        cpu ~= 30%

( Compare that to the above results of: )
fps: true;
cpu: false;
        fps ~= 62
        cpu ~= 47%

So I get increased CPU usage, for a significantly lower FPS, when max_fps is
set. That doesn't seem right.

So .tick() doesn't seem to be saving CPU time ( i.e., acting like
.wait() ); it just prevents extra .update()s / .flip()s / renders?
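My guess ( an assumption; I haven't read the pygame source ) is that
.tick() enforces the cap by spinning in a busy loop rather than
sleeping, which would match the numbers above: the frame rate drops
but the core stays pegged. The two capping strategies, sketched in
plain stdlib Python ( the function names are mine ):

```python
import time

def cap_with_sleep(target_dt, frame_start):
    """Sleep the leftover budget: the OS can schedule other apps."""
    leftover = target_dt - (time.perf_counter() - frame_start)
    if leftover > 0:
        time.sleep(leftover)

def cap_with_busy_loop(target_dt, frame_start):
    """Spin until the budget is used up: precise timing, but burns a core."""
    while time.perf_counter() - frame_start < target_dt:
        pass
```

Both hold the frame rate to 1/target_dt; only the first one shows up
as low CPU in task manager.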

> Note that when in the revised code you blit the
> tileset the workload is lower.

I'm not sure which revised code you're talking about. Enabling
HWSURFACE | DOUBLEBUF, then .flip(), and reducing / eliminating
limitCPU_time? Or something else?

thanks,
-- 
Jake