
Re: [pygame] Python and Speed



On Wed, Apr 16, 2008 at 7:30 PM, Ian Mallett <geometrian@xxxxxxxxx> wrote:
Thus it falls to you as a developer to choose your implementation strategy wisely:
But again, this is treating the symptoms, not the problem...

I actually think the line of thinking I read in the comment above (that I shouldn't have to optimize things, because the stupid language is slow) is in fact counterproductive in application development, and misses the real point.

It would be nice of course if python ran much, much faster, but it runs slow largely because it is designed to give you the flexibility to code complex things much more easily. If you don't want that flexibility, then you need to "turn it off" with pyrex and extensions and all that kind of stuff. However, sometimes that flexibility actually lets you code more efficient approaches to begin with. Ultimately all slowness is the programmer's problem, not the tool's. If a particular tool is the best one to help you solve the problem, then it should be used. With python, coolness is always on, so it's cheap to use coolness. C++ was designed so that you don't pay for anything you don't use, which means coolness is off by default, which means it's really hard to use coolness.

...to get to brass tacks though, I've found that the majority of the real slowness in _anything_ I code is due to the approach I take, and much less so due to the code itself. For example, pairwise collision between a set of objects. If every object needs to be checked against every other object, that's an n-squared problem. Get 1,000 items, and that's 1,000,000 collision checks. But let's say I do spatial partitioning, or bucketing, or something where I maintain sortings of the objects in a way that lets me check items only against ones that are close to them, and I either get log(n) partitioning or maybe at most about 10 items per bucket (both very achievable goals). Now I only do about 10,000 (10*1000) collision checks for the same real work being done.
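To make the bucketing idea concrete, here's a minimal sketch of one way it could look (the function names and the uniform-grid scheme are my own illustration, not something from the original post): hash each point into a coarse grid cell, then only generate candidate pairs from each cell and its immediate neighbors.

```python
from collections import defaultdict

def build_buckets(positions, cell_size):
    """Map each point index into a grid cell keyed by integer coords."""
    buckets = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        buckets[(int(x // cell_size), int(y // cell_size))].append(i)
    return buckets

def candidate_pairs(positions, cell_size):
    """Yield index pairs close enough to possibly collide.

    Only compares points sharing a cell or sitting in one of the
    8 neighboring cells, instead of all n*(n-1)/2 pairs.
    """
    buckets = build_buckets(positions, cell_size)
    seen = set()
    for (cx, cy), members in buckets.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in buckets.get((cx + dx, cy + dy), ()):
                        if i < j and (i, j) not in seen:
                            seen.add((i, j))
                            yield i, j
```

With 1,000 points spread out so each cell holds roughly 10 of them, this yields on the order of 10,000 candidate pairs rather than ~500,000, which is where the savings in the paragraph above come from.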

So let's say that my python collision code takes 100 times as long as my C++ collision code - that means if I do the optimization in python, I can get the python code to go just as fast as the C code without the optimization. Not only that - let's say I decide I want to step things up to 10,000 items with pairwise collision - now it's 100,000,000 checks versus roughly 100,000 with the better approach - now python can actually be 10 times faster.
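A quick back-of-envelope check of that arithmetic (the 100x per-check slowdown is the hypothetical figure from the paragraph above, not a measurement):

```python
n = 10_000
per_check_python = 100   # assumed relative cost of one check in python
per_check_c = 1          # assumed relative cost of one check in C++

# Naive pairwise C++: every item against every item.
naive_c_cost = n * n * per_check_c              # 100,000,000 units

# Bucketed python: ~10 candidates per item.
bucketed_python_cost = (n * 10) * per_check_python  # 10,000,000 units

print(naive_c_cost / bucketed_python_cost)  # -> 10.0
```

Even paying a 100x penalty per check, the partitioned python version comes out roughly 10x ahead at this scale, because the approach cut the number of checks by a factor of 1,000.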

So now the issue becomes: what's the cost of writing the more efficient approach in python versus writing the naive approach in C++? If you think you get enough programmer benefits from working in python to make those two costs equal, and the performance of either is good enough, python is the better choice. Not only that, once you've got good approaches written in python that are stable and you no longer need the coolness/flexibility, it becomes much easier to port the stuff to C, or pyrex it, or whatever makes it much, much faster.


On Thu, Apr 17, 2008 at 11:59 AM, Ian Mallett <geometrian@xxxxxxxxx> wrote:
[casey talked about complexity]
This is precisely the problem I have run into in one of my in-dev games--iterating over large arrays once per frame. Actually, it is basically a collision detection algorithm--I have two arrays, both containing 3D points. The points in one array must be tested against the points in the other to see how close they are. If they are close enough, there is a collision. Naturally, this means that for every point in one array, the other array must be iterated through and the 3D pythagorean theorem applied to each tested point.

Sounds like your approach is O(N^2). If most points aren't close enough to do the collision, partitioning to make it so you don't even have to do the check will do wonders.
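For the two-array 3D case specifically, a hedged sketch of what that partitioning might look like (cell size, names, and the grid layout are my own assumptions): hash one array into a grid whose cell size equals the collision radius, so each point in the other array only has to inspect 27 neighboring cells instead of the whole array.

```python
from collections import defaultdict

def find_collisions(points_a, points_b, radius):
    """Return (i, j) pairs where points_a[i] is within radius of points_b[j]."""
    cell = radius  # cell size == radius => a hit is at most 1 cell away per axis
    grid = defaultdict(list)
    for j, (x, y, z) in enumerate(points_b):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(j)

    hits = []
    r2 = radius * radius
    for i, (x, y, z) in enumerate(points_a):
        cx, cy, cz = int(x // cell), int(y // cell), int(z // cell)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                        bx, by, bz = points_b[j]
                        # Compare squared distances; no sqrt needed per check.
                        if (x - bx) ** 2 + (y - by) ** 2 + (z - bz) ** 2 <= r2:
                            hits.append((i, j))
    return hits
```

Two things fall out of this for free: the inner loop only ever touches points that could plausibly collide, and comparing squared distances avoids calling sqrt for every pair, which is itself a noticeable per-check saving in python.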