
Re: [pygame] Python and Speed



On Thu, Apr 17, 2008 at 12:39 PM, Brian Fisher <brian@xxxxxxxxxxxxxxxxxxx> wrote:
> I actually think the line of thinking I read in the comment above (that I shouldn't have to optimize things because the stupid language is slow) is in fact a counterproductive attitude in application development, and misses the real point.
Obviously it is the problem of the programmer--it is the programmer who programs his program, not Python.  Python just executes it.  But the fact is, a programmer's job is easier when he has unlimited power to work with.  Currently I find myself having to stretch Python's limits and, as you say, find optimizations.  Programming is fun, but rewriting code just to meet a reasonable performance benchmark is not.
> It would be nice of course if Python ran much, much faster, but it runs slowly largely because it is designed to give you the flexibility to code complex things much more easily.
I like this aspect of Python--its flexibility--but I object to the lack of speed.  I want it all: flexibility and speed.
> If you don't want that flexibility, then you need to "turn it off" with Pyrex and extensions and all that kind of stuff.
I actually can't think of any situation where I would really need to do that.   
> However, sometimes that flexibility actually lets you code more efficient approaches to begin with.
...because the code is clearer.  Better-looking code usually runs faster, because clear code lets you spot any performance-sucking bugs...
> Ultimately all slowness is the programmer's problem, not the tool's.
Of course.  The programmer is the one who makes the program.  The users complain to the programmer, not to Python--and, uh, they do.
> If a particular tool is the best one to help you solve the problem, then it should be used. With Python, coolness is always on, so it's cheap to use coolness. C++ was designed to make you not pay for anything you don't use, which means coolness is off by default, which means it's really hard to use coolness.

> ...to get to brass tacks though, I've found that the majority of the real slowness in _anything_ I code is due to the approach I take, and much less so to the code itself. For example, pairwise collision between a set of objects. If every object needs to be checked against every other object, that's an n-squared problem: with 1,000 items, that's 1,000,000 collision checks. But let's say I do object partitioning, or bucketing, or something where I maintain sortings of the objects that let me check each item only against the ones close to it, and I either get log(n) partitioning or at most about 10 items per bucket (both very achievable goals). Now I only do about 10,000 (10*1000) collision checks for the same real work being done.
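The bucketing Brian describes can be sketched as a uniform-grid spatial hash. This is a hypothetical sketch, not code from the thread; the cell size, radius, and all names here are illustrative:

```python
# Sketch of the bucketing idea: a uniform-grid spatial hash.
# All names and numbers are illustrative, not from the original post.
import math

CELL = 32  # bucket (grid cell) size; should be >= the collision radius

def bucket_of(pos):
    """Map an (x, y) position to its grid cell."""
    x, y = pos
    return (math.floor(x / CELL), math.floor(y / CELL))

def build_buckets(objects):
    """objects: iterable of (id, (x, y)) pairs -> dict mapping cell to members."""
    buckets = {}
    for obj_id, pos in objects:
        buckets.setdefault(bucket_of(pos), []).append((obj_id, pos))
    return buckets

def near_pairs(objects, radius=10.0):
    """Yield id pairs closer than `radius`, checking only each cell and its
    8 neighbours instead of all n*n combinations (assumes radius <= CELL)."""
    buckets = build_buckets(objects)
    seen = set()
    for (cx, cy), members in buckets.items():
        # gather candidates from this cell plus its 8 neighbours
        candidates = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                candidates.extend(buckets.get((cx + dx, cy + dy), ()))
        for a_id, (ax, ay) in members:
            for b_id, (bx, by) in candidates:
                if a_id < b_id and (ax - bx) ** 2 + (ay - by) ** 2 < radius ** 2:
                    if (a_id, b_id) not in seen:
                        seen.add((a_id, b_id))
                        yield (a_id, b_id)
```

With a handful of items per cell, each object is compared against only a small constant number of candidates regardless of n, which is where the roughly 10*n figure above comes from.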
This is work that must be done.  To do this in my case would be somewhat complicated: I would need inter-partition testing, boundary testing on the partitions and on each point, and various other modifications.  Of course the code could be made faster, but this is something I would have to do just to get this program running at a good speed.  Why not make Python itself faster, making such annoying modifications unnecessary and speeding up all Python in the process?
> So let's say that my Python collision code takes 100 times as long as my C++ collision code--that means if I do the optimization in Python, I can get the Python code to go just as fast as the C code without the optimization. Not only that--let's say I decide I want to step up to 10,000 items with pairwise collision. Now it's 100,000,000 checks versus maybe 100,000 depending on the approach--now Python can actually be 10 times faster.
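Brian's back-of-envelope numbers can be checked directly. The 100x per-check slowdown is his hypothetical figure from the paragraph above, not a measurement:

```python
# Hypothetical cost model from the paragraph above: assume one collision
# check costs 100x as much in Python as in C++ (illustrative, not measured).
PY_COST = 100
C_COST = 1

def naive_checks(n):
    return n * n           # every object against every object: O(n^2)

def bucketed_checks(n, per_bucket=10):
    return n * per_bucket  # ~10 candidates per object: O(n)

# 1,000 items: bucketed Python ties naive C++
print(naive_checks(1000) * C_COST)        # -> 1000000
print(bucketed_checks(1000) * PY_COST)    # -> 1000000

# 10,000 items: bucketed Python beats naive C++ by 10x
print(naive_checks(10_000) * C_COST)      # -> 100000000
print(bucketed_checks(10_000) * PY_COST)  # -> 10000000
```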
That's an optimization which takes time and effort to implement.  A C programmer very often has no need for such optimizations, though he works with code I find horrid by comparison.
> So now the issue becomes: what's the cost of writing the more efficient approach in Python versus writing the naive approach in C++? If you think you get enough programmer benefits from working in Python to make those two costs equal, and the performance of either is good enough, Python is the better choice. Not only that--once you've got good approaches written in Python that are stable and you no longer need the coolness/flexibility, it becomes much easier to port the stuff to C, or Pyrex it, or whatever makes it much, much faster.
The whole point of using Python, for me, is that it is far more flexible and programmer-friendly than anything else I've seen.  I don't want to have to choose between Python and C on a matter of speed--Python should be the clear choice.  They should be equal in speed; Python is easier to use.  Obvious choice?  Python.
> Sounds like your approach is O(N^2). If most points aren't close enough to collide, partitioning so that you don't even have to do the check will do wonders.
Again, this problem is merely an example of the speed issues I see with Python.  I've already looked into optimizations like that, but I really don't want to take the time to sit down and code them.  If Python's speed were improved, such modifications would in general be unnecessary.  It's not just me or this one particular problem: I have issues with building 3D objects and particle systems that need boosts.  As someone said earlier, everyone wants Python to be faster.  I see the difficulty in speeding it up, but it seems like it can't be that incredibly hard, and I suspect there are already modifications that could be made to Python that simply haven't been.
Ian