
Re: [pygame] opengl from python - drawing wireframe shape colours and contours at the same time



You can draw wireframes instead of fills by setting:

glPolygonMode(GL_FRONT_AND_BACK, GL_LINE)

Then draw your geometry as desired.
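Here's a minimal sketch of that in pygame + PyOpenGL (the spinning quad is stand-in geometry, not from the original post):

import pygame
from pygame.locals import DOUBLEBUF, OPENGL, QUIT
from OpenGL.GL import (
    glPolygonMode, glMatrixMode, glTranslatef, glRotatef,
    glClear, glBegin, glEnd, glVertex3f,
    GL_FRONT_AND_BACK, GL_LINE, GL_PROJECTION, GL_MODELVIEW,
    GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_QUADS,
)
from OpenGL.GLU import gluPerspective

pygame.init()
pygame.display.set_mode((640, 480), DOUBLEBUF | OPENGL)
glMatrixMode(GL_PROJECTION)
gluPerspective(45.0, 640.0 / 480.0, 0.1, 50.0)
glMatrixMode(GL_MODELVIEW)
glTranslatef(0.0, 0.0, -5.0)

# The one line that matters: polygons come out as outlines, not fills.
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE)

running = True
while running:
    for event in pygame.event.get():
        if event.type == QUIT:
            running = False
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glRotatef(1.0, 0.0, 1.0, 0.0)
    glBegin(GL_QUADS)              # stand-in geometry: a single quad
    glVertex3f(-1.0, -1.0, 0.0)
    glVertex3f(1.0, -1.0, 0.0)
    glVertex3f(1.0, 1.0, 0.0)
    glVertex3f(-1.0, 1.0, 0.0)
    glEnd()
    pygame.display.flip()
    pygame.time.wait(16)
pygame.quit()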

Note this has the same limitations and drawbacks as drawing GL_LINES primitives directly, but for simulating old-school vector graphics it works pretty well. You can control whether or not a particular edge is drawn using edge flags (glEdgeFlag), so interior edges that exist just for tessellation don't clutter up the rendering.
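For instance, a sketch of hiding the shared diagonal of a quad built from two triangles (illustrative geometry, immediate mode):

from OpenGL.GL import (
    glBegin, glEnd, glEdgeFlag, glVertex3f,
    GL_TRIANGLES, GL_TRUE, GL_FALSE,
)

# In GL_LINE polygon mode, an edge is only drawn if the edge flag was
# GL_TRUE at the vertex that starts it.
glBegin(GL_TRIANGLES)
# First triangle of the quad.
glEdgeFlag(GL_TRUE)
glVertex3f(-1.0, -1.0, 0.0)   # starts the bottom edge: drawn
glVertex3f(1.0, -1.0, 0.0)    # starts the right edge: drawn
glEdgeFlag(GL_FALSE)
glVertex3f(1.0, 1.0, 0.0)     # starts the interior diagonal: hidden
# Second triangle; the flag is still GL_FALSE here.
glVertex3f(-1.0, -1.0, 0.0)   # starts the diagonal again: hidden
glEdgeFlag(GL_TRUE)
glVertex3f(1.0, 1.0, 0.0)     # starts the top edge: drawn
glVertex3f(-1.0, 1.0, 0.0)    # starts the left edge: drawn
glEnd()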

What typically looks good is to do a first pass at low alpha with a wide glLineWidth, then render again with a higher alpha and a lower width. Finally, draw with the line width set to 1 (or whatever) and full alpha. That gives a decent glowing vector-line effect. Also be sure to enable multisampling to combat aliasing.
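A sketch of those passes (the widths, alphas, and colour here are made-up values; draw_geometry stands in for whatever draws your mesh):

from OpenGL.GL import (
    glEnable, glBlendFunc, glLineWidth, glColor4f,
    GL_BLEND, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_MULTISAMPLE,
)

# Request a multisampled framebuffer *before* pygame.display.set_mode:
# pygame.display.gl_set_attribute(pygame.GL_MULTISAMPLEBUFFERS, 1)
# pygame.display.gl_set_attribute(pygame.GL_MULTISAMPLESAMPLES, 4)

glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glEnable(GL_MULTISAMPLE)

def draw_glowing(draw_geometry):
    # Same geometry three times: wide and faint, medium, thin and solid.
    for width, alpha in ((7.0, 0.15), (3.0, 0.4), (1.0, 1.0)):
        glLineWidth(width)
        glColor4f(0.4, 1.0, 0.4, alpha)  # arbitrary greenish line colour
        draw_geometry()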

I attached a screenshot of a model rendered using this technique. The code that produced it can be found here:

http://code.google.com/p/caseman/source/browse/trunk/eos/tools/show_obj.py

[Attachment: PNG screenshot of the wireframe-rendered model]

The libraries it uses to load the model and draw it are there as well.

hth,

-Casey


On Jul 27, 2009, at 2:06 PM, Paulo Silva wrote:

no, my idea is doing simple poly games, a bit like those 3D games from the Amiga era (such as Robocop 3 and many others) - very flat-coloured - but I really wanted to have the meshes wireframed - thanks for all the useful info! :)

On 7/27/09, Ian Mallett <geometrian@xxxxxxxxx> wrote:
Yes, unfortunately. That may be a problem, especially if the object is high-poly. There are more complex methods that use shaders. One of the better ones I've found is:

1. Render the scene to two textures, one RGB and the other depth.
2. Apply an edge-detection filter to the depth texture (no edge = white, edge = black) - sketched below.
3. Multiply the two textures together on a fullscreen quad.
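A sketch of step 2's filter, written as a GLSL fragment shader embedded in a Python string (the uniform names and the edge threshold are illustrative; compile it with OpenGL.GL.shaders.compileShader/compileProgram and run it over a fullscreen quad):

# Darkens pixels where the depth buffer changes abruptly between
# neighbouring texels; everywhere else stays white.
DEPTH_EDGE_FRAG = """
uniform sampler2D depth_tex;
uniform vec2 texel;  // 1.0 / texture resolution
void main() {
    float c  = texture2D(depth_tex, gl_TexCoord[0].st).r;
    float dx = texture2D(depth_tex, gl_TexCoord[0].st + vec2(texel.x, 0.0)).r - c;
    float dy = texture2D(depth_tex, gl_TexCoord[0].st + vec2(0.0, texel.y)).r - c;
    float edge = step(0.002, abs(dx) + abs(dy));  // threshold is a guess
    gl_FragColor = vec4(vec3(1.0 - edge), 1.0);   // edge = black, no edge = white
}
"""

Multiplying this output with the RGB texture (step 3) then darkens only the silhouette pixels.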

The problem here is you'd need shaders if you want to do it in a single pass (using MRT to render to both textures simultaneously, or storing the depth in the alpha channel of the colour texture), which may be overkill. If you want to stay fixed-function, you'd need two render-to-texture passes, one for each texture, and so there wouldn't really be an advantage to this method over the original one.

There's also normal-based edge detection, which I have tried as well. It's simpler, though you'll probably need a shader here too. It unfortunately suffers from line-thickness issues, which may or may not be a problem. You'd only need one pass, though.
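For comparison, a sketch of the normal-based variant, assuming you've first rendered world-space normals (packed into the 0..1 range) into a texture (again, names and threshold are illustrative):

# Marks pixels whose neighbours' normals point in clearly different
# directions - creases and silhouettes - as black.
NORMAL_EDGE_FRAG = """
uniform sampler2D normal_tex;
uniform vec2 texel;
void main() {
    vec3 n  = texture2D(normal_tex, gl_TexCoord[0].st).xyz * 2.0 - 1.0;
    vec3 nx = texture2D(normal_tex, gl_TexCoord[0].st + vec2(texel.x, 0.0)).xyz * 2.0 - 1.0;
    vec3 ny = texture2D(normal_tex, gl_TexCoord[0].st + vec2(0.0, texel.y)).xyz * 2.0 - 1.0;
    float edge = step(0.2, 2.0 - dot(n, nx) - dot(n, ny));  // threshold is a guess
    gl_FragColor = vec4(vec3(1.0 - edge), 1.0);
}
"""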

HTH,
Ian