
Re: gEDA-user: gEDA interview with DJ at DevCon



On Wed, 2010-10-27 at 16:53 -0400, DJ Delorie wrote:
> for your amusement...
> 
> http://www.eevblog.com/2010/10/27/eevblog-121-geda-interview-with-dj-delorie/
> 
> I probably got the odd fact or two wrong, but I think it went well...


Sounded really good DJ, thanks for mentioning the GL stuff! I should
really get that upstreamed at some point. Some of it still has a way to
go, though it is not as far off as it once was ;)

It was a slight shame the video started with an advert clip showing
Altium's 3D view in action. The guy who interviewed you has a pretty
decent tutorial document on PCB design, which the CUED people have had
on their bootable Linux engineering CDs for a while now. I came across
it again when searching to see whether he had any commercial ties to
Altium. (I didn't find any, other than that he clearly knows Altium and
sometimes posts responses to users seeking help with it.)


I've been playing with performance optimisation recently: using VBOs to
upload data to the GPU, caching polygon tessellation results from the
sweepline algorithm, and generally experimenting with how the branch
renders things.
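
Roughly, the VBO caching looks something like the sketch below. This is
not the branch code; the function names and the two-floats-per-vertex
tristrip layout are just assumptions for illustration:

    /* Minimal sketch of caching tessellated polygon vertices in a VBO so
     * they are uploaded to the GPU once and reused every frame.  Assumes
     * a GL >= 1.5 context and that tristrip_coords holds the sweepline
     * tessellator's output as x,y pairs. */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    static GLuint polygon_vbo = 0;

    static void
    cache_polygon_vertices (const GLfloat *tristrip_coords, int n_vertices)
    {
      if (polygon_vbo == 0)
        glGenBuffers (1, &polygon_vbo);

      glBindBuffer (GL_ARRAY_BUFFER, polygon_vbo);
      /* STATIC_DRAW: the tessellation only changes when the polygon is
       * actually edited, not every frame. */
      glBufferData (GL_ARRAY_BUFFER,
                    n_vertices * 2 * sizeof (GLfloat),
                    tristrip_coords, GL_STATIC_DRAW);
      glBindBuffer (GL_ARRAY_BUFFER, 0);
    }

    static void
    draw_cached_polygon (int n_vertices)
    {
      glBindBuffer (GL_ARRAY_BUFFER, polygon_vbo);
      glEnableClientState (GL_VERTEX_ARRAY);
      glVertexPointer (2, GL_FLOAT, 0, (void *) 0);
      glDrawArrays (GL_TRIANGLE_STRIP, 0, n_vertices);
      glDisableClientState (GL_VERTEX_ARRAY);
      glBindBuffer (GL_ARRAY_BUFFER, 0);
    }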

I implemented a pixel shader that computes the shading for rounded line
caps and filled circles / vias etc. on the GPU, which seems like it
might be a win. Rather than using a huge triangle fan to approximate the
circular geometry, you just upload rectangular geometry with appropriate
texture coordinates which map onto a (virtual) circle.
 __________________
|\ |   _____=== |\ |  (E.g. a line has 6 triangles, 
|_\|===_________|_\|     not 2 + LOTS for the caps)

The pixel shader computes the distance from the interpolated texture
coordinates to the circle origin and discards pixels where the distance
is > 1.0. This kind of procedural texturing allows the same geometry to
render at all scales without loss of quality.
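
In GLSL the test boils down to something like the sketch below. This is
just an illustration of the idea, not the shader from my branch; it
assumes the texture coordinates are set up so the cap / circle centre
maps to (0,0) and the radius to 1.0:

    /* Minimal sketch of the discard-outside-the-circle fragment shader.
     * Not the branch shader; the texture coordinate convention is an
     * assumption made for illustration. */
    static const char *circle_fragment_shader_source =
      "void main ()\n"
      "{\n"
      "  vec2 pos = gl_TexCoord[0].st;\n"
      "  /* dot (pos, pos) > 1.0 is the same test as length (pos) > 1.0,\n"
      "   * but avoids the square root. */\n"
      "  if (dot (pos, pos) > 1.0)\n"
      "    discard;              /* outside the virtual circle */\n"
      "  gl_FragColor = gl_Color;\n"
      "}\n";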


I've tried rendering board layers into FBO textures and texturing simple
quads too. This is faster in some cases, but slower in others, and it
trades GPU time for storage space. It allows a nice quick 3D view
without re-rendering all of the geometry in every frame, but it does
give some texture aliasing (I'm not generating mipmaps at this point).
Really, at some perspectives it is better just to re-render onto the
buffer; otherwise you need a large hi-res texture to get the detail.
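
For the curious, the render-to-texture side is the standard FBO dance.
A minimal sketch using the EXT_framebuffer_object entry points (the
function names here are made up, it is not the code from my branch):

    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    static GLuint layer_texture, layer_fbo;

    /* Create a colour texture and an FBO which renders into it. */
    static void
    create_layer_target (int width, int height)
    {
      glGenTextures (1, &layer_texture);
      glBindTexture (GL_TEXTURE_2D, layer_texture);
      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                    GL_RGBA, GL_UNSIGNED_BYTE, NULL);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

      glGenFramebuffersEXT (1, &layer_fbo);
      glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, layer_fbo);
      glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT,
                                 GL_COLOR_ATTACHMENT0_EXT,
                                 GL_TEXTURE_2D, layer_texture, 0);
      glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, 0);
    }

    /* Draw calls issued between these two brackets land in layer_texture
     * instead of the window, ready to be sampled when compositing. */
    static void
    begin_layer_render (int width, int height)
    {
      glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, layer_fbo);
      glViewport (0, 0, width, height);
      glClearColor (0.f, 0.f, 0.f, 0.f);
      glClear (GL_COLOR_BUFFER_BIT);
    }

    static void
    end_layer_render (void)
    {
      glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, 0);
    }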

I think for 2D view (which is obviously a common case), we could
potentially use the layer textures to speed up rendering, as a single
pixel shader (or perhaps even fixed-function multi-texturing) can
perform a blend of multiple (or all?) layers in one rendering pass.

This should be a win for frames where cached layer textures can be used
without needing to redraw them due to geometry changes. However, just
how much (if anything) this gains you will depend on the memory
bandwidth / resources available to the texture sampler.
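
As an illustration of that single-pass composite (purely a sketch, not
code from the branch; the fixed four-layer count, the uniform names and
the premultiplied "over" blend are all assumptions):

    /* Illustrative fragment shader which composites four cached layer
     * textures in one pass.  Layer coverage is assumed to live in each
     * texture's alpha channel, tinted by a per-layer colour uniform. */
    static const char *layer_blend_shader_source =
      "uniform sampler2D layer0, layer1, layer2, layer3;\n"
      "uniform vec4 color0, color1, color2, color3;\n"
      "\n"
      "vec4 blend (vec4 below, sampler2D layer, vec4 color)\n"
      "{\n"
      "  float coverage = texture2D (layer, gl_TexCoord[0].st).a;\n"
      "  vec4 src = color * coverage;  /* premultiplied */\n"
      "  return src + below * (1.0 - src.a);\n"
      "}\n"
      "\n"
      "void main ()\n"
      "{\n"
      "  vec4 acc = vec4 (0.0);\n"
      "  acc = blend (acc, layer0, color0);\n"
      "  acc = blend (acc, layer1, color1);\n"
      "  acc = blend (acc, layer2, color2);\n"
      "  acc = blend (acc, layer3, color3);\n"
      "  gl_FragColor = acc;\n"
      "}\n";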

Unfortunately, Intel GPUs (such as mine) aren't so performant, so it is
a struggle to get really stellar performance from any method. If you use
pixel shaders for drawing circular geometry, you get more over-fill from
discarded pixels around the edges of objects. If you go with triangle
fans, you end up with huge amounts of geometry to send to the GPU. It is
also pretty tricky to figure out where the GPU bottlenecks are.


It would of course be nice to support fallbacks for cards which don't
support the programmable pipeline, so some sort of tri-fan / image
texture based approach will always be necessary if pixel shaders aren't
available.
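
For reference, the fallback is just the classic triangle-fan
approximation; a sketch (immediate mode for brevity, where real code
would batch the vertices into arrays or a VBO):

    #include <math.h>
    #include <GL/gl.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Fixed-function fallback: approximate a filled circle with a
     * triangle fan of "segments" slices around a centre vertex. */
    static void
    draw_circle_fan (float cx, float cy, float radius, int segments)
    {
      int i;

      glBegin (GL_TRIANGLE_FAN);
      glVertex2f (cx, cy);                          /* fan centre */
      for (i = 0; i <= segments; i++)
        {
          float angle = 2.0f * (float) M_PI * (float) i / (float) segments;
          glVertex2f (cx + radius * cosf (angle),
                      cy + radius * sinf (angle));
        }
      glEnd ();
    }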


There are some changes I need to make in PCB's core to properly exploit
the performance attainable by caching geometry in the GTK/GL (and
perhaps other) HIDs:

1. Add specific damage calls which identify which layers and/or objects
need redrawing (a rough sketch of what such hooks might look like
follows this list). Caching by layer is an obvious thing to do, since we
typically only edit layer by layer; only adding / changing pins or vias
will damage all layers at once.


2. Drop the core's blanket "redraw area" call to the GUI. It doesn't
allow the GUI to cache unchanged geometry between frames. It should be
simple to restore the existing behaviour of GUIs by having them issue
their own invalidate / repaint / queue-repaint calls when they are
notified of geometry changes.

3. Add hooks for GUIs to store cached data against objects. This might
just mean adding HID hooks for the GUIs to be notified of object
creation / deletion so they can maintain geometry caches hashed against
object ID. For testing so far, I've just bastardised the core
data-structures with some extra fields I needed.

4. Probably inevitably, split more of the drawing / look+feel policy
out of the core and into hid/common/ drawing helper functions. The GL
HID has to copy and re-implement various drawing routines to circumvent
incorrect assumptions about drawing order that the core would otherwise
impose.
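
To make (1) and (3) a little more concrete, here is the rough shape of
the hooks I have in mind. To be clear, this is hypothetical: none of
these names exist in PCB's HID interface today.

    /* Hypothetical sketch only: the names and struct layout are made up
     * for illustration.  The idea is that the core reports *what*
     * changed, and each GUI decides which of its cached per-layer /
     * per-object geometry to invalidate. */
    typedef struct hid_render_hooks
    {
      /* Item 1: fine-grained damage, called instead of a blanket
       * "redraw area" whenever geometry on a given layer changes. */
      void (*notify_layer_damaged) (int layer_index);

      /* Pins / vias appear on every copper layer, so they get their own
       * call, which a GUI treats as "all layers damaged". */
      void (*notify_all_layers_damaged) (void);

      /* Item 3: object lifetime notifications, so a GUI can keep a
       * geometry cache hashed against the object ID instead of stuffing
       * extra fields into the core data structures. */
      void (*notify_object_created) (long object_id);
      void (*notify_object_deleted) (long object_id);
    } hid_render_hooks;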

In effect the GTK/GL GUI almost completely bypasses the core rendering
for many things, so it would be nice to give other HIDs the flexibility
to do so as well: detach the policy from the useful drawing routines,
which the GTK/GL HID (and others) could still use. This may mean
exporting / moving some functionality from src/draw.c.


Best wishes,

-- 
Peter Clifton

Electrical Engineering Division,
Engineering Department,
University of Cambridge,
9, JJ Thomson Avenue,
Cambridge
CB3 0FA

Tel: +44 (0)7729 980173 - (No signal in the lab!)
Tel: +44 (0)1223 748328 - (Shared lab phone, ask for me)


