Re: [f-cpu] Winograd DCT on my seul.org account
On Sat, 20 Apr 2002, Yann Guidon wrote:
>Juergen Goeritz wrote:
>> On Sat, 20 Apr 2002, Yann Guidon wrote:
>> >Juergen Goeritz wrote:
>> I spent a long, long time of my life optimizing code for
>> processors - maybe you understand when I say, no more!
>> I want an automatic approach.
>i did not spend my life doing this but as far as i remember, i have
>spent more than ten years trying to work around a lot of problems.
>i too say "no more". However you can't solve a problem you're foreign to,
>and you can't automate something you can't do yourself.
>A lot of my efforts are an attempt to understand all i can from the
>matter and find realistic solutions. it requires some abnegation,
>such as watching over this list for example ;-P
Nor did I spend my whole life with it :-)
In the beginning I coded in assembler only. That was
around 1975 when I was 14. And I agree with you -
one has to know about the things to find some good
solutions. And to get there you have to try and try,
make a lot of mistakes and learn and learn from them.
>> >very nice comparison, even though i didn't think about it :-)
>> Maybe you just didn't see it ;-)
>it was not the original idea (giving gcc an over-simplified view of the
>processor and doing the rest).
Yes, and it's done everywhere - of course, even on
>> I would not try to reduce the input capabilities for the
>> developer. Lately I ported a pascal program to C that was
>> heavily using overlays. It was a bit tedious. If I had
>> a pascal compiler with linux capable of overlays I could
>> have just saved me this work. And imagine you want to
>> rewrite the whole fortran library stuff. Better don't!!!
>however, i found that expressing algos in VHDL was not so difficult,
>though it needed some learning. But it was easier to learn VHDL than C
>(for me). C is used everywhere and one feels a bit forced, but VHDL
>is often taught very superficially and the benefits are not apparent.
Please don't code software in VHDL - but this is just
my personal view and you could argue with me about it.
I learned C because it is THE U**X language. And most
(mostly embedded) systems I worked with needed a
task-switching operating system with dynamic driver
handling. Basically I am familiar with the whole tree
of different U**X versions and did a lot of driver
development, system porting and bring-up testing in
the network area.
>> In general I propose an additional step in the toolchain
>> compile/optimize - assemble - link - optimize - load.
>i seem to have understood something like that.
>Now my idea was something like
> cpp - gcc - as=>simulates the dumb CPU and optimize<= - link - load
We are not so far apart. Just put 'link' to the left, too.
Linking is a dumb task that need not be addressed by your
optimization and thus should better be left out. It's like
cpp - gcc - as - link => for the dumb CPU => optimize - load
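To make the "afterburner" idea concrete, here is a minimal sketch of such a post-link peephole pass. The instruction format and the rules are invented for illustration - these are not real F-CPU opcodes, just a stand-in for naive code emitted against the dumb-CPU model:

```python
# Sketch of a post-link "afterburner" peephole optimizer.
# Instructions are (op, dst, src) tuples in a made-up three-address form;
# "move r1, r2" means r1 <- r2. The opcodes are hypothetical.

def afterburn(code):
    """Apply simple peephole rules to a linked instruction list."""
    out = []
    for insn in code:
        op, dst, src = insn
        # Rule 1: drop moves that copy a register onto itself.
        if op == "move" and dst == src:
            continue
        # Rule 2: drop a move that copies back the result of the
        # immediately preceding move (move a<-b ; move b<-a).
        if out and op == "move":
            pop, pdst, psrc = out[-1]
            if pop == "move" and pdst == src and psrc == dst:
                continue
        out.append(insn)
    return out

naive = [("move", "r1", "r2"),
         ("move", "r2", "r1"),   # redundant copy-back
         ("move", "r3", "r3"),   # no-op
         ("add",  "r4", "r1")]
print(afterburn(naive))   # the two useless moves disappear
```

A real pass would of course work on the binary's code sections and know the actual encodings; the point is only that it runs after linking, so it sees the whole program and nothing is hidden behind relocations.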
>maybe if we provide a simplified ISA to gcc, compilation and debug
>would be simpler...
Hm, hm! Whether debugging is simple depends on whether
you let gcc use a real opcode subset. Otherwise you
have to have a sort of '-O0' switch in your afterburner
optimizer. Debugging heavily optimized programs brings
a heap of invisibilities anyway and thus could be skipped.
If you debug you want to see as much as possible!
For verification of heavily optimized program code
you want to take a look at the assembler anyway and do
not debug at the high-level-language level...
When I think back over years of work with gcc, there were
compiler versions where you couldn't use all optimization
switches because of malfunctions in the optimization. With
new CPU support I see a similar problem coming. That's why
I previously proposed building a basic subset block
of f-cpu opcodes (which already makes a working processor)
to create a simple model for gcc that could be verified by
some automatic means. And you may allow only those specific
optimizations inside gcc which do not spoil your second
optimization stage.
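One way to read "verified by some automatic means" is differential testing: run a small simulator of the basic subset against independently written reference semantics on random operands. The sketch below assumes a toy three-opcode integer subset; the names and the split into simulator vs. reference are invented here, not taken from any F-CPU model:

```python
# Differential test of a toy simulator for a hypothetical basic
# opcode subset against reference semantics, on random operands.
import random

def simulate(op, a, b, bits=64):
    """Toy simulator for a minimal integer subset (invented opcodes)."""
    mask = (1 << bits) - 1
    if op == "add":
        return (a + b) & mask
    if op == "sub":
        return (a - b) & mask
    if op == "xor":
        return a ^ b
    raise ValueError(op)

# Reference semantics, written independently (modular arithmetic).
reference = {
    "add": lambda a, b, m: (a + b) % (m + 1),
    "sub": lambda a, b, m: (a - b) % (m + 1),
    "xor": lambda a, b, m: (a ^ b) & m,
}

def check(trials=10000, bits=64, seed=0):
    rng = random.Random(seed)
    mask = (1 << bits) - 1
    for _ in range(trials):
        op = rng.choice(sorted(reference))
        a, b = rng.randrange(mask + 1), rng.randrange(mask + 1)
        assert simulate(op, a, b, bits) == reference[op](a, b, mask), (op, a, b)
    return True
```

The same harness could later drive gcc's simple model and the hardware simulator against each other; random testing is no proof, but it catches exactly the kind of optimization malfunction described above.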
>> Some years ago there were some developments like this
>> by the Amiga guys. They kept two versions of code in
>> their system. A special code file (hypothetical assem.)
>> that was converted to processor code by the loader at
>> activation time. The system kept both versions and did
>> only exchange the unconverted ones with other systems.
>> But they mixed their idea up too much with parts that
>> cracked system reliability finally. The part of post-
>> optimization and adaption to the processor was a good
>> idea though and should be pursued by f-cpu, of course
>> in an adapted manner. ;)
>well, if you have enough time to hack that, go ahead...
Hey, I say that your afterburner optimization approach
is exactly fulfilling that...
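The Amiga-style scheme described above can be sketched in a few lines: keep a portable ("hypothetical assembler") form of the program, translate it to native code once at activation time, and only ever export the untranslated form to other systems. The class, the method names, and the one-to-one translation table are all invented for illustration:

```python
# Sketch of the two-version scheme: portable code is the only form
# exchanged between systems; native code is produced by the loader.
# The opcode names and translation table are hypothetical.

PORTABLE_TO_NATIVE = {   # per-host translation table (assumed)
    "pmov": "move.q",
    "padd": "add.q",
}

class Program:
    def __init__(self, portable_code):
        self.portable = list(portable_code)  # the exchangeable version
        self.native = None                   # filled in at load time

    def activate(self):
        """Loader step: translate once, cache the native version."""
        if self.native is None:
            self.native = [PORTABLE_TO_NATIVE[op] for op in self.portable]
        return self.native

    def export(self):
        """Only the unconverted version ever leaves this system."""
        return list(self.portable)

p = Program(["pmov", "padd"])
p.activate()
print(p.export())   # still the portable form, never the native one
```

A real afterburner would translate and optimize whole sections rather than map opcodes one-to-one, but the invariant is the same: the native version is a disposable cache, the portable version is the ground truth.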
>but before you optimise, take some time to learn how
>to do that. I know that i don't write enough papers
>about it, but you already know that F-CPU is another
>new type of beast which needs its own techniques
>and mindset. It was designed almost as a Trojan Horse,
>seeming quite similar to others (to not scare the execs
>out there) but the guts and programming habits have
>probably no equivalent.
When you have finished your next paper (explaining it
in more detail with other optimization hints) I could
probably convince myself to start coding such a beast...
>As a "HW guy", you certainly have enough background
>to understand most things, so i'm only worried about
>my own ability to teach them.
Good teaching lessons BTW ;-)
To unsubscribe, send an e-mail to email@example.com with
unsubscribe f-cpu in the body. http://f-cpu.seul.org/