
Re: gEDA-user: Free GNU/Linux hardware design tools



Stephen Williams wrote:
You will need some implementation tools from Altera. I believe
the quartus package is free (as in "free beer") and I'm pretty
sure that does *not* include a simulator. It should include Verilog
synthesis.  [...]

Christian E. Jørgensen wrote:
Simulation? Synthesis? Implementation tools? Can you put these in terms
that even a software developer who doesn't know his latches from his
flip-flops can understand? :-)
The engineering flow looks something like this:

* Write source code in Verilog or VHDL using your favorite editor.
Nothing unusual about that, other than the odd language capabilities.
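For instance, a minimal sketch of what such source code looks like (a
hypothetical 8-bit counter; the module and signal names are purely
illustrative, not from any real design):

```verilog
// Hypothetical 8-bit counter, purely illustrative.
module counter (
    input  wire       clk,    // clock input
    input  wire       reset,  // synchronous reset
    output reg  [7:0] count   // current count value
);
    always @(posedge clk) begin
        if (reset)
            count <= 8'd0;
        else
            count <= count + 8'd1;
    end
endmodule
```

Note the "odd capability": the always block describes hardware that
reacts to every clock edge, not a function that is called.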

* Write more code, called test benches, for regression testing the code.
With logic, you can't just run it on the command line and poke buttons
in the GUI to see whether it works. Everything happens much faster.
Test benches run either in the simulator or in the chip, as needed.
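As a sketch, a self-checking test bench for an 8-bit counter might look
like this (names are illustrative; $display and $finish are standard
Verilog system tasks):

```verilog
// Hypothetical test bench for an 8-bit counter, purely illustrative.
module counter_tb;
    reg        clk = 0;
    reg        reset = 1;
    wire [7:0] count;

    counter dut (.clk(clk), .reset(reset), .count(count));

    always #5 clk = ~clk;          // 10 time-unit clock period

    initial begin
        #12 reset = 0;             // release reset after one clock edge
        #100;                      // let it count for ten edges
        if (count == 8'd10)
            $display("PASS: count = %0d", count);
        else
            $display("FAIL: count = %0d", count);
        $finish;
    end
endmodule
```

Because the bench checks its own result, the same idea can be reused
in the chip, where there is no console to poke at.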

* Various combinations of source files are sent into the compiler.
Like the C compiler, the Verilog compiler works in multiple stages.
However, these are often invoked separately instead of as a single pass.
- Source level preprocessor works much the same way, with minor differences
- Syntax and structure recognition is also similar, and just as invisible

* GCC generates code for a generic processor architecture with
registers, a program counter and a stack. The logic tool
"synthesizes" the "functional design" using a catalog of thousands
of different tiny possible logic circuits with useful capabilities.
- In addition to the binary format, this can usually be dumped out
as a functional simulation file for use with a generic simulator.

* C language inclusion of libraries occurs very late in compilation.
In contrast, a logic tool "maps" the items in that catalog into the
actual capabilities of the target logic chip and the features it has.
- In addition to the binary format, this can usually be dumped out
as a mapping simulation file for use with a generic simulator.

[ everything to this point can be done, if desired, by Icarus ]
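As a concrete sketch, that Icarus part of the flow can be driven from
the command line roughly like this (assuming a design in counter.v and
its test bench in counter_tb.v; the file names are illustrative):

```shell
# Compile design plus test bench into a simulation executable,
# then run it with the vvp runtime (both ship with Icarus Verilog).
iverilog -o counter_sim counter.v counter_tb.v
vvp counter_sim
```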

* GCC includes an assembler that generates actual instructions yet
leaves long range references intact. The logic tool collects together
adjacent chunks of logic chip that can be consolidated for speed
and for smaller size, maximizing the efficiency of how the data flows
and creating large fixed blocks that can be dropped into the chip.
- In addition to the binary format, this can usually be dumped out
as a block simulation file for use with a generic simulator.

[ chip manufacturers do not release everything needed to do that step ]

* While GCC's linker is primarily associated with finding references,
the logic tool has the additional concern that signals take time to
travel and so long range references are slower. It's like having
one of those processors where jump and call instructions cannot reach
further than +/- 127 bytes, so multiple hops are needed to get around.
The task of getting everything connected together efficiently is called
"place and route" and aims to meet "timing" requirements.
- In addition to the binary format, this can usually be dumped out
as a timing simulation file for use with a generic simulator.

* In the same way that the runtime image of an executable program has
to be dumped to file in a way that the operating system's loader will
understand, the chip design has to be written to file in a way that
either the ASIC house will accept or the FPGA's loader will decode.
The latter, especially, is a protected trade secret to discourage
people from reverse engineering third parties' logic chip designs.
- In addition to the binary format, this can usually be dumped out
as a routed simulation file for use with a generic simulator.


... note that Icarus is a generic simulator in the context above.
When using Icarus for simulation, the mapping stage essentially aims
to generate a data structure that a normal single processor can traverse
very efficiently, to work out what a parallel processing logic chip
would trivially compute all at the same time.

Also, the logic languages have different subsets that are supported
for simulation and for synthesis so it is easy to write code that
can do one and not the other. Planning the regression strategy is
thus not quite as trivial as implied above. Also, the code in the
final target executes a few million times faster than in the simulator
so you need to be very selective about how regression tests execute.
Therefore, although it is convenient to run a test bench in simulation
and get all the "GDB"-like benefits from that, it may run all day.
If the test bench can be written such that it may be used in the target,
the regression test would easily finish a million times faster,
i.e. in less than a tenth of a second. Then, after running dozens
of test benches this way in a few seconds, we have learnt which one
is failing and can just simulate a single one ... watching closely.
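When simulating that one failing bench and watching closely, a standard
trick is to dump a waveform file for inspection in a viewer such as
GTKWave; a minimal sketch ($dumpfile and $dumpvars are standard Verilog
system tasks, the file and module names are illustrative):

```verilog
// Illustrative: dump a VCD waveform so a failing run can be inspected.
module wave_tb;
    initial begin
        $dumpfile("waves.vcd");   // VCD output file name
        $dumpvars(0, wave_tb);    // dump all signals below this module
        // ... stimulus for the design under test goes here ...
        #100 $finish;
    end
endmodule
```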

For people doing ASICs, the simulation speed ratio is in the billions
so it would take a year for a cluster of several dozen computers to
simulate less than a second of their proposed design's execution.
It costs about $10M to make a single compilation into ASIC silicon.

... hope that helps,
Alex.