Re: [f-cpu] "Tree"
- To: Bruno Bougard <firstname.lastname@example.org>
- Subject: Re: [f-cpu] "Tree"
- From: Juergen Goeritz <email@example.com>
- Date: Wed, 9 Jan 2002 16:10:23 +0100 (MET)
- Cc: firstname.lastname@example.org
- Delivered-To: email@example.com
- Delivery-Date: Wed, 09 Jan 2002 10:11:23 -0500
- In-Reply-To: <3C3C4B59.AB3BFC6C@imec.be>
- Reply-To: firstname.lastname@example.org
- Sender: email@example.com
> > thank you very much for this explanation. Since I am from both
> > worlds (hardware and software), I do not completely agree with
> > your statement, though. Let me take you (all) on a short excursion:
> > The top-down approach reflects the division of the complete
> > problem into smaller pieces. The bottom-up process means the
> > implementation and integration of small pieces to build a whole.
> > Only both parts together will ever produce a result.
> > I don't know of any development where the spec wasn't changed
> > during the development process (or left incomplete), or where the
> > technology was adapted to reach the specified goal. I am not
> > talking about imitation (re-implementation of an existing chip) here.
> Ok, that is not incompatible with what I said, in fact. I never
> said that you shouldn't iterate in the design process. This is often
> necessary, but it stays top-down. But ok, let's not make it too
> philosophical; it depends how you see it.
Why? Look at the design of man: you have one half
of the brain working top-down (analysis) and the other
half bottom-up (integration). There have been a lot of
experiments to get an idea of how the brain works.
If you don't use both halves, you are nothing and will
hardly achieve anything. Okay, now I somehow got philosophical ;-)
> My main message is: 'keep the abstraction levels well separated' and
> do things step by step.
Abstraction inside a design is only needed as long as
you cannot grasp it as a whole. That's why you have to
use top-down as long as the design exceeds your own
capabilities, i.e. imagination, and bottom-up once you can grasp it.
> > But let's go back to top-down. You have to define the outer
> > interface and function of your 'whole' system first, then you
> > can start filling it with something - the division into smaller
> > parts, for which you also define the interfaces and some idea
> > of their functionality. This is a recursive process until
> > you end up with trivial entities - in our case, macros or gates.
> > There are quite a few possibilities to divide the functionality.
> > The last part of this process is handed over to the synthesizer.
> > And as you stated above, professional IC designers know how
> > their synthesizer works and let it do the rest (optimization)
> > for ease of design description. They adapt their writing to the
> > synthesizer to get the best result. I see this as a symbiosis
> > with, and a focus on, the synthesizer.
> More or less ...
> > Why is it such a problem to optimize the design?
> > This is due to the two-dimensional structure of the problem
> > (inputs vs. one output). There are ways to reduce such a table
> > to an AND/OR/NOT representation in a really short time (this
> > is PLD talk) - optimizers like FACT (single-bit) or ESPRESSO
> > (multi-bit) do it very nicely. By the way, I also worked with
> > a multi-bit optimizer called BRUNO in those days.
> I didn't know that I have the same name as an optimizer ;-)
Maybe you can get a LOG/iC PLA package from somewhere to test it. ;-)
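The table-to-AND/OR/NOT reduction mentioned above can be sketched in miniature. This is a toy version of the Quine-McCluskey merging step, not how ESPRESSO or FACT actually work internally (they use heuristic cube operations precisely to avoid this exhaustive pairing); all names here are illustrative:

```python
# Toy two-level minimization sketch: repeatedly merge minterms that
# differ in exactly one bit to find the prime implicants (product
# terms) of a single-output truth table.

def merge(a, b):
    """Merge two cubes (strings over '0','1','-') differing in one literal."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i+1:]
    return None

def prime_implicants(minterms, nbits):
    cubes = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while cubes:
        merged, used = set(), set()
        for a in cubes:
            for b in cubes:
                m = merge(a, b)
                if m:
                    merged.add(m)
                    used.update((a, b))
        primes |= cubes - used   # cubes that merged no further are prime
        cubes = merged
    return primes

# XOR-like f(a,b) = a'b + ab' cannot be reduced to fewer AND terms:
print(sorted(prime_implicants({1, 2}, 2)))        # ['01', '10']
# f covering minterms 0..3 of (a,b,c) collapses to the single cube a':
print(sorted(prime_implicants({0, 1, 2, 3}, 3)))  # ['0--']
```

The XOR case is exactly why the AND/OR form alone can be a poor fit: the two product terms won't merge, while a single XOR gate covers them.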
> > Now let's come to ASICs and FPGAs:
> > Since n-input AND (OR) gates are not available (or hard to
> > implement with good timing) on an ASIC, other algorithms
> > were developed. These algorithms try to find other patterns
> > inside the table. XOR detection is one of these. Most very
> > small gates have some negated inputs or outputs. When you
> > can use them in the design, the size will be smaller than with
> > the non-inverting gates.
> > Thus the synthesizer tries (and I mean it!) to find an optimal
> > solution. But there is a pity - the gate fanout (the power to
> > drive other gates). The smallest gates only have a low fanout.
> > Higher fanout costs additional size. AND - the gate delay is
> > fanout-dependent. This multiplies another two dimensions into
> > the problem. And we are still talking about a single output bit.
> > Similar patterns may be shared by multiple output bits, though...
> > Delay and size optimization is the way, but where is the optimum?
> > And after that you still have to place those gates onto the chip,
> > and wiring also takes size and adds delays and crosstalk...
> Ok, that's a nice talk. I think your message is 'synthesis is a complex
> problem', isn't it? I fully agree.
> Just one remark. The synthesizer doesn't try to find an 'optimal' solution
> but a solution that matches the constraints you give it. This is quite
> different from a mathematical point of view.
Yes, you are right. But the constraints give the definition
of an optimum solution ;-)
I don't think anybody has proved yet that there can
be an optimal solution where size and delay are both minimal.
By the way, that reminds me of the relation between mass and
time. But that one must have some sort of a minimum.
Off-topic again ;-)
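A toy numeric sketch of why size and delay need not share a minimum: with a simple linear delay model (delay falls with drive strength, area rises with it), every gate-size candidate can be Pareto-optimal, i.e. none beats the others in both metrics at once. All numbers and names below are invented for illustration:

```python
# Toy gate-sizing tradeoff: delay = intrinsic + load / drive,
# area grows with drive strength. The candidates form a Pareto
# front rather than converging on a single joint optimum.

candidates = [
    # (name, area, drive_strength) - all values made up
    ("X1", 1.0, 1.0),
    ("X2", 1.8, 2.0),
    ("X4", 3.2, 4.0),
]
LOAD = 8.0       # fanout load the gate must drive
INTRINSIC = 0.5  # fixed part of the gate delay

def delay(drive):
    return INTRINSIC + LOAD / drive

points = [(name, area, delay(drive)) for name, area, drive in candidates]
for name, area, d in points:
    print(f"{name}: area={area:.1f} delay={d:.2f}")

# A point is Pareto-optimal if no other point is better in BOTH metrics.
pareto = [p for p in points
          if not any(q[1] < p[1] and q[2] < p[2] for q in points)]
print("Pareto-optimal:", [p[0] for p in pareto])  # all three survive
```

A constraint set ("delay < 3.0, minimize area") picks one point off this front, which is the sense in which the constraints define the optimum.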
> In fact, what you say is the best justification for what I try to say: the
> problem is complex, and the best way to handle this complexity is to work in
> abstraction layers. One after the other, and with feedback when needed.
Yeah. But why do you think it's impossible to write another
optimizer fitted to the F-CPU? The biggest problems, like automatic
state assignment or automatic computation with different
flip-flop types, are not yet solved because of the high computation
time of existing tools.
And is there a tool available at all to verify design flatness?
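To make the state-assignment problem concrete, here is a minimal sketch of the two extremes such a tool chooses between: binary encoding (fewest flip-flops, usually more next-state logic) versus one-hot (one flip-flop per state, usually simpler logic). The state names and helper functions are hypothetical; a real tool searches the huge space of assignments in between, which is where the computation time goes:

```python
# Toy FSM state-encoding sketch (illustrative only).
from math import ceil, log2

def binary_encoding(states):
    """Pack n states into ceil(log2(n)) flip-flops."""
    bits = max(1, ceil(log2(len(states))))
    return {s: format(i, f'0{bits}b') for i, s in enumerate(states)}

def one_hot_encoding(states):
    """One flip-flop per state; exactly one bit set at a time."""
    n = len(states)
    return {s: ''.join('1' if i == j else '0' for j in range(n))
            for i, s in enumerate(states)}

states = ["IDLE", "FETCH", "DECODE", "EXEC", "WRITE"]
print(binary_encoding(states))   # 3 flip-flops for 5 states
print(one_hot_encoding(states))  # 5 flip-flops, one per state
```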
To unsubscribe, send an e-mail to firstname.lastname@example.org with
unsubscribe f-cpu in the body. http://f-cpu.seul.org/