
Re: gEDA-user: HELP with Icarus Verilog, 0.7!



Hi,

Samuel A. Falvo II wrote:
> On Tuesday 06 April 2004 05:15 pm, Mike Jarabek wrote:
>> You are correct, it's not always a bad idea.  I did not qualify my
>> statement adequately.  When you use the clock as part of your logic
>> you automatically double the frequency that your circuit must operate
>> at.  This is probably not a big deal when your clock is under 20MHz,
>> but it certainly affects your design at speeds like 100MHz.  If the

> Oh, well, yeah.  :)  No arguments there.  In fact, I'm surprised that
> modern microprocessors don't have input-only and output-only buses
> exposed at the physical pin boundary.  I know it takes a lot of pins to
> be able to do so, but egads, with 400+ pin BGAs, you'd think it would
> draw some attention.  I suppose point-to-point buses like HyperTransport
> offer similar functionality to having split, fully synchronous
> interfaces, though.
>
> My ultimate goal is to produce an asynchronous, MISC-architecture CPU
> that I can use in a future design, specifically because the reduction
> in EMI and power consumption more than outweighs its disadvantages.
> But that is a very, very long-term project for me, and is not a high
> priority.  My immediate priority is to deliver my product via the 65816
> microprocessor, something that I know will work, and do so within
> budget.  Seeing fully asynchronous cores on opencores.org gave me the
> confirmation that such a thing is possible.

Sounds cool, but very difficult.
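To make the earlier point concrete, about using the clock as part of
your logic: here is a minimal Verilog sketch (names and modules are my
own, for illustration only) contrasting a gated clock, where the clock
feeds ordinary logic, with the fully synchronous clock-enable style.

```verilog
// Clock used as part of the logic: the AND gate produces a derived
// clock, so any glitch on `enable' relative to `clk' can clock the
// flop spuriously, and both halves of the clock period now matter
// to timing.
module gated_clock (
    input  wire clk,
    input  wire enable,
    input  wire d,
    output reg  q
);
    wire gclk = clk & enable;      // gated clock -- avoid in FPGAs
    always @(posedge gclk)
        q <= d;
endmodule

// Fully synchronous alternative: the clock drives only the clock
// pin, and `enable' is sampled like any other data input.
module clock_enable (
    input  wire clk,
    input  wire enable,
    input  wire d,
    output reg  q
);
    always @(posedge clk)
        if (enable)
            q <= d;
endmodule
```

FPGA flip-flops have a dedicated clock-enable pin, so the second form
costs nothing and keeps the whole design on one clock edge.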
<SNIP>

>> Using a clock as a qualifier inside of an FPGA can be dangerous.
>> Especially if you are using a look up table based FPGA, if one of the
>> signals is asynchronous to the clock, glitches can appear on an

> Forgive my naivety, but why is this?  This seems like such a basic
> thing to be able to do.  I know that many FPGAs have dedicated lines
> to support a clock, complete with clock skew compensation buffers
> throughout the chip.  But what if the clock were also routed to a data
> input, for use in a border interface of some kind?
This situation arises because the various inputs to a LUT have differing
propagation delays (Tpd) to the output.  The LUT has an `n'-bit decoder
on it that selects the appropriate row in the table; this decoder takes
time to settle, and may behave badly, with respect to glitches on the
output, if too many inputs are changing at once.

There are dedicated resources inside FPGAs that can be used instead.
For example, to select one of two clocks in a Xilinx FPGA you can use
the three-state resources, or instantiate a `MUXCY'.  Either way avoids
the LUT, which is where the fundamental problem lies.  You cannot rely
on the synthesis tool to infer the real mux when a LUT is what is
normally used.
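As a rough Verilog sketch of the difference (module and instance names
are mine; BUFGMUX is the Xilinx glitch-free clock-mux primitive, and
exact primitive names vary by device family, so check the vendor's
library guide):

```verilog
// Hazard-prone: synthesis will typically map this 2:1 clock mux
// onto a LUT, whose output can glitch while the decoder settles.
module clkmux_lut (
    input  wire clk_a,
    input  wire clk_b,
    input  wire sel,
    output wire clk_out
);
    assign clk_out = sel ? clk_b : clk_a;  // inferred LUT mux
endmodule

// Safer: instantiate the dedicated clock-mux primitive directly,
// bypassing the LUT fabric entirely (MUXCY in the carry chain is
// another glitch-free option, as noted above).
module clkmux_prim (
    input  wire clk_a,
    input  wire clk_b,
    input  wire sel,
    output wire clk_out
);
    BUFGMUX u_mux (
        .I0 (clk_a),
        .I1 (clk_b),
        .S  (sel),
        .O  (clk_out)
    );
endmodule
```

The point is that you must ask for the dedicated resource explicitly;
left to itself, the tool will happily build the first version.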

<SNIP>

Mike
-- 
--------------------------------------------------
                              Mike Jarabek
                                FPGA/ASIC Designer
 http://www.istop.com/~mjarabek
--------------------------------------------------