
Re: [f-cpu] F-CPU architecture...

Hi Michael, Hi Yann,

Michael wrote:
My colleague is developing a technique to test a processor via a Software Based Self Test (SBST) that yields very high fault coverage at low power and with short test times.
Sounds interesting. How does that work, approximately?

Starting from a list of structural faults, she generates a program that, when executed, detects those faults.
Like a fault list -> code compiler!
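As a toy illustration of that "fault list -> code compiler" idea (my own sketch, not her actual tool): model an 8-bit adder with stuck-at-0 faults on its input lines as the fault list, and generate one deterministic walking-one pattern per listed fault.

```python
# Toy software-based self-test (SBST): deterministic test patterns
# derived from a structural fault list (stuck-at-0 on adder input bits).
WIDTH = 8

def adder(a, b, stuck_at_0_bit=None):
    """8-bit adder; optionally one bit of input a is stuck at 0."""
    if stuck_at_0_bit is not None:
        a &= ~(1 << stuck_at_0_bit)
    return (a + b) & ((1 << WIDTH) - 1)

# "Fault list -> code compiler": one walking-one pattern per listed fault.
fault_list = list(range(WIDTH))
patterns = [(1 << bit, 0) for bit in fault_list]

def detected(fault):
    # A fault is detected if some pattern makes the faulty and the
    # fault-free adder disagree.
    return any(adder(a, b, fault) != adder(a, b) for a, b in patterns)

coverage = sum(detected(f) for f in fault_list) / len(fault_list)
print(f"fault coverage: {coverage:.0%}")  # prints "fault coverage: 100%"
```

A real SBST generator works on gate-level fault lists, of course; this only shows the shape of the idea.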

Yann Guidon wrote:
Tobias Bergmann wrote:

Yann Guidon wrote:
Do not skip testability issues just because your prototype stage will be FPGA! The LEON made this mistake and was hard to port to ASIC!

don't worry, the BIST is an integral part of the architecture.
it's just that we need a way to verify the chip once it is in ASIC form,
while using no gates at all in the FPGA version. A flag in the config file takes care of that.
The other trouble is that, knowing how the SW world works,
ROM space will never be enough (who remembers the Xbox
fiasco ?). So to avoid bloat, the idea is to create a kind of LFSR
that sends pseudo-random signals to all units,
and reuse the integrated POPCOUNT unit to create a "signature"
that will be checked at the end. That's simple, fast, rather efficient
but the problem is to generate an LFSR that will give 99% coverage.
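The scheme could look roughly like this Python sketch (my own illustration, not the actual F-CPU BIST): a 64-bit Galois LFSR generates pseudo-random operands for a unit under test, and the results are folded into a running signature that is compared against a known-good value at the end. The polynomial is the standard maximal-length x^64 + x^63 + x^61 + x^60 + 1; the seed, round count and fold are illustrative choices.

```python
# Sketch of LFSR-driven BIST with signature check (illustrative values).
MASK = (1 << 64) - 1
POLY = 0xD800000000000000  # Galois taps for x^64 + x^63 + x^61 + x^60 + 1

def lfsr_step(state):
    """One step of a 64-bit Galois LFSR."""
    lsb = state & 1
    state >>= 1
    return state ^ POLY if lsb else state

def bist_signature(unit, seed=0xACE1, rounds=1000):
    """Feed pseudo-random operand pairs to `unit`, fold results into a signature."""
    state, sig = seed, 0
    for _ in range(rounds):
        a, state = state, lfsr_step(state)
        b, state = state, lfsr_step(state)
        sig = (lfsr_step(sig) ^ unit(a, b)) & MASK  # fold result into signature
    return sig

good = bist_signature(lambda a, b: (a + b) & MASK)      # healthy adder
bad = bist_signature(lambda a, b: (a + b) & MASK & ~1)  # LSB stuck at 0
print(hex(good), good != bad)  # a faulty unit virtually always mismatches
```

The hard part Yann mentions is exactly the choice of polynomial, seed and round count so that the signature exposes ~99% of the modeled faults.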

Random pattern testing will hardly give you 99% coverage in short test times, but you can always prove me wrong ;P

So, in production, we can resort to a second level of checks,
using specific code loaded from outside in the cache.
but no need to do these extensive checks at every reset, right ?
Once a chip is considered "good", it usually remains so for a long while.

Oh I even foresee chips testing themselves continuously during operation.
But I agree that this is not yet necessary, and a random test + a very short deterministic functional test (SBST) is the most cost efficient for F-CPU.

Although in the long-term I'd skip the random test as it generates too much heat!

i've read some papers about it, years ago.
Some even proposed to include specific instructions
to help "boost" the coverage.
Anyway, the POPCOUNT unit is an integral part
of the system. It is not only useful for crypto, it also
helps with signature compression: it takes 2 operands
from the data bus, XORs them together, yields a 6-bit
result, and this goes to "disturb" a "freewheel" 64-bit LFSR
(which can also serve as a weak pseudo-random generator
in practice).
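That compression step could be sketched in Python like this (the LFSR polynomial and seed are my assumptions, not F-CPU's; only the popcount-of-XOR fold comes from the description above):

```python
# Sketch of POPCOUNT-based signature compression: XOR two bus operands,
# take the popcount, and XOR it into a free-running 64-bit LFSR.
MASK = (1 << 64) - 1
POLY = 0xD800000000000000  # assumed maximal-length Galois polynomial

def lfsr_step(state):
    """One step of a 64-bit Galois LFSR."""
    lsb = state & 1
    state >>= 1
    return state ^ POLY if lsb else state

def compress(state, op_a, op_b):
    """One signature step: disturb the freewheel LFSR with popcount(a ^ b)."""
    return lfsr_step(state) ^ bin(op_a ^ op_b).count("1")

sig = 1
for a, b in [(0x1234, 0x5678), (0xDEAD, 0xBEEF)]:
    sig = compress(sig, a, b)
print(hex(sig))
```

Because the LFSR keeps running between samples, a single flipped bit anywhere on the bus changes the popcount and so perturbs all later states of the signature.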

I want to get rid of random tests, but as written above it's a valid compromise for now.

I'm even thinking about putting a simple RISC like a LEON on die as well and letting it handle the I/O and selftest. Once done, it switches to I/O pass-through mode :)

i'm working on http://f-cpu.seul.org/whygee/VSP/ which can do exactly that. But not on the same die.

nice :) Why not on the same die?

And don't tell me: "OMG. So much die space wasted!" If F-CPU is to be high end then the size of a wasted LEON is almost 0 in comparison!
it's not a matter of die space. F-CPU is not /that/ large compared
to today's cores. It's a matter of I/O.
Multicore dies are fun as long as one is not limited by memory bandwidth
and pin count. F-CPU is meant to be cheap, so this matters a lot.

Why should it be limited by pins and bandwidth? I said high performance and I mean it: 1000 pins + >10 GB/s of bandwidth.

If you can afford only 250 or 300 pins for the package,
what is better ? A large die with several cores (expensive
because exponentially more prone to defects) which compete
to access external memory ? Or a cheaper, smaller die (well,
it's just a consequence of only one core) with all the memory
bandwidth for itself alone ? If one is going multi-core,
the second solution looks better to me : cheaper, more scalable
(you can tune how many CPUs you want), and all the cores have
their 'private' memory bandwidths, which becomes scalable
(just add modules containing CPU+memory).
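The defect argument can be made concrete with a rough back-of-the-envelope calculation (my numbers, using the classic Poisson yield model Y = exp(-A*D), which is only an approximation):

```python
# Why one big multi-core die is more defect-prone than several small dies,
# under a Poisson yield model.  All numbers are assumed for illustration.
from math import exp

D = 0.5      # defect density, defects per cm^2 (assumed)
small = 1.0  # single-core die area in cm^2 (assumed)
big = 4.0    # hypothetical 4-core die area in cm^2

yield_small = exp(-small * D)  # fraction of good single-core dies
yield_big = exp(-big * D)      # fraction of good 4-core dies
print(f"{yield_small:.0%} vs {yield_big:.0%}")  # prints "61% vs 14%"
```

So quadrupling the die area doesn't quadruple the loss, it compounds it, which is why the small-die CPU+memory module scales more cheaply.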

Of course we have to consider the constraints. The prototypes won't have as many pins and BW as I suggested above.

If you have spare die space, just boost the L2 :-)
(i have an idea about how to make this scalable, fast, multiport
and more importantly: fault-detecting and fault-tolerant :-P)

nice :D Do tell.

You know that the largest bottleneck in recent CPUs
is the external memory bandwidth. If you add more cores
on die, you had better be executing CPU-bound code,
but most of today's code is memory-bound.

There is some CPU-bound code as well. And latency bound code. What a variety! :P

However, nobody will come after you if you put
X FC0s and Y LEONs on the same die/FPGA
(whatever X and Y). I simply wonder how you will feed
them with instructions and data without resorting to
expensive multi-chip modules :-P

With a latency tolerant design we won't need huge caches and huge bandwidth.
Sadly, F-CPU is not latency tolerant, AFAIK.
But it is SIMPLE. And that's good. A very good test case for free HW development.

bis besser,
To unsubscribe, send an e-mail to majordomo@xxxxxxxx with
unsubscribe f-cpu       in the body. http://f-cpu.seul.org/