
Re: [f-cpu] Status quo



On 2015-03-28 08:08, Ralf Jonas wrote:
Good morning y'all,
hi !

I think any new design
needs some sort of decimal arithmetic
...

I don't agree at all, for many reasons,
but the right place to discuss this is usenet:comp.arch.

Whoaa... Why so rude?

because this would be the start of one of those countless, pointless,
overly detailed yet useless and tiring "discussions"
that made me let this mailing list fall into slumber a decade ago.
Honestly, I believe that the history of the BCD format is better
discussed on comp.arch.

If I remember correctly, the first CPUs were all able to do some BCD
arithmetic. The 6502, for example, had a decimal flag. It could be
cleared and set via CLD and SED (CLear Decimal and SEt Decimal). While
the flag was set, all subsequent adds and subtracts were done in BCD
arithmetic.

The Z80 had the DAA (Decimal Adjust Accumulator) opcode. It didn't
convert a hex number into a decimal one, but it fixed up the digits
and the carry after a binary operation: a 4A was turned into a 50, and
a B2 into a 12 with the carry flag set.

So it's definitely not a "new" feature, but a rather old one.

I agree it's old. However, please name significant CPU architectures
created in the 80s that had explicit BCD support, and explain how and
why it was used, and by whom.
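
To be concrete about what that adjust step does, here is a minimal C
sketch of a DAA-style fix-up after a binary addition (illustrative
names, half-carry handling omitted for brevity):

  #include <stdint.h>
  #include <stdio.h>

  /* Adjust the result of a binary add so it reads as packed BCD,
   * in the spirit of the Z80's DAA (H flag ignored here). */
  static uint8_t daa_after_add(uint8_t a, int *carry)
  {
      unsigned r = a;
      if ((r & 0x0F) > 9)   /* low digit ran past 9: add 6 */
          r += 0x06;
      if (r > 0x99) {       /* high digit ran past 9: add 0x60, set carry */
          r += 0x60;
          *carry = 1;
      } else {
          *carry = 0;
      }
      return (uint8_t)r;
  }

  int main(void)
  {
      int c;
      uint8_t r1 = daa_after_add(0x4A, &c);
      printf("4A -> %02X (carry=%d)\n", r1, c);   /* 50 (carry=0) */
      uint8_t r2 = daa_after_add(0xB2, &c);
      printf("B2 -> %02X (carry=%d)\n", r2, c);   /* 12 (carry=1) */
      return 0;
  }

This matches the two examples above: 4A becomes 50, and B2 becomes 12
with the carry flag set.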

On the other hand, it might come in handy to have some BCD features
while creating a library for handling arbitrarily long numbers.
No need for BCD there, plain binary works well (see the sketch after
this exchange).
Calculating pi
Not there either (look up the advanced algorithms).
large primes for RSA keys
No BCD there either, it's pure binary again.
whatever.
Please name one?
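
To back that up: a bignum library stores numbers as arrays of binary
limbs and lets the hardware's binary carry do the work. A minimal C
sketch (limb_t and bignum_add are illustrative names, not from any
actual library):

  #include <stdint.h>
  #include <stddef.h>

  typedef uint32_t limb_t;   /* little-endian array of 32-bit limbs */

  /* r must have room for n+1 limbs; returns the final carry. */
  static limb_t bignum_add(limb_t *r, const limb_t *a,
                           const limb_t *b, size_t n)
  {
      uint64_t carry = 0;
      for (size_t i = 0; i < n; i++) {
          uint64_t s = (uint64_t)a[i] + b[i] + carry;
          r[i] = (limb_t)s;
          carry = s >> 32;
      }
      r[n] = (limb_t)carry;
      return (limb_t)carry;
  }

No decimal digit appears anywhere until the number is printed, and
that conversion happens once at the edge, not in every operation.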

And the best of it: it wouldn't disturb the design of the F-CPU at
all. It's written in VHDL / Verilog first, so we could all activate
or deactivate a few lines of code via a software switch to enable or
disable a BCD unit in the F-CPU.
it would impact:
 - complexity
 - testing/verification
 - software
 - latency prediction of the logic
    (BCD has a different carry propagation than plain binary;
     see the sketch below)
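
On that last point: in packed BCD, every 4-bit digit needs a
compare-against-9 and a correction before its carry-out is known, so
the critical path is not the plain binary carry chain anymore. A C
sketch of what each digit stage has to do (bcd_add8 is an illustrative
name, a software model of the logic, not actual F-CPU VHDL):

  #include <stdint.h>

  /* Add two packed-BCD bytes digit by digit; each stage must
   * decide the decimal carry before the next stage can start. */
  static uint8_t bcd_add8(uint8_t a, uint8_t b, int *carry_out)
  {
      unsigned carry = 0, result = 0;
      for (int d = 0; d < 2; d++) {            /* two digits per byte */
          unsigned s = ((a >> (4*d)) & 0xF)
                     + ((b >> (4*d)) & 0xF) + carry;
          carry = (s > 9);                     /* decimal carry decided here */
          if (carry) s -= 10;                  /* the "+6 mod 16" correction */
          result |= (s & 0xF) << (4*d);
      }
      *carry_out = (int)carry;
      return (uint8_t)result;
  }

In hardware, that compare-and-correct sits in the carry path of every
digit, which is exactly why the latency model differs from a plain
binary adder.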

Furthermore, you named IBM as a "real" computer. Which line/family?
The ones with explicit BCD support are designed for legacy execution
of COBOL. If you want to do financial computing, that's your right...

But F-CPU, 15 years ago, was designed for different purposes and
was inspired by some Y2K-era architectures, such as DEC's Alpha.
I would like to resume development of the FC0 one day, and even
streamline it a bit, because its complexity prevented us from
completing it. I should probably rename it, so people understand that
it's not the F-CPU as it was in 1999. I made organisational mistakes
then, and I am trying to avoid them for the 2015 "reboot".

Just my 2 cents....
I don't mean to be blunt, I just want to keep the focus on things
that work... In recent years, I've created a "toy" CPU that is now
"Mandelbrot-complete", with an online IDE and VHDL, but somehow the
F-CPU "team" couldn't go beyond the definition of an instruction set.
I hope to change this :-/

Ralf
YG