
Re: [f-cpu] Re: Floating-Point?



Hans Summers wrote:
> 
> > Hans Summers wrote:
> > >
> > > > On Thu, Aug 16, 2001 at 01:49:27PM +0800, Glenn Alexander wrote:
> > > > > Hi. This is my first post to this list. My background is as
> > > > > a hardware technician, not a chip designer so bear with me.
> > > > >
> > > > > On Thursday 16 August 2001 03:41, you wrote:
> > > > > > Michael Riepe wrote:
> > > > > >
> > > > > > On SPARC, there is no 80-bit float but a 128-bit one (two
> > > > > > registers used at the same time). We don't need single-cycle
> > > > > > multiplication, so it could be done for the F-CPU.
> > > > >
> > > > > I am thinking that taking the two-register approach might be
> > > > > over-complicating matters. Since F-CPU is intended to later
> > > > > be scaled above 64 bits, if someone wanted 128-bit floats
> > > > > in the future they would implement a 128-bit F-CPU.
> > > > > Especially for the FC-0 and probably for the FC-1, KISS
> > > > > (Keep It Simple for us Stupid people).
> > > >
> > > > 128-bit `quadruple precision' (like SPARC) is IMHO the
> > > > way to go, but not in the FC0.  For now, let's stick to
> > > > 32-bit and 64-bit (with 80-bit `double extended' used
> > > > inside the FP unit, to maintain IEEE compliance).
> > > >
> > > Could someone explain to me why 128-bit FP is desirable? I
> > > am struggling
> >
> > It's very simple: almost all scientific calculation! This includes
> > electrical simulation (SPICE), aerodynamics, and so on... For many
> > mathematicians, 32 bits is nonsense! As a joke, they said they didn't
> > want to take a plane any more! Don't forget that every compiler
> > defines float as double (64-bit) by default. With 32 bits there are
> > too many rounding problems; the only killer application for them is
> > image processing. That's why some of them want 256-bit FP numbers.
> 
> Sorry, I still can't see it. Which electrical or aerodynamic simulations
> require 30 decimal places of accuracy?
> 
> In my scientific days I never came across a calculation needing anything
> like 128 bits of precision. "Almost all scientific calculation" seems to me
> like a huge exaggeration. The limits of measurement are often of the order
> of a percent, so why would you want to calculate to 30 decimal places when
> you can only measure to one or two? As for electrical simulation, when
> resistors usually have a 1% tolerance, and capacitors 5 or 10%, where are
> the 30 decimal places needed?
> 
> Fine, for the sake of comfort, use doubles (64-bit). And I am sure that some
> areas genuinely need more precision. But I still believe the number of
> applications requiring this much precision to be very small. For a
> general-purpose processor, implementing 128-bit FP is a waste of resources.
> 

1) IEEE says to use only 64-bit, never 32-bit.
2) I read an article about FP numbers in a French science magazine ("La
Recherche"). It gave the example of a sequence ( sum(S(N)) ): its limit
is 6 when calculated by hand, but every computer calculates 100 because
of rounding problems (a sketch of such a recurrence follows my closing
remark below).
3) Have you heard about chaos? It means that digits far down in a number
end up affecting the leading value. Imagine (a-b)*c: what happens if a
and b are really close to each other, and c is a large number? (See the
small C sketch right after this list.)
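
To make point 3 concrete, here is a small C sketch (my own illustration,
with made-up values for a, b and c). The spacing between adjacent floats
near 1.0 is about 1.2e-7, so a true difference of 1e-7 cannot be
represented in single precision and the product ends up off by almost 20%:

#include <stdio.h>

int main(void)
{
    /* a and b agree in their first seven significant digits; the float
     * spacing near 1.0 is about 1.2e-7, so the true difference of 1e-7
     * cannot be represented in single precision. */
    double ad = 1.0000001;
    double bd = 1.0000000;
    double cd = 1.0e8;      /* a large multiplier amplifies the error */

    float af = (float)ad;   /* the same values rounded to 32-bit floats */
    float bf = (float)bd;
    float cf = (float)cd;

    printf("double: (a-b)*c = %.10g\n", (ad - bd) * cd);   /* about 10.0 */
    printf("float : (a-b)*c = %.10g\n", (af - bf) * cf);   /* about 11.9 */
    return 0;
}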

Most of the time, the calculations are made by recurrence, so rounding is
a very hard problem.
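
To illustrate both point 2 and the recurrence remark, here is a sketch I
am adding (it is not taken from the "La Recherche" article; I am assuming
the article described something like this well-known recurrence, often
attributed to Jean-Michel Muller). Its exact limit is 6, yet any
fixed-precision computation drifts towards 100, because rounding errors
excite the 100^n component of the general solution:

#include <stdio.h>

int main(void)
{
    /* u(0) = 2, u(1) = -4,
     * u(n+1) = 111 - 1130/u(n) + 3000/(u(n)*u(n-1))
     * Exact limit: 6.  Computed limit in float or double: 100. */
    double u_prev = 2.0;   /* u(0) */
    double u_cur  = -4.0;  /* u(1) */

    for (int n = 1; n < 30; n++) {
        double u_next = 111.0 - 1130.0 / u_cur + 3000.0 / (u_cur * u_prev);
        u_prev = u_cur;
        u_cur  = u_next;
        printf("u(%2d) = %.15g\n", n + 1, u_cur);
    }
    /* With 64-bit doubles the printed values approach 100, not 6. */
    return 0;
}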

nicO

*************************************************************
To unsubscribe, send an e-mail to majordomo@seul.org with
unsubscribe f-cpu       in the body. http://f-cpu.seul.org/