The first time I saw this problem, I wondered what would happen if I used fractions instead of decimals until the final operation is executed (if a number is not exact, convert it to a fraction first, e.g. 3.14 to 314/100).
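Python's fractions.Fraction makes this conversion easy to demonstrate (used here purely as a stand-in for the Scheme below): parsing the decimal string "3.14" yields the exact rational 314/100 = 157/50, whereas building a Fraction from the float 3.14 reproduces the binary rounding error already baked into the float.

```python
from fractions import Fraction

# Parsing the decimal string gives the exact rational 314/100, reduced to 157/50.
exact = Fraction("3.14")
print(exact)  # 157/50

# A Fraction built from the float 3.14 instead captures the nearest binary
# double, which is not exactly 314/100.
from_float = Fraction(3.14)
print(from_float == exact)  # False
```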

To examine this, let’s define a function that calculates some complex value.

(define (g n)
  (define (g-iter result i)
    (if (= i n)
        result
        (g-iter (+ (/ (/ (/ 312423 125531)
                         (/ 2052712 7231400))
                      (/ 13222 113131))
                   result)
                (+ i 1))))
  (g-iter 0 0))

The result of the operation is returned as a fraction:

(19358134129159 * 750189 + 5180721713949) / 19358134129159

Every term in this expression is an exactly determined value.

To get a decimal number from this fraction, we just need to multiply it by 1.0.

(* 1.0 (g 10000)) ; 750189.2676250552
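The same experiment can be sketched in Python (a stand-in for the Scheme above, with fractions.Fraction supplying the exact rationals): accumulate the term 10000 times exactly, and convert to a float only at the very end.

```python
from fractions import Fraction

def g(n):
    # The per-iteration term, built entirely from exact rationals.
    term = (Fraction(312423, 125531)
            / Fraction(2052712, 7231400)
            / Fraction(13222, 113131))
    result = Fraction(0)
    for _ in range(n):
        result += term
    return result

exact = g(10000)
print(float(exact))  # ~750189.2676
```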

If we use a decimal number in the calculation, a different result shows up:

(define (g n)
  (define (g-iter result i)
    (if (= i n)
        result
        (g-iter (+ (/ (/ (/ 312423.0 125531)
                         (/ 2052712 7231400))
                      (/ 13222 113131))
                   result)
                (+ i 1))))
  (g-iter 0 0))

(g 10000) ; 750189.2676250958
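A float-based counterpart in Python (again just a sketch mirroring the Scheme) accumulates the same term as a double, so the rounding error of 10000 additions shows up in the last digits:

```python
def g_float(n):
    # Every division here is floating-point; the term is already rounded
    # to the nearest double before the loop starts.
    term = ((312423.0 / 125531) / (2052712 / 7231400)) / (13222 / 113131)
    result = 0.0
    for _ in range(n):
        result += term
    return result

approx = g_float(10000)
print(approx)  # close to 750189.2676, but not identical to the exact sum
```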

So which number is closer to the actual value?

Let’s calculate ((((312423 / 125531) / (2052712 / 7231400)) / (13222 / 113131)) * 10000) in WolframAlpha.

The result:

750189.2676250551516388667249372079289168657014682584005287499...

It seems that the former one, the fraction version, is more accurate.
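We can double-check WolframAlpha’s digits with exact integer arithmetic (a Python sketch): scale the exact fraction by a power of ten and take the integer quotient, which reads off decimal digits without any floating-point rounding.

```python
from fractions import Fraction

term = (Fraction(312423, 125531)
        / Fraction(2052712, 7231400)
        / Fraction(13222, 113131))
exact = 10000 * term

# Integer division after scaling by 10^10 yields the first ten decimal
# places of the expansion exactly.
digits = exact.numerator * 10**10 // exact.denominator
print(digits)  # 7501892676250551, i.e. 750189.2676250551...
```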

However, the fraction version loses speed as the number of iterations grows: every intermediate result is kept as an exact rational, so each addition involves bignum arithmetic and GCD reduction rather than a single hardware floating-point operation. If we run (g 10000000), the response of the fraction version is painfully slow, while the decimal version outputs the result within 0.5 s.
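The slowdown is easy to reproduce with a small Python sketch (the iteration count here is a hypothetical 100,000, kept modest so it finishes quickly; absolute timings will vary by machine):

```python
import time
from fractions import Fraction

N = 100_000  # hypothetical iteration count, small enough to finish quickly

frac_term = (Fraction(312423, 125531)
             / Fraction(2052712, 7231400)
             / Fraction(13222, 113131))
float_term = ((312423.0 / 125531) / (2052712 / 7231400)) / (13222 / 113131)

def accumulate(term, n, zero):
    # Same accumulation loop for both representations.
    result = zero
    for _ in range(n):
        result += term
    return result

t0 = time.perf_counter()
accumulate(frac_term, N, Fraction(0))
frac_elapsed = time.perf_counter() - t0

t0 = time.perf_counter()
accumulate(float_term, N, 0.0)
float_elapsed = time.perf_counter() - t0

print(frac_elapsed > float_elapsed)  # exact rationals pay per-step overhead
```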