
I'm attempting to simulate a certain physical system. To propagate solutions I need to multiply matrices with determinant = 1, each of which describes one part of the system. In the code below, T(variables) is a 2×2 matrix with det(T) = 1; i just indicates the region number and the rest is irrelevant.

When I run this code for systems with more than 30 regions, the final Msys no longer has a determinant of 1. I checked the value of the determinant of Msys throughout the calculation: it is 1 for the first few iterations but then starts diverging. I tried passing dtype=float64 when creating the array T to see whether this would improve the precision and stop the breakdown, but I saw no improvement.

Is there any way I could write the code to avoid the error accumulating, or a way to increase the number of decimal places NumPy stores, so as to make the error negligible for systems with 100+ regions?

Msys = T(L[0], 0, k)
for i in range(1, n):
    Msys = numpy.dot(T(L[i] - L[i-1], i, k), Msys)
return Msys
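One way to stop the drift from compounding is to renormalize the running product by the square root of its determinant after every multiplication, so det(Msys) is pushed back to 1 at each step. A minimal sketch, where rot and boost are hypothetical det = 1 stand-ins for the actual T:

```python
import numpy as np

def rot(t):
    # Hypothetical stand-in for one det = 1 factor of T(...): a rotation.
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def boost(x):
    # Second hypothetical det = 1 factor: a hyperbolic "boost".
    return np.array([[np.cosh(x), np.sinh(x)],
                     [np.sinh(x), np.cosh(x)]])

Msys = np.eye(2)
for i in range(200):
    Msys = rot(0.3) @ boost(0.1) @ Msys
    # Renormalize: rescale so det(Msys) is pulled back to exactly 1.
    Msys /= np.sqrt(abs(np.linalg.det(Msys)))

print(abs(np.linalg.det(Msys) - 1.0))  # stays at roundoff level
```

This does not make the individual entries of Msys more accurate, but it does enforce the det = 1 constraint exactly at every step, which is often what matters for stability.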
Zykx
    If it's an option, [sympy](http://stackoverflow.com/questions/6876377/numpy-arbitrary-precision-linear-algebra) has arbitrary precision – jterrace Jan 25 '12 at 23:14
  • Thanks! Ended up using mpmath which is part of sympy. – Zykx Feb 13 '12 at 16:52

1 Answer


All floating-point operations have limited precision, and errors do accumulate. You need to decide how much precision is "good enough" or how much accumulated error is "negligible". If float64 is not precise enough for you, try float128 (where available; on most x86 builds it is 80-bit extended precision rather than true quadruple precision). You can find out the precision of the float types like this:

In [83]: np.finfo(np.float32).eps
Out[83]: 1.1920929e-07

In [84]: np.finfo(np.float64).eps
Out[84]: 2.2204460492503131e-16

In [85]: np.finfo(np.float128).eps
Out[85]: 1.084202172485504434e-19

There is a lot more information about floating-point arithmetic in What Every Computer Scientist Should Know About Floating-Point Arithmetic.
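As a sketch of the float128 route, again with a hypothetical det = 1 boost standing in for the question's T. Two caveats baked into the code: np.longdouble's actual width is platform-dependent, and np.linalg.det only accepts float32/float64, so the 2×2 determinant is expanded by hand to stay in extended precision:

```python
import numpy as np

# np.longdouble is 80-bit extended precision on most x86 Linux builds
# (shown as float128); on some platforms it is no wider than float64.
ld = np.longdouble

def boost(x):
    # Hypothetical det = 1 stand-in for the question's T(...).
    x = ld(x)
    return np.array([[np.cosh(x), np.sinh(x)],
                     [np.sinh(x), np.cosh(x)]], dtype=ld)

Msys = np.eye(2, dtype=ld)
for i in range(100):
    Msys = np.dot(boost(0.05), Msys)

# np.linalg.det does not support longdouble, so expand the 2x2
# determinant by hand to avoid a silent downcast to float64.
det = Msys[0, 0] * Msys[1, 1] - Msys[0, 1] * Msys[1, 0]
print(det)
```

Note that most of np.linalg (inverse, eigenvalues, etc.) has the same float64-only restriction, so the extended-precision approach only helps for operations you can write out explicitly, as with the determinant above.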

Bi Rico
  • There must be TONS of floating-point questions here on SO. I remember some examples where you could prove that two equations were equal, only in one of them you got errors because it divided by a very, very small number (as covered in most textbooks on scientific computing). – r4. Jan 25 '12 at 22:29
  • Also remember machine epsilon (the smallest relative spacing between numbers the computer can represent). – r4. Jan 25 '12 at 22:31
  • Those still weren't accurate enough, but I found another module called mpmath that lets you choose the precision you want. Thanks for your help! – Zykx Feb 06 '12 at 20:43
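The mpmath route mentioned in the comments can be sketched like this, where T_stub is a hypothetical det = 1 stand-in for the question's T and mp.dps sets the working precision in decimal digits:

```python
from mpmath import mp, mpf, matrix, cosh, sinh, det

mp.dps = 50  # work with 50 significant decimal digits

def T_stub(x):
    # Hypothetical det = 1 stand-in for the question's T(...).
    return matrix([[cosh(x), sinh(x)],
                   [sinh(x), cosh(x)]])

Msys = matrix([[1, 0], [0, 1]])
for i in range(100):
    Msys = T_stub(mpf('0.05')) * Msys

print(det(Msys))  # 1 to far more digits than float64 allows
```

mpmath is much slower than NumPy since every entry is an arbitrary-precision object, but for 2×2 products over a few hundred regions that is usually acceptable.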