
I am working with the np.longdouble dtype in my Python code, using NumPy to manipulate long doubles that I get from a C module compiled with Cython.

Suppose I do this:

import numpy as np

print np.finfo(np.longdouble)
Machine parameters for float128
---------------------------------------------------------------------
precision= 18   resolution= 1e-18
machep=   -63   eps=        1.08420217249e-19
negep =   -64   epsneg=     5.42101086243e-20
minexp=-16382   tiny=       3.36210314311e-4932
maxexp= 16384   max=        1.18973149536e+4932
nexp  =    15   min=        -max
---------------------------------------------------------------------


a = np.longdouble(1e+346)

a
Out[4]: inf

b = np.longdouble(1e+347)

b
Out[6]: inf

c = a/b
/usr/lib/python2.7/site-packages/spyderlib/widgets/externalshell/start_ipython_kernel.py:1:
RuntimeWarning: invalid value encountered in longdouble_scalars
  # -*- coding: utf-8 -*-

c
Out[8]: nan

a.dtype, b.dtype, c.dtype
Out[9]: (dtype('float128'), dtype('float128'), dtype('float128'))

In essence, this is the same issue as in this question, and I understand that Python first converts 1e+346 into a (double-precision) float, whose representation is inf. However, can someone suggest a workaround? Is there a way to create NumPy longdoubles that are not converted to floats first?

I have a C module that can output long doubles, which I want to use in a numpy array of dtype np.longdouble.

Even if the solution involves re-compiling Python/NumPy, I am willing to try it.

Abhinav

1 Answer


There are a few things you might want to take into account.

First, the naming is a mess: NumPy has both longdouble and float128, but the names are misleading. The underlying implementation is the C long double, which is usually (but not necessarily always) an 80-bit extended-precision float. (You can actually see this in the "precision" field above: 18 decimal digits correspond to roughly 60 bits, and the 80-bit format has a 64-bit mantissa. If real 128-bit floats were used, the precision would be around 34 digits.)
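You can check what your NumPy build actually uses; a small sketch (the numbers below depend on your platform):

```python
import numpy as np

# On x86 Linux, longdouble is usually the 80-bit extended type, stored
# in 12 or 16 bytes, so the name "float128" overstates the precision.
info = np.finfo(np.longdouble)
print(np.dtype(np.longdouble).itemsize)  # storage size in bytes
print(info.nmant + 1)                    # mantissa bits: 64 for 80-bit extended,
                                         # 53 for plain double, 113 for true quad
```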

There may not be any direct way to pass long doubles as arguments to a C function, but if you pass pointers instead, you can avoid the problem. For example, you may pass your array data as uint8 (by using myarray.view(dtype='uint8')) and then cast the pointer to the buffer to long double * in your C program. Then Python has nothing to do with the type conversions. (Most probably you do not even need the view, because you are, after all, only exporting a pointer to the array buffer.)
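On the Python side, the idea looks roughly like this (some_c_function is hypothetical; the C side would cast the received address to long double *):

```python
import numpy as np
import ctypes

arr = np.zeros(4, dtype=np.longdouble)

# A byte-level view of the same buffer; nothing is copied or converted:
raw = arr.view(np.uint8)

# The address of the buffer can be handed to C code, which casts it
# to (long double *). some_c_function is a hypothetical example.
addr = arr.ctypes.data_as(ctypes.c_void_p)
# some_c_function(addr, arr.size)
```

Note that the view and the original array share the same memory, so either one can be used to export the pointer.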

Just beware that this trick relies on the compiler using the same type settings when compiling Python and your C program. In addition to precision differences, there may be byte-order differences (rare if the programs run on the same machine) and alignment differences. My Python seems to align longdouble items on 16-byte boundaries (i.e. there are always 6 bytes of zeros in each element), but the C compiler may use 10/12/16-byte alignment.
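One way to see the storage layout, which you would compare against sizeof(long double) on the C side (a sketch; the hex output and itemsize are platform-dependent):

```python
import numpy as np

# An 80-bit value occupies 10 significant bytes, but NumPy may pad each
# element to 12 or 16 bytes with zeros. Compare this itemsize to what
# sizeof(long double) reports in your C compiler.
itemsize = np.dtype(np.longdouble).itemsize
x = np.array([1.0], dtype=np.longdouble)
print(itemsize, x.tobytes().hex())
```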

As far as I know, the details are implementation-specific. So this is doable, but it requires some extra care, and there may be portability issues.
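As for the overflow in the question itself, one workaround worth trying is passing a string to np.longdouble, which lets NumPy parse the value directly at long double precision instead of going through a Python float first (a sketch; whether the result is finite depends on the platform's long double):

```python
import numpy as np

# Going through a Python float overflows in double precision first:
a = np.longdouble(1e346)        # inf

# A string is parsed at long double precision; on platforms where
# longdouble is the 80-bit type, this stays finite:
b = np.longdouble("1e346")
print(a, b)
```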

DrV
  • Ok, thanks for the answer! I actually don't mind the inputs to the C function being normal doubles. However, I do want the output to be treated as a long double. I guess I will think of other ways of doing it, by taking logarithms or so (it is not so much the precision I want as the range) – Abhinav Aug 25 '14 at 08:23
  • With a bit of luck, you can create the array easily just by using the C long double type. However, if you bump into alignment problems, then it becomes stickier. – DrV Aug 25 '14 at 10:46