I want to factorize very large numbers (e.g. 100-bit or 200-bit numbers) with Cython.
Hi everyone, I implemented the Elliptic Curve Method (ECM) for factorization in Python 3.6. Now I want to speed up my code using Cython (version 0.29.13). I'm a beginner in the Cython world, but I know that Cython performs better if I declare types for variables. So I converted all my Python classes into Cython classes, and now I would like to add type declarations to the variables. I had read that Cython automatically converts a "cdef int" declaration into a classical Python integer with arbitrary precision, but this is not the case.
When I try to factorize numbers like "5192296858543544183479685583896053", I get an OverflowError because "int is too big to convert into C long".
Is there any way to declare huge integers so that I can speed up my code? The only variables left without a type declaration are the ones that can hold very large integers.
PS: I have already tried the cpython type unsigned PY_LONG_LONG (i.e. unsigned long long), but it was useless because I always got the same error.
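For context, here is a quick plain-Python check of why even unsigned long long cannot help: that C type tops out at 64 bits, while the number from above needs far more.

```python
# Plain-Python check: even the widest standard C integer type
# (unsigned long long, 64 bits) cannot hold the numbers involved.
n = 5192296858543544183479685583896053  # the number from the question

bits = n.bit_length()
print(bits)                 # bits needed to represent n
print(n > 2**64 - 1)        # True: n exceeds the unsigned long long maximum
```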
[UPDATE]
If I declare something like this:
cdef int function():
    cdef int a
    a = 2**100
    return a
I get an OverflowError because 2**100 is too large to fit in a C int.
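To illustrate the size mismatch in plain Python: a C int is typically 32 bits (and even a C long long only 64), while 2**100 needs more than that.

```python
# 2**100 needs 101 bits; a C int (typically 32 bits) or even a
# C long long (64 bits) cannot store it, hence the OverflowError
# when Cython assigns the Python value to the typed C variable.
a = 2**100
print(a.bit_length())       # bits required to store a
print(a.bit_length() > 64)  # True: too big even for long long
```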
If I import the long type from cpython, I get the same error:
from cpython import long as Long

cdef Long function():
    cdef Long a
    a = 2**100
    return a
If I import the int type from cpython, I get no error, but also no speedup:
from cpython import int as Integer

cdef Integer function():
    cdef Integer a
    a = 2**100
    return a
If I analyse the C++ code that Cython generates, I notice that the variable a has been declared as a pointer to a PyObject. This is exactly the same translation I get if I don't declare the variable at all, so maybe in this context there is no difference. I also cannot speed up the for loops I use, because I have something like this:
for x in range(p):
    .....
But p is a huge integer, and since Cython declares p and x as pointers to PyObject, it cannot translate this loop into a fast C loop.
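One workaround I have been considering (a hypothetical sketch, not benchmarked against my ECM code): split the huge bound p into chunks that fit in 64 bits, so that the inner loop counter could be a typed C variable in Cython while only the chunk base stays a Python integer. In plain Python the idea looks like this:

```python
# Hypothetical sketch: iterate over range(p) for a huge p by splitting
# it into machine-word-sized chunks. The inner counter `offset` always
# fits in a signed 64-bit integer, so in Cython it could be declared as
# a C type and the inner loop compiled to a fast C loop; only `base`
# remains an arbitrary-precision Python integer.
CHUNK = 2**62  # fits comfortably in a signed 64-bit C integer

def chunked_range(p):
    """Yield every x in range(p) using only machine-sized inner counters."""
    base = 0
    while base < p:
        span = min(CHUNK, p - base)   # span always fits in 64 bits
        for offset in range(span):    # in Cython this could be a C loop
            yield base + offset
        base += span
```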