If you wanna to be as fast as C you should not use list with python-integers inside but an array.array
. It is possible to get a speed-up of around 140 for your python+list code by using cython+array.array
.
Here are some ideas how to make your code faster with cython. As benchmark I choose a list with 1000 elements (big enough and cache-misses have no effects yet):
import random
l=[random.randint(0,15) for _ in range(1000)]
As baseline, your python-implementation with list:
def packx(it):
n = len(it)//2
r = [0]*n
for i in range(n):
r[i] = (it[i*2]%16)<<4 | it[i*2+1]%16
return r
%timeit packx(l)
143 µs ± 1.95 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
By the way, I use %
instead of //
, which is probably what you want, otherwise you would get only 0
s as result (only lower bits have data in your description).
After cythonizing the same function (with %%cython
-magic) we get a speed-up of around 2:
%timeit packx(l)
77.6 µs ± 1.28 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Let's look at the html produced by option -a
, we see the following for the line corresponding to the for
-loop:
.....
__pyx_t_2 = PyNumber_Multiply(__pyx_v_i, __pyx_int_2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_5 = PyObject_GetItem(__pyx_v_it, __pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_5);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_t_2 = __Pyx_PyInt_RemainderObjC(__pyx_t_5, __pyx_int_16, 16, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 6, __pyx_L1_error)
...
Py_NumberMultiply
means that we use slow python-multiplication, Pyx_DECREF
- all temporaries are slow python-objects. We need to change that!
Let's pass not a list but an array.array
of bytes to our function and return an array.array
of bytes back. Lists have full fledged python objects inside, array.array
the lowly raw c-data which is faster:
%%cython
from cpython cimport array
def cy_apackx(char[::1] it):
cdef unsigned int n = len(it)//2
cdef unsigned int i
cdef array.array res = array.array('b', [])
array.resize(res, n)
for i in range(n):
res.data.as_chars[i] = (it[i*2]%16)<<4 | it[i*2+1]%16
return res
import array
a=array.array('B', l)
%timeit cy_apackx(a)
19.2 µs ± 316 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Better, but let take a look at the generated html, there is still some slow python-code:
__pyx_t_2 = __Pyx_PyInt_From_long(((__Pyx_mod_long((*((char *) ( /* dim=0 */ ((char *) (((char *) __pyx_v_it.data) + __pyx_t_7)) ))), 16) << 4) | __Pyx_mod_long((*((char *) ( /* dim=0 */ ((char *) (((char *) __pyx_v_it.data) + __pyx_t_8)) ))), 16))); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 9, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
if (unlikely(__Pyx_SetItemInt(((PyObject *)__pyx_v_res), __pyx_v_i, __pyx_t_2, unsigned int, 0, __Pyx_PyInt_From_unsigned_int, 0, 0, 1) < 0)) __PYX_ERR(0, 9, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
We still use a python-setter for array (__Pax_SetItemInt
) and for this a python objecct __pyx_t_2
is needed, to avoid this we use array.data.as_chars
:
%%cython
from cpython cimport array
def cy_apackx(char[::1] it):
cdef unsigned int n = len(it)//2
cdef unsigned int i
cdef array.array res = array.array('B', [])
array.resize(res, n)
for i in range(n):
res.data.as_chars[i] = (it[i*2]%16)<<4 | it[i*2+1]%16 ##HERE!
return res
%timeit cy_apackx(a)
1.86 µs ± 30.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Much better, but let's take a look at html again, and we see some calls to __Pyx_RaiseBufferIndexError
- this safety costs some time, so let's switch it off:
%%cython
from cpython cimport array
import cython
@cython.boundscheck(False) # switch of safety-checks
@cython.wraparound(False) # switch of safety-checks
def cy_apackx(char[::1] it):
cdef unsigned int n = len(it)//2
cdef unsigned int i
cdef array.array res = array.array('B', [])
array.resize(res, n)
for i in range(n):
res.data.as_chars[i] = (it[i*2]%16)<<4 | it[i*2+1]%16 ##HERE!
return res
%timeit cy_apackx(a)
1.53 µs ± 11.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
When we look at the generated html, we see:
__pyx_t_7 = (__pyx_v_i * 2);
__pyx_t_8 = ((__pyx_v_i * 2) + 1);
(__pyx_v_res->data.as_chars[__pyx_v_i]) = ((__Pyx_mod_long((*((char *) ( /* dim=0 */ ((char *) (((char *) __pyx_v_it.data) + __pyx_t_7)) ))), 16) << 4) | __Pyx_mod_long((*((char *) ( /* dim=0 */ ((char *) (((char *) __pyx_v_it.data) + __pyx_t_8)) ))), 16));
No python-stuff! Good so far. However, I'm not sure about __Pyx_mod_long
, its definition is:
static CYTHON_INLINE long __Pyx_mod_long(long a, long b) {
long r = a % b;
r += ((r != 0) & ((r ^ b) < 0)) * b;
return r;
}
So C and Python have differences for mod
of negative numbers and it must be taken into account. This function-definition, albeit inlined, will prevent the C-compiler from optimizing a%16
as a&15
. We have only positive numbers, so no need to care about them, thus we need to do the a&15
-trick by ourselves:
%%cython
from cpython cimport array
import cython
@cython.boundscheck(False)
@cython.wraparound(False)
def cy_apackx(char[::1] it):
cdef unsigned int n = len(it)//2
cdef unsigned int i
cdef array.array res = array.array('B', [])
array.resize(res, n)
for i in range(n):
res.data.as_chars[i] = (it[i*2]&15)<<4 | (it[i*2+1]&15)
return res
%timeit cy_apackx(a)
1.02 µs ± 8.63 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
I'm also satified with the resulting C-code/html (only one line):
(__pyx_v_res->data.as_chars[__pyx_v_i]) = ((((*((char *) ( /* dim=0 */ ((char *) (((char *) __pyx_v_it.data) + __pyx_t_7)) ))) & 15) << 4) | ((*((char *) ( /* dim=0 */ ((char *) (((char *) __pyx_v_it.data) + __pyx_t_8)) ))) & 15));
Conclusion: In the sum that means speed up of 140 (140 µs vs 1.02 µs)- not bad! Another interesting point: the calculation itself takes about 2 µs (and that comprises less than optimal bound checking and division) - 138 µs are for creating, registering and deleting temporary python objects.
If you need the upper bits and can assume that lower bits are without dirt (otherwise &250
can help), you can use:
from cpython cimport array
import cython
@cython.boundscheck(False)
@cython.wraparound(False)
def cy_apackx(char[::1] it):
cdef unsigned int n = len(it)//2
cdef unsigned int i
cdef array.array res = array.array('B', [])
array.resize(res, n)
for i in range(n):
res.data.as_chars[i] = it[i*2] | (it[i*2+1]>>4)
return res
%timeit cy_apackx(a)
819 ns ± 8.24 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Another interesting question is which costs have the operations if list is used. If we start with the "improved" version:
%%cython
def cy_packx(it):
cdef unsigned int n = len(it)//2
cdef unsigned int i
res=[0]*n
for i in range(n):
res[i] = it[i*2] | (it[i*2+1]>>4))
return res
%timeit cy_packx(l)
20.7 µs ± 450 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
we see, that reducing the number of integer operation leads to a big speed-up. That is due to the fact, that python-integers are immutable and every operation creates a new temporary object, which is costly. Eliminating operations means also eliminating costly temporaries.
However, it[i*2] | (it[i*2+1]>>4)
is done with python-integer, as next step we make it cdef
-operations:
%%cython
def cy_packx(it):
cdef unsigned int n = len(it)//2
cdef unsigned int i
cdef unsigned char a,b
res=[0]*n
for i in range(n):
a=it[i*2]
b=it[i*2+1] # ensures next operations are fast
res[i]= a | (b>>4)
return res
%timeit cy_packx(l)
7.3 µs ± 880 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
I don't know how it can be improved further, thus we have 7.3 µs for list vs. 1 µs for array.array
.
Last question, what is the costs break down of the list version? I order to avoid being optimized away by the C-compiler, we use a slightly different baseline function:
%%cython
def cy_packx(it):
cdef unsigned int n = len(it)//2
cdef unsigned int i
cdef unsigned char a,b
cdef unsigned char s = 0
res=[0]*n
for i in range(n):
a=it[i*2]
b=it[i*2+1] # ensures next operations are fast
s+=a | (b>>4)
res[i]= s
return res
%timeit cy_packx(l)
In [79]: %timeit cy_packx(l)
7.67 µs ± 106 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
The usage of the s
variable means, it does not get optimized away in the second version:
%%cython
def cy_packx(it):
cdef unsigned int n = len(it)//2
cdef unsigned int i
cdef unsigned char a,b
cdef unsigned char s = 0
res=[0]*n
for i in range(n):
a=it[i*2]
b=it[i*2+1] # ensures next operations are fast
s+=a | (b>>4)
res[0]=s
return res
In [81]: %timeit cy_packx(l)
5.46 µs ± 72.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
About 2 µs or about 30% are the costs for creating new integer objects. What are the costs of the memory allocation?
%%cython
def cy_packx(it):
cdef unsigned int n = len(it)//2
cdef unsigned int i
cdef unsigned char a,b
cdef unsigned char s = 0
for i in range(n):
a=it[i*2]
b=it[i*2+1] # ensures next operations are fast
s+=a | (b>>4)
return s
In [84]: %timeit cy_packx(l)
3.84 µs ± 43.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
That leads to the following performance break down of the list-version:
Time(in µs) Percentage(in %)
all 7.7 100
calculation 1 12
alloc memory 1.6 21
create ints 2.2 29
access data/cast 2.6 38
I must confess, I expected create ints
to play a bigger role and didn't thing accessing the data in the list and casting it to char
s will cost that much.