I am perplexed by the API of `scipy.ndimage.interpolation.affine_transform`, and judging by this issue I'm not the only one. I actually want to do more interesting things with `affine_transform` than just rotating an image, but a rotation will do for starters. (And yes, I'm well aware of `scipy.ndimage.interpolation.rotate`, but figuring out how to drive `affine_transform` is what interests me here.)
When I want to do this sort of thing in systems like OpenGL, I think in terms of computing the transform which applies a 2x2 rotation matrix `R` about a centre `c`, and therefore of points `p` being transformed as `(p-c)R + c = pR + (c - cR)`, which gives a `c - cR` term to be used as the translation component of the transform. However, according to the issue above, scipy's `affine_transform` does "offset first", so we actually need to compute an offset `s` such that `(p-c)R + c = (p+s)R`, which with a bit of rearrangement gives `s = (c - cR)R'`, where `R'` is the inverse of `R`.
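That rearrangement can be sanity-checked numerically before bringing images into it; here is a minimal sketch (the centre and sample point values are arbitrary, chosen just for illustration):

```python
import numpy as np

a = 15.0 * np.pi / 180.0
R = np.array([[np.cos(a), np.sin(a)],
              [-np.sin(a), np.cos(a)]])
c = np.array([256.0, 256.0])   # centre of rotation
p = np.array([10.0, 20.0])     # arbitrary test point

# s = (c - cR)R', with R' the inverse of R
s = (c - c.dot(R)).dot(np.linalg.inv(R))

lhs = (p - c).dot(R) + c   # rotate p about the centre c
rhs = (p + s).dot(R)       # "offset first" form
assert np.allclose(lhs, rhs)
```

So the algebra itself checks out; the question is how it maps onto `affine_transform`'s parameters.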
If I plug this into an IPython notebook (pylab mode, so `array`, `pi`, `cos`, etc. are in scope; plus the scipy imports below):

    import scipy.misc
    import scipy.ndimage.interpolation

    img = scipy.misc.lena()
    # imshow(img, cmap=cm.gray); show()
    centre = 0.5*array(img.shape)
    a = 15.0*pi/180.0
    rot = array([[cos(a), sin(a)], [-sin(a), cos(a)]])
    offset = (centre - centre.dot(rot)).dot(linalg.inv(rot))
    rotimg = scipy.ndimage.interpolation.affine_transform(
        img, rot, order=2, offset=offset, cval=0.0, output=float32
    )
    imshow(rotimg, cmap=cm.gray); show()
I get an image which unfortunately isn't rotated about the centre.
So what's the trick I'm missing here?