If you fit an `sklearn.preprocessing.LabelEncoder` with labels of type `int`, for some reason `inverse_transform` returns labels of type `numpy.int64`:
```python
from sklearn.preprocessing import LabelEncoder

labels = [2, 4, 6]  # just a list of `int`s
e = LabelEncoder().fit(labels)
encoded = e.transform([4, 6, 2])
decoded = e.inverse_transform(encoded)
type(decoded[0])
# <class 'numpy.int64'>
```
So I have two questions:

- Why does it do that?
- How can I avoid it without writing custom conversion code?
(I ran into this problem when Flask's `jsonify` could not serialize `np.int64` to JSON.)
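For context, here is a minimal reproduction using the stdlib `json` module (which Flask's `jsonify` builds on), plus the `tolist()` conversion I'm currently using as a stopgap; I'm hoping there is a cleaner way than this:

```python
import json

import numpy as np
from sklearn.preprocessing import LabelEncoder

labels = [2, 4, 6]
e = LabelEncoder().fit(labels)
decoded = e.inverse_transform(e.transform([4, 6, 2]))

try:
    json.dumps({"label": decoded[0]})  # decoded[0] is numpy.int64
except TypeError as exc:
    print(exc)  # "Object of type int64 is not JSON serializable"

# Stopgap: convert the numpy array back to builtin Python ints
print(json.dumps({"labels": decoded.tolist()}))
```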