
I'm playing with the Reuters example dataset and it runs fine (my model is trained). I read about how to save a model, so I could load it later to use again. But how do I use this saved model to predict a new text? Do I use model.predict()?

Do I have to prepare this text in a special way?

I tried it with

import keras.preprocessing.text

text = np.array(['this is just some random, stupid text'])
print(text.shape)

tk = keras.preprocessing.text.Tokenizer(
        nb_words=2000,
        filters=keras.preprocessing.text.base_filter(),
        lower=True,
        split=" ")

tk.fit_on_texts(text)
pred = tk.texts_to_sequences(text)
print(pred)

model.predict(pred)

But I always get

(1L,)
[[2, 4, 1, 6, 5, 7, 3]]
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-83-42d744d811fb> in <module>()
      7 print(pred)
      8 
----> 9 model.predict(pred)

C:\Users\bkey\Anaconda2\lib\site-packages\keras\models.pyc in predict(self, x, batch_size, verbose)
    457         if self.model is None:
    458             self.build()
--> 459         return self.model.predict(x, batch_size=batch_size, verbose=verbose)
    460 
    461     def predict_on_batch(self, x):

C:\Users\bkey\Anaconda2\lib\site-packages\keras\engine\training.pyc in predict(self, x, batch_size, verbose)
   1132         x = standardize_input_data(x, self.input_names,
   1133                                    self.internal_input_shapes,
-> 1134                                    check_batch_dim=False)
   1135         if self.stateful:
   1136             if x[0].shape[0] > batch_size and x[0].shape[0] % batch_size != 0:

C:\Users\bkey\Anaconda2\lib\site-packages\keras\engine\training.pyc in standardize_input_data(data, names, shapes, check_batch_dim, exception_prefix)
     79     for i in range(len(names)):
     80         array = arrays[i]
---> 81         if len(array.shape) == 1:
     82             array = np.expand_dims(array, 1)
     83             arrays[i] = array

AttributeError: 'list' object has no attribute 'shape'

Do you have any recommendations as to how to make predictions with a trained model?


6 Answers


model.predict() expects the first parameter to be a NumPy array. You are supplying a list, which does not have the shape attribute a NumPy array has.

Otherwise your code looks fine, except that you are doing nothing with the prediction. Make sure you store it in a variable, for example like this:

prediction = model.predict(np.array(tk.texts_to_sequences(text)))
print(prediction)
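Depending on how the model was trained, you may also need to pad the encoded sequences to a fixed length before calling predict. This is only a sketch under that assumption; maxlen (the sequence length used at training time) is a hypothetical name, not taken from the original code:

from keras.preprocessing.sequence import pad_sequences
import numpy as np

# maxlen is assumed to be the sequence length the model was trained on.
seqs = tk.texts_to_sequences(text)           # list of word-index lists
padded = pad_sequences(seqs, maxlen=maxlen)  # 2-D NumPy array of shape (n_samples, maxlen)
prediction = model.predict(padded)
print(prediction)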
nemo
model.predict_classes(<numpy_array>)

Sample https://gist.github.com/alexcpn/0683bb940cae510cf84d5976c1652abd

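A minimal sketch of how this differs from model.predict(), assuming an already trained Sequential classifier and an encoded, padded input (the array below is just a placeholder; predict_classes is only available on Sequential models in older Keras/TensorFlow versions):

import numpy as np

x_new = np.array([[2, 4, 1, 6, 5, 7, 3]])   # hypothetical encoded input of the model's expected length

probabilities = model.predict(x_new)         # per-class probabilities
classes = model.predict_classes(x_new)       # argmax class indices
print(probabilities, classes)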
Alex Punnen

You must use the same Tokenizer you used to build your model!

Otherwise a different tokenizer will assign a different index (vector) to each word.

Then I am using:

phrase = "not good"
tokens = myTokenizer.texts_to_matrix([phrase])

model.predict(np.array(tokens))
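A common way to make sure the very same tokenizer is available at prediction time is to persist it next to the model, for example with pickle. This sketch is an addition, not part of the original answer:

import pickle

# After training: save the fitted tokenizer alongside the model weights.
with open('tokenizer.pkl', 'wb') as f:
    pickle.dump(myTokenizer, f)

# At prediction time: reload the identical tokenizer before encoding new text.
with open('tokenizer.pkl', 'rb') as f:
    myTokenizer = pickle.load(f)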
Thomas Decaux

You can just "call" your model with an array of the correct shape:

model(np.array([[6.7, 3.3, 5.7, 2.5]]))

Full example:

from sklearn.datasets import load_iris
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
import numpy as np

X, y = load_iris(return_X_y=True)

model = Sequential([
    Dense(16, activation='relu'),
    Dense(32, activation='relu'),
    Dense(1)])

model.compile(loss='mean_absolute_error', optimizer='adam')

history = model.fit(X, y, epochs=10, verbose=0)

print(model(np.array([[6.7, 3.3, 5.7, 2.5]])))
<tf.Tensor: shape=(1, 1), dtype=float64, numpy=array([[1.92517677]])>
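Note that calling the model directly returns a tf.Tensor, while model.predict() returns a plain NumPy array; both accept the same input:

# Equivalent call via predict(), which yields a NumPy array instead of a tf.Tensor.
print(model.predict(np.array([[6.7, 3.3, 5.7, 2.5]])))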
Nicolas Gervais

I trained a neural network in Keras to perform non-linear regression on some data. This is part of the code I use for testing on new data with a previously saved model configuration and weights.

import numpy as np
import joblib
from keras.models import Sequential
from sklearn.preprocessing import StandardScaler

# Recreate the model from the saved configuration and load the trained weights.
fname = r"C:\Users\tauseef\Desktop\keras\tutorials\BestWeights.hdf5"
modelConfig = joblib.load('modelConfig.pkl')
recreatedModel = Sequential.from_config(modelConfig)
recreatedModel.load_weights(fname)

# Load and standardize the unseen test data before predicting.
unseenTestData = np.genfromtxt(r"C:\Users\tauseef\Desktop\keras\arrayOf100Rows257Columns.txt", delimiter=" ")
X_test = unseenTestData
standard_scalerX = StandardScaler()
standard_scalerX.fit(X_test)
X_test_std = standard_scalerX.transform(X_test)
X_test_std = X_test_std.astype('float32')
unseenData_predictions = recreatedModel.predict(X_test_std)
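As an aside (not from the original answer), Keras can also persist the architecture and weights in a single file with model.save(), which makes reloading a one-liner. A minimal sketch, assuming the trained model object is called model:

from keras.models import load_model

model.save('my_model.h5')                  # architecture + weights + optimizer state in one file

restoredModel = load_model('my_model.h5')  # recreate the trained model later
unseenData_predictions = restoredModel.predict(X_test_std)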
tauseef_CuriousGuy

You can use your tokenizer and pad_sequences on a new piece of text, followed by model prediction. This returns the prediction as a NumPy array, from which you can also look up the label itself.

For example:

# `tokenizer`, `maxlen` and `labels` are the same objects used during training.
from tensorflow.keras.preprocessing.sequence import pad_sequences
import numpy as np

new_complaint = ['Your service is not good']
seq = tokenizer.texts_to_sequences(new_complaint)
padded = pad_sequences(seq, maxlen=maxlen)
pred = model.predict(padded)
print(pred, labels[np.argmax(pred)])
Taie