
I have an error that has been baffling me for hours. I implemented this loss function, drawn from an article. y_true = [0,0,...,1,...,0] is a one-hot vector, and y_pred is my network's output of the same size, holding the probability of belonging to each class. The idea is to add log(y_j) for every index before the 1, and log(1-y_j) for every index from the 1 onward. This penalization is useful in a problem I am trying to solve.
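Written out, with j* denoting the index of the 1 in y_true (matching the slicing in my code below, where index j* itself falls in the second sum):

```latex
\mathrm{loss}(y^{\mathrm{true}}, y^{\mathrm{pred}})
  = \sum_{j < j^{*}} \log y^{\mathrm{pred}}_{j}
  + \sum_{j \ge j^{*}} \log\!\left(1 - y^{\mathrm{pred}}_{j}\right),
\qquad
j^{*} = \arg\max_{j}\, y^{\mathrm{true}}_{j}
```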

Yet I have this annoying error that I don't understand:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-39-0a4261d1fc84> in <module>
     24 ])
     25 
---> 26 model.compile(loss=custom_loss, optimizer='adam', metrics=['mse'])
     27 
     28 model.fit(CT_scan_train[:250], dummy_y[:250], epochs=5)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
    227         #                   loss_weight_2 * output_2_loss_fn(...) +
    228         #                   layer losses.
--> 229         self.total_loss = self._prepare_total_loss(masks)
    230 
    231         # Functions for train, test and predict will

~\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\training.py in _prepare_total_loss(self, masks)
    690 
    691                     output_loss = loss_fn(
--> 692                         y_true, y_pred, sample_weight=sample_weight)
    693 
    694                 if len(self.outputs) > 1:

TypeError: __call__() missing 1 required positional argument: 'y_true'

Thanks for your help!

import keras.backend as K
from keras.models import Sequential
from keras.layers import Conv3D, MaxPooling3D, Flatten, Dense, Activation
from sklearn.metrics import make_scorer

def loss(y_true, y_pred):
    d = K.argmax(y_true)
    loss = K.sum(K.log(y_pred[:d]))+K.sum(K.log(1-y_pred[d:]))
    return loss

custom_loss = make_scorer(loss)

model = Sequential([
    Conv3D(2,(5,5,5), strides=(1, 1, 1), input_shape=(92, 92, 92, 1)),
    Activation('relu'),
    MaxPooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format=None),
    Conv3D(2, (3,3,3), strides=(1, 1, 1)),
    Activation('relu'),
    MaxPooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format=None),
    Conv3D(2, (3,3,3), strides=(1, 1, 1)),
    Activation('relu'),
    #Reshape(target_shape)
    Flatten(),
    Dense(64),
    Activation('relu'),
    Dense(32),
    Activation('relu'),
    Dense(5),
    Activation('relu'),
])

model.compile(loss=custom_loss, optimizer='adam')

model.fit(X, y, epochs=5)

For more information about make_scorer, see: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html

Chiekh
  • Please add the full traceback, error messages in isolation are not really useful. – Dr. Snoopy Jan 30 '20 at 22:44
  • Please also show the code for `make_scorer`. You should post a minimal, reproducible example. – jakub Jan 30 '20 at 23:58
  • make_scorer is a sklearn function: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html. I will add the full traceback. – Chiekh Jan 31 '20 at 08:40
  • I think you can directly provide your *loss* function to model.compile. *make_scorer* seems to be useful for GridSearchCV. – Kalpit Jan 31 '20 at 08:56

1 Answer


You don't need to use make_scorer for this. make_scorer wraps a metric for use with scikit-learn model selection tools such as GridSearchCV, and the scorer object it returns expects to be called as scorer(estimator, X, y_true). Keras instead calls your loss as loss_fn(y_true, y_pred, sample_weight=...), which is why you get `TypeError: __call__() missing 1 required positional argument: 'y_true'`. You can give your loss function directly to model.compile:

    model.compile(loss=loss, optimizer='adam')
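For reference, here is a plain-Python sketch of the computation your loss is meant to perform (the function name and the eps clipping to avoid log(0) are mine, not from the question), which you can use to sanity-check the Keras version on small examples:

```python
import math

def ordinal_log_loss(y_true, y_pred, eps=1e-7):
    """Sum log(p_j) for indices before the one-hot 1, and log(1 - p_j) from it onward."""
    d = y_true.index(1)  # position of the 1 in the one-hot target
    total = 0.0
    for j, p in enumerate(y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to keep log() finite
        total += math.log(p) if j < d else math.log(1.0 - p)
    return total
```

(One caveat: inside a Keras graph, slicing a tensor with a symbolic index, as in y_pred[:d] with d = K.argmax(y_true), may itself fail; building masks from K.cumsum(y_true, axis=-1) is one common workaround.)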

Hope it helps

codeblaze