Create a custom layer to hold the trainable parameter. This layer will not return its inputs from call, but it still has to accept them to comply with how layers are created:
import keras
import keras.backend as K
from keras.layers import Layer, Input, Lambda
from keras.models import Model

class TrainableLossLayer(Layer):

    def __init__(self, a_initializer, **kwargs):
        super(TrainableLossLayer, self).__init__(**kwargs)
        self.a_initializer = keras.initializers.get(a_initializer)

    #method where weights are defined
    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel_a',
                                      shape=(1,),
                                      initializer=self.a_initializer,
                                      trainable=True)
        self.built = True

    #method to define the layer's operation (ignore the inputs, only return the weight)
    def call(self, inputs):
        return self.kernel

    #output shape
    def compute_output_shape(self, input_shape):
        return (1,)
Use the layer in your model to get a with any inputs (this is not compatible with a Sequential model):
a = TrainableLossLayer(a_init, name="somename")(anyInput)
Now, you can try to define your loss in a sort of ugly way:
def customLoss(yTrue, yPred):
    return (K.log(yTrue) - K.log(yPred))**2 + a*yPred
If this works, then it's ready.
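Putting the pieces together, a hedged end-to-end sketch of this first approach (the input shape, the Dense body, and the 'ones' initializer are all assumptions for illustration; softplus keeps yPred positive so K.log is defined):

from keras.layers import Dense   #plus the imports from the layer definition above

inputs = Input(shape=(10,))                        #assumed input shape
outputs = Dense(1, activation='softplus')(inputs)  #assumed model body
a = TrainableLossLayer('ones', name="somename")(inputs)   #'ones' is an assumed a_init

def customLoss(yTrue, yPred):
    return (K.log(yTrue) - K.log(yPred))**2 + a*yPred

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss=customLoss)   #'adam' is an assumed choice
#after fitting, verify that kernel_a actually changes; if not, use the approach below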
You can also try a more elaborate model (if you don't want a jumping over the layers into the loss like that, which might cause problems in model saving/loading). In this case, y_train needs to go in as an input instead of an output:
y_true_inputs = Input(...)
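For example, if y_train has shape (samples, 1), a hypothetical concrete version would be:

y_true_inputs = Input(shape=(1,))   #assumed target shape; match your y_train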
Your loss function will go into a Lambda layer taking all parameters properly:
def lambdaLoss(x):
    yTrue, yPred, alpha = x
    return (K.log(yTrue) - K.log(yPred))**2 + alpha*yPred
loss = Lambda(lambdaLoss)([y_true_inputs, original_model_outputs, a])
Your model will output this loss:
model = Model([original_model_inputs, y_true_inputs], loss)
You will have a dummy loss function:
def dummyLoss(true, pred):
    return pred

model.compile(loss=dummyLoss, ...)
And train as:
model.fit([x_train, y_train], anything_maybe_None_or_np_zeros, ...)
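Since this model outputs the loss rather than predictions, for inference you would presumably build a second model over the same tensors. A sketch, assuming you kept references to original_model_inputs and original_model_outputs (the two models share the trained weights):

predict_model = Model(original_model_inputs, original_model_outputs)
predictions = predict_model.predict(x_test)   #x_test: hypothetical test data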