
I'm a novice to deep learning. I used TensorFlow to construct my TextCNN model (two categories), following this tutorial.
The model can predict the category of a text, but I want a score (a continuous value in [0, 1]) rather than a discrete value. For example, if the model gives 0.77, the text is more likely one of the categories; if it gives 1.0, the text definitely belongs to that category.
This is the relevant part of my code:

def cnn(self):
    # word embedding
    with tf.device('/cpu:0'):
        embedding = tf.get_variable('embedding', [self.config.vocab_size, self.config.embedding_dim])
        embedding_inputs = tf.nn.embedding_lookup(embedding, self.input_x)

    with tf.name_scope("cnn"):
        # CNN layer
        conv = tf.layers.conv1d(embedding_inputs, self.config.num_filters, self.config.kernel_size, name='conv')
        # global max pooling layer
        gmp = tf.reduce_max(conv, axis=[1], name='gmp')

    with tf.name_scope("score"):
        # full connected layer
        fc = tf.layers.dense(gmp, self.config.hidden_dim, name='fc1')
        fc = tf.contrib.layers.dropout(fc, self.keep_prob)
        fc = tf.nn.relu(fc)

        # classification
        self.logits = tf.layers.dense(fc, self.config.num_classes, name='fc2')
        self.y_pred_cls = tf.argmax(tf.nn.softmax(self.logits), 1)  # predicted class

    with tf.name_scope("optimize"):
        # Loss function, cross entropy
        cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=self.logits, labels=self.input_y)
        self.loss = tf.reduce_mean(cross_entropy)
        # optimizer
        self.optim = tf.train.AdamOptimizer(learning_rate=self.config.learning_rate).minimize(self.loss)

    with tf.name_scope("accuracy"):
        # accuracy
        correct_pred = tf.equal(tf.argmax(self.input_y, 1), self.y_pred_cls)
        self.acc = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

Thanks in advance.

1 Answer


Use tf.nn.softmax(self.logits) to get probabilistic scores. Also see this question: What is logits, softmax and softmax_cross_entropy_with_logits?
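
For example, you could keep a separate probability tensor next to the class prediction. A minimal sketch against the question's `score` scope (the name `y_pred_prob` is my own choice, not from the original code):

    with tf.name_scope("score"):
        # classification
        self.logits = tf.layers.dense(fc, self.config.num_classes, name='fc2')
        # per-class probabilities: each row sums to 1, so with two categories
        # each entry is a continuous score in [0, 1]
        self.y_pred_prob = tf.nn.softmax(self.logits)
        self.y_pred_cls = tf.argmax(self.y_pred_prob, 1)  # predicted class

With two classes, one row of `y_pred_prob` might look like [0.23, 0.77]: the 0.77 is exactly the kind of score you describe, while `tf.argmax` still yields the discrete class.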

dgumo
  • Thanks a lot! I added `print(tf.nn.softmax(self.logits))` after getting `self.logits`, and it printed:
    Tensor("score/fc2/BiasAdd:0", shape=(?, 2), dtype=float32).
    Could you tell me how to print the probabilistic scores rather than the `Tensor`? – Shuitian Wei Jul 15 '18 at 08:47
  • Assuming that you created a TensorFlow session object, e.g. `sess = tf.Session()`, you need to call `sess.run(tf.nn.softmax(self.logits))` – dgumo Jul 15 '18 at 16:29
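
To make that concrete, here is a minimal end-to-end sketch (assuming TF 1.x graph mode; `model` is an instance of the question's class, `x_test` is a hypothetical batch of padded word-id sequences, and in practice you would restore trained weights with `tf.train.Saver` rather than initializing them randomly):

    import tensorflow as tf

    probs_op = tf.nn.softmax(model.logits)  # build the op once, outside the session

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())  # or saver.restore(sess, checkpoint_path)
        probs = sess.run(probs_op,
                         feed_dict={model.input_x: x_test,
                                    model.keep_prob: 1.0})  # keep_prob=1.0 disables dropout at inference
        print(probs)  # a NumPy array, e.g. [[0.23 0.77]], not a symbolic Tensor

The key point is that printing the op only shows the symbolic `Tensor`; `sess.run` is what evaluates it and returns actual numbers.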