0 votes

Why do I get this error in TensorFlow?

import tensorflow as tf
import numpy as np

# Dataset
x_data = np.array([[1.,0.,2], [0.,1.,3], [1.,0.,2], [1.,1.,4]])
y_data = np.array([[3.], [0.], [1.], [2.]])

# Hyperparameters
n_input = 3
n_hidden = 10
n_output = 1
lr = 0.01
epochs = 10000
display_step = 200

# Placeholders 
X = tf.placeholder(tf.float32,[None, n_input ])
Y = tf.placeholder(tf.float32,[None, n_output])

# Weights (note: name= belongs to tf.Variable, not tf.random_uniform)
W1 = tf.Variable(tf.random_uniform([n_input, n_hidden]), name="W_layer1")
W2 = tf.Variable(tf.random_uniform([n_hidden, n_output]), name="W_layer2")

# Biases
b1 = tf.Variable(tf.random_normal([n_hidden]), name="b_layer1")
b2 = tf.Variable(tf.random_normal([n_output]), name="b_layer2")

L2 = tf.sigmoid(tf.matmul(X, W1) + b1)
hy = tf.sigmoid(tf.matmul(L2, W2) + b2)

cost = tf.reduce_mean(-Y*tf.log(hy) - (1-Y) * tf.log(1-hy))

optimizer = tf.train.GradientDescentOptimizer(lr).minimize(cost)
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    for step in range(epochs):
        _, c = sess.run([optimizer, cost], feed_dict={X: x_data, Y: y_data})

        if step % display_step == 0:
            print(step, c)

    predicted = tf.cast(hy > 0.5, dtype=tf.float32)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))

    # Accuracy report
    h, c, a = sess.run([hy, predicted, accuracy], feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect (Y): ", c, "\nAccuracy: ", a)

When I run the code I get this error:

0 -2.18759
1000 nan
2000 nan
3000 nan
4000 nan
5000 nan
6000 nan
7000 nan
8000 nan
9000 nan

Hypothesis:  [[ nan]
 [ nan]
 [ nan]
 [ nan]]
Correct (Y):  [[ 0.]
 [ 0.]
 [ 0.]
 [ 0.]]
Accuracy:  0.25

0 votes

I trained an XOR network with output values 0, 1, 0, 1. When I tried to do the same thing after changing the output values to larger numbers, I got this error.

0 votes

Diego Rueda Points 100

Your network is not learning: the learning_rate you are using is too large, so the cost jumps to NaN almost immediately. If you decrease it you will see sensible output, although you will still have other problems that you should check.
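As a side note, here is one way to see where the NaN can come from (a NumPy sketch of the cost formula from the question, not the original TensorFlow graph): the cross-entropy cost `-Y*tf.log(hy) - (1-Y)*tf.log(1-hy)` is only bounded when the targets lie in [0, 1]. With a target like 3.0 (as in y_data), the cost decreases without limit as the sigmoid output approaches 1, so large gradient steps drive the weights into the region where log(1-hy) is -inf:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(y, p):
    # The cost from the question; only well-behaved for targets y in [0, 1].
    return float(-y * np.log(p) - (1 - y) * np.log(1 - p))

# A target outside [0, 1], like y_data's 3.0, makes the cost unbounded below:
for z in (0.0, 5.0, 40.0):  # logits growing as training pushes the sigmoid toward 1
    p = sigmoid(z)
    print("logit", z, "cost", cross_entropy(3.0, p))
# The cost falls from positive, to negative, to -inf; the resulting infinite
# gradients then turn the weights (and every later cost value) into NaN.
```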

0 votes

I increased the learning_rate and got this training result:

Hypothesis: [[ 0.95138347] [ 0.96444738] [ 0.95138347] [ 0.98302716]]
Correct (Y): [[ 1.] [ 1.] [ 1.] [ 1.]]
Accuracy: 0.75

But the output data are [[2.], [1.], [1.], [1.], [1.]] — why are the predictions close to the value of 1 and not the value of 2?

0 votes

The output of your network is a sigmoid, which ranges from 0 to 1, so the outputs can only fall in that range. If you want to do regression, you must change the last layer accordingly.
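A minimal NumPy sketch of this point (illustrative values, not the thread's trained weights): a sigmoid squashes every pre-activation into (0, 1), so no amount of training can make the output reach 2, while an identity (linear) output layer is unbounded:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.0, 4.0, 20.0])  # arbitrary last-layer pre-activations

squashed = sigmoid(z)  # sigmoid output: every entry stays strictly inside (0, 1)
linear = z             # identity output: can reach regression targets like 2 or 3

print(squashed)
print(linear)
```

For regression on the question's data, one option (a sketch, not the asker's code) would be to drop the final sigmoid, e.g. `hy = tf.matmul(L2, W2) + b2`, and train with a squared-error cost such as `tf.reduce_mean(tf.square(hy - Y))`.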

HolaDevs.com

HolaDevs is an online community of programmers and software lovers.
You can check other people's responses or create a new question if you don't find a solution.
