InvalidArgumentError on softmax in TensorFlow
Problem description:
I have the following function:
def forward_propagation(self, x):
    # The total number of time steps
    T = len(x)
    # During forward propagation we save all hidden states in s because we need them later.
    # We add one additional element for the initial hidden state, which we set to 0
    s = tf.zeros([T+1, self.hidden_dim])
    # The outputs at each time step. Again, we save them for later.
    o = tf.zeros([T, self.word_dim])

    a = tf.placeholder(tf.float32)
    b = tf.placeholder(tf.float32)
    c = tf.placeholder(tf.float32)

    s_t = tf.nn.tanh(a + tf.reduce_sum(tf.multiply(b, c)))
    o_t = tf.nn.softmax(tf.reduce_sum(tf.multiply(a, b)))

    # For each time step...
    with tf.Session() as sess:
        s = sess.run(s)
        o = sess.run(o)
        for t in range(T):
            # Note that we are indexing U by x[t]. This is the same as multiplying U with a one-hot vector.
            s[t] = sess.run(s_t, feed_dict={a: self.U[:, x[t]], b: self.W, c: s[t-1]})
            o[t] = sess.run(o_t, feed_dict={a: self.V, b: s[t]})
    return [o, s]
self.U, self.V, and self.W are numpy arrays. I am trying to take the softmax in
o_t = tf.nn.softmax(tf.reduce_sum(tf.multiply(a, b)))
in the graph, but it gives me an error on this line:
o[t] = sess.run(o_t, feed_dict={a: self.V, b: s[t]})
The error is:
InvalidArgumentError (see above for traceback): Expected begin[0] == 0 (got -1) and size[0] == 0 (got 1) when input.dim_size(0) == 0
[[Node: Slice = Slice[Index=DT_INT32, T=DT_INT32, _device="/job:localhost/replica:0/task:0/cpu:0"](Shape_1, Slice/begin, Slice/size)]]
How should I compute the softmax in TensorFlow?
Answer
The problem arises because you call tf.reduce_sum on the argument of tf.nn.softmax. As a result, the softmax fails because a scalar is not a valid input argument. Did you mean to use tf.matmul instead of the combination of tf.reduce_sum and tf.multiply?
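To make the point concrete, here is a minimal sketch (assuming TensorFlow 1.x; the shapes 8000 and 100 are taken from the error reported in the comments below and are only illustrative). Reducing the product all the way to a scalar is what breaks the softmax; keeping a vector of logits works:

import numpy as np
import tensorflow as tf

a = tf.placeholder(tf.float32)   # a matrix, e.g. self.V with shape (word_dim, hidden_dim)
b = tf.placeholder(tf.float32)   # a vector, e.g. s[t] with shape (hidden_dim,)

scalar_logit = tf.reduce_sum(tf.multiply(a, b))            # rank 0: softmax fails on this
vector_logits = tf.reduce_sum(tf.multiply(a, b), axis=1)   # rank 1: one logit per row of a
probs = tf.nn.softmax(vector_logits)

with tf.Session() as sess:
    V = np.random.rand(8000, 100).astype(np.float32)
    h = np.random.rand(100).astype(np.float32)
    p = sess.run(probs, feed_dict={a: V, b: h})
    print(p.shape)   # (8000,), a probability distribution over the rows of V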
Edit: TensorFlow does not provide an out-of-the-box equivalent of np.dot. If you want to compute the dot product of a matrix and a vector, you have to sum over the index explicitly:
# equivalent to np.dot(a, b) if a.ndim == 2 and b.ndim == 1
c = tf.reduce_sum(a * b, axis=1)
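Applied to the forward pass in the question, the per-step ops could look roughly like this (a sketch; it assumes U has shape (hidden_dim, word_dim), W has shape (hidden_dim, hidden_dim), and V has shape (word_dim, hidden_dim), which the question does not state explicitly):

import tensorflow as tf

u_col  = tf.placeholder(tf.float32, [None])        # U[:, x[t]], shape (hidden_dim,)
W_ph   = tf.placeholder(tf.float32, [None, None])  # W, shape (hidden_dim, hidden_dim)
s_prev = tf.placeholder(tf.float32, [None])        # s[t-1], shape (hidden_dim,)
V_ph   = tf.placeholder(tf.float32, [None, None])  # V, shape (word_dim, hidden_dim)

s_t = tf.nn.tanh(u_col + tf.reduce_sum(W_ph * s_prev, axis=1))  # like np.dot(W, s[t-1])
o_t = tf.nn.softmax(tf.reduce_sum(V_ph * s_t, axis=1))          # softmax over the vocabulary

With these ops, a single sess.run of [o_t, s_t] per time step returns both the output distribution and the new hidden state.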
I am trying to take the dot product of a and b. – yusuf
In that case you should use 'tf.matmul' (if both arguments are matrices), or you have to specify the axis to sum over. For example, if 'a' has shape '(n, k)' and 'b' has shape '(k,)', you can compute the dot product with 'tf.reduce_sum(a * b, axis=1)'. –
tf.matmul gives me this error: Shape must be rank 2 but is rank 1 for 'MatMul' (op: 'MatMul') with input shapes: [8000,100], [100]. I have used o[t].assign(tf.nn.softmax(tf.matmul(self.V, s[t]))) – yusuf
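For completeness, the rank error in the last comment comes from passing a 1-D vector to tf.matmul, which only accepts rank-2 inputs. One workaround (a sketch, assuming self.V has shape (8000, 100) and s[t] has shape (100,) as the error suggests) is to expand the vector into a column and squeeze the result afterwards:

import tensorflow as tf

V_ph = tf.placeholder(tf.float32, [None, None])  # self.V, e.g. shape (8000, 100)
s_ph = tf.placeholder(tf.float32, [None])        # s[t],   e.g. shape (100,)

logits = tf.matmul(V_ph, tf.expand_dims(s_ph, 1))  # (8000, 1): both inputs are now rank 2
o_t = tf.nn.softmax(tf.squeeze(logits))            # squeeze back to shape (8000,)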