I'm new to TensorFlow and want to train a logistic model for classification.
import tensorflow as tf
# Set model weights
W = tf.Variable(tf.zeros([30, 16]))
b = tf.Variable(tf.zeros([16]))
train_X, train_Y, X, Y = input('train.csv')
#construct model
pred = model(X, W, b)
# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(pred), reduction_indices=1))
# Gradient Descent
learning_rate = 0.1
#optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
get_ipython().magic(u'matplotlib inline')
import collections
import matplotlib.pyplot as plt
training_epochs = 200
batch_size = 300
train_X, train_Y, X, Y = input('train.csv')
acc = []
x = tf.placeholder(tf.float32, [None, 30])
y = tf.placeholder(tf.float32, [None, 16])
with tf.Session() as sess:
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.0
#print(type(y_train[0][0]))
print(type(train_X))
print(type(train_X[0][0]))
print X
_, c = sess.run([optimizer, cost], feed_dict = {x: train_X, y: train_Y})
The feed_dict call doesn't work; I get the following complaint:
InvalidArgumentError: You must feed a value for placeholder tensor
'Placeholder_54' with dtype float [[Node: Placeholder_54 =
Placeholder[dtype=DT_FLOAT, shape=[],
_device="/job:localhost/replica:0/task:0/cpu:0"]] Caused by op u'Placeholder_54':
I checked the data types of the training feature data X:
train_X type: <type 'numpy.ndarray'>
train_X[0][0]: <type 'numpy.float32'>
train_X size: (300, 30)
placeholder info: Tensor("Placeholder_56:0", shape=(?, 30), dtype=float32)
I don't know why it complains. I hope someone can help. Thanks.
Solution:
From your error message, the name of the missing placeholder, "Placeholder_54", is suspicious, because it suggests that at least 54 placeholders have been created in the current interpreter session.
There aren't enough details to say for sure, but I have some suspicions. Did you run the same code multiple times in the same interpreter session (e.g., using IPython/Jupyter or the Python shell)? Assuming that's the case, I suspect your cost tensor depends on a placeholder created in a previous execution of this code.
Indeed, your code creates the two tf.placeholder() tensors x and y after building the rest of the model, so it seems possible that either:

> the missing placeholder was created in a previous execution of this code, or
> the input() function calls tf.placeholder() internally, and you should be feeding those placeholders instead (perhaps the tensors X and Y?).