I'm trying to apply the expert portion of the tutorial to my own data but I keep running into dimension errors. Here's the code leading up to the error.
import tensorflow as tf

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')
W_conv1 = weight_variable([1, 8, 1, 4])
b_conv1 = bias_variable([4])
x_image = tf.reshape(tf_in, [-1,2,8,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
And then when I try to run these commands:
W_conv2 = weight_variable([1, 4, 4, 8])
b_conv2 = bias_variable([8])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
I get the following error:
ValueError Traceback (most recent call last)
<ipython-input-41-7ab0d7765f8c> in <module>()
3
4 h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
----> 5 h_pool2 = max_pool_2x2(h_conv2)
ValueError: ('filter must not be larger than the input: ', 'Filter: [', Dimension(2), 'x', Dimension(2), '] ', 'Input: [', Dimension(1), 'x', Dimension(4), '] ')
Just for some background information: the data I'm dealing with is a CSV file where each row contains 10 features and one label column that can be a 1 or a 0. What I'm trying to get is the probability that the label column will equal a 1.
You have to shape the input so it is compatible with both the training tensor and the output. If your input is length 1, your output should be length 1 (here "length" stands in for "dimension").
When you're dealing with the convolution and pooling functions (see code below)-
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 1, 1, 1],
                          strides=[1, 1, 1, 1], padding='SAME')
Notice how I changed the strides and the ksize to [1, 1, 1, 1]. This will match the output to a 1-dimensional input and prevent errors down the road.
When you're defining your weight variable (see code below)-
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
you're going to have to make the first two numbers conform to the feature tensor that you are using to train your model; the last two numbers will be the dimension of the predicted output (the same as the dimension of the input).
W_conv1 = weight_variable([1, 10, 1, 1])
b_conv1 = bias_variable([1])
Notice the [1, 10, at the beginning, which signifies that the feature tensor is going to be a 1x10 feature tensor; the last two numbers, 1, 1], correspond to the dimensions of the input and output tensors/predictors.
When you reshape your x_foo tensor (I call it x_ [x prime]), you, for whatever reason, have to define it like so-
x_ = tf.reshape(x, [-1,1,10,1])
Notice the 1 and 10 in the middle- ...1,10,... Once again, these numbers correspond to the dimensions of your feature tensor.
For every bias variable, you take the final number of the previously defined weight variable. For example, since W_conv1 = weight_variable([1, 10, 1, 1]), you take that final number and put it into your bias variable so it can match the dimensions of the input, like so: b_conv1 = bias_variable([1]).
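The shape bookkeeping above can be sketched in plain Python. This models only the shape arithmetic of 'SAME' convolution and pooling, not TensorFlow itself, and the helper names conv_shape and pool_shape are my own:

```python
import math

def conv_shape(h, w):
    # conv2d with 'SAME' padding and stride 1 preserves spatial dimensions
    return h, w

def pool_shape(h, w, k, s):
    # 'SAME' padding: output dim = ceil(input dim / stride);
    # with ksize and stride of 1 this is a no-op on the shape
    return math.ceil(h / s), math.ceil(w / s)

shape = (1, 10)                   # x_ = tf.reshape(x, [-1, 1, 10, 1])
shape = conv_shape(*shape)        # conv1 -> (1, 10)
shape = pool_shape(*shape, 1, 1)  # pool1 with ksize [1, 1, 1, 1] -> (1, 10)
shape = conv_shape(*shape)        # conv2 -> (1, 10)
shape = pool_shape(*shape, 1, 1)  # pool2 -> (1, 10)
print(shape)  # (1, 10)
```

Because every layer leaves the 1x10 feature map unchanged, no window can ever be larger than its input.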
If you need any more explanation please comment below.
The dimensions you are using for the filter are not matching the output of the hidden layer.
Let me see if I understood you: your input is composed of 8 features, and you want to reshape it into a 2x4 matrix, right?
The weights you created with weight_variable([1, 8, 1, 4]) expect a 1x8 input in one channel, and produce a 1x8 output in 4 channels (or hidden units). The pooling filter you are using sweeps the input in 2x2 squares. However, since the result of the weights is 1x8, they won't match.
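To make the mismatch concrete, here is a plain-Python trace of the shapes in the question's code. The size check in pool_shape imitates the error from the traceback; it is an illustration, not TensorFlow's actual implementation:

```python
import math

def conv_shape(h, w):
    # conv2d with 'SAME' padding and stride 1 preserves spatial dimensions
    return h, w

def pool_shape(h, w, k, s):
    # models the size check from the traceback: the pooling
    # window must fit inside the input
    if k > h or k > w:
        raise ValueError(
            f"filter must not be larger than the input: "
            f"Filter: [{k}x{k}] Input: [{h}x{w}]")
    # 'SAME' padding: output dim = ceil(input dim / stride)
    return math.ceil(h / s), math.ceil(w / s)

shape = (2, 8)                    # x_image = tf.reshape(tf_in, [-1, 2, 8, 1])
shape = conv_shape(*shape)        # h_conv1 -> (2, 8)
shape = pool_shape(*shape, 2, 2)  # h_pool1 -> (1, 4)
shape = conv_shape(*shape)        # h_conv2 -> (1, 4)
try:
    pool_shape(*shape, 2, 2)      # h_pool2: 2x2 window vs. 1x4 input
except ValueError as e:
    print(e)  # Filter: [2x2] Input: [1x4]
```

The first 2x2 pool collapses the 2x8 map to 1x4, so the second 2x2 pool is asked to fit a height-2 window into a height-1 input, which is exactly the error reported.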
You should reshape the input as
x_image = tf.reshape(tf_in, [-1,2,4,1])
Now, your input is actually 2x4 instead of 1x8. Then you need to change the weight shape to (2, 4, 1, hidden_units) to deal with a 2x4 input. It will also produce a 2x4 output, and the 2x2 filter can now be applied.
After that, the filter will match the output of the weights. Also note that you will have to change the shape of your second weight matrix to weight_variable([2, 4, hidden_units, hidden2_units]).
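Tracing the corrected shapes the same way (again plain Python modeling only the shape arithmetic; hidden_units and hidden2_units stand for whatever channel counts you choose):

```python
import math

def conv_shape(h, w):
    # 'SAME' conv with stride 1: spatial shape preserved
    return h, w

def pool_shape(h, w, k, s):
    # the pooling window must fit inside the input
    if k > h or k > w:
        raise ValueError("filter must not be larger than the input")
    return math.ceil(h / s), math.ceil(w / s)

shape = (2, 4)                    # x_image = tf.reshape(tf_in, [-1, 2, 4, 1])
shape = conv_shape(*shape)        # conv1, weights (2, 4, 1, hidden_units) -> (2, 4)
shape = pool_shape(*shape, 2, 2)  # 2x2 max pool now fits -> (1, 2)
shape = conv_shape(*shape)        # conv2, weights (2, 4, hidden_units, hidden2_units) -> (1, 2)
print(shape)  # (1, 2)
```

With the 2x4 reshape, the first 2x2 pool succeeds and produces a 1x2 map for the second convolution.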