Keras: CNN multi-class classifier
Starting from Keras's official binary classification example (see here), I implemented a multi-class classifier with TensorFlow as the backend. In that example there are two classes (dog/cat); I now have 50 classes, and the data is stored in folders the same way.
During training, the loss does not decrease and the accuracy does not improve. I changed the activation of the last layer from sigmoid to softmax, changed the loss from binary_crossentropy to categorical_crossentropy, and changed class_mode to categorical.
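Concretely, the changes I made relative to the binary example boil down to this (just a sketch of the relevant lines, with the old binary settings shown as comments):
# In the original two-class (dog/cat) example:
#   model.add(Dense(1))
#   model.add(Activation('sigmoid'))
#   model.compile(loss='binary_crossentropy', ...)
#   ...flow_from_directory(..., class_mode='binary')
# In my 50-class version:
model.add(Dense(50))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
# ...and flow_from_directory(..., class_mode='categorical')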
Here is my full code:
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from keras.optimizers import SGD
optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# dimensions of our images.
img_width, img_height = 224, 224
train_data_dir = 'images/train'
validation_data_dir = 'images/val'
nb_train_samples = 209222
nb_validation_samples = 40000
epochs = 50
batch_size = 16
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(50))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
train_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
    directory=train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = train_datagen.flow_from_directory(
    directory=validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
model.save_weights('weights.h5')
Any ideas on where I might be going wrong? Any input would be greatly appreciated!
Edit: as asked by @RobertValencia, here is the beginning of the latest training log:
Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.7.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.7.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.7.5 locally
Found 3517 images belonging to 50 classes.
<keras.preprocessing.image.DirectoryIterator object at 0x7fd1d4515c10>
Found 2451 images belonging to 50 classes.
Epoch 1/50
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GRID K520
major: 3 minor: 0 memoryClockRate (GHz) 0.797
pciBusID 0000:00:03.0
Total memory: 3.94GiB
Free memory: 3.91GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GRID K520, pci bus id: 0000:00:03.0)
8098/13076 [=================>............] - ETA: 564s - loss: 15.6869 - categorical_accuracy: 0.0267
Considering the number of classes you need to distinguish, you might get better results by increasing the complexity of the model, as well as by using a different optimizer. Try this model, which is partially based on the VGG-16 CNN architecture but not as complex:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.optimizers import Nadam

model = Sequential()
# input_shape matches the 224x224 RGB images used in the question
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(256, (3, 3), activation='relu'))
model.add(Conv2D(256, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dense(1024, activation='relu'))
model.add(Dense(50, activation='softmax'))

optimizer = Nadam(lr=0.002,
                  beta_1=0.9,
                  beta_2=0.999,
                  epsilon=1e-08,
                  schedule_decay=0.004)

model.compile(loss='categorical_crossentropy',
              optimizer=optimizer,
              metrics=['categorical_accuracy'])
If you get better results with it, I suggest looking into the VGG-16 model itself:
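If you go that route, a common shortcut is to reuse the ImageNet-pretrained VGG-16 convolutional base from keras.applications and only train a new 50-way classifier head. A minimal sketch, assuming 224x224 RGB inputs as in the question (the Dense(256) head size is an arbitrary choice):
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense

# Load the convolutional base with ImageNet weights, without the original classifier
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the pretrained base at first

# Add a new classification head for the 50 classes
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(50, activation='softmax')(x)

model = Model(inputs=base.input, outputs=predictions)
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['categorical_accuracy'])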
Thanks, I will try it, but it seems like the kind of change that takes a model from 80% accuracy to 95%, not from 3% to 95%. I will definitely try it within the next few hours and let you know. –
Alright. Let us know if you get better results. I am curious about the outcome as well. –
What is up with your optimizer settings? Why such a small momentum, and why is Nesterov momentum disabled? – nemo
@nemo Thanks, I copied the wrong optimizer settings here; just edited. I run into the same problem with 'optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)' (as now edited in the post). –
Do you have a directory structure with one folder per class for this problem? Make sure the (x, y) pairs the training generator produces are correct (try calling next() on the train generator and look at the result). – nemo
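For reference, that check could look like this (a minimal sketch, where train_generator is the generator defined in the question):
# Inspect one batch from the training generator
x_batch, y_batch = next(train_generator)
print(x_batch.shape)                   # e.g. (16, 224, 224, 3)
print(y_batch.shape)                   # e.g. (16, 50), one-hot labels
print(train_generator.class_indices)   # mapping from class folder name to label index
print(x_batch.min(), x_batch.max())    # check the input value range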