A TensorFlow MNIST Example with a Local Data Source on Windows (Python 3.6)

I'll skip the preliminaries; for introductions to MNIST and TensorFlow, please look them up in other blog posts.

Operating system: Windows 10

Python environment: 3.6, with numpy, TensorFlow, and the other required libraries already installed

Problem: there are already plenty of articles on this example online; the change made here is to use MNIST training data that has already been downloaded locally, instead of downloading it again.

Step 1: Download the MNIST data. Download address: http://yann.lecun.com/exdb/mnist/


The four files to download are train-images-idx3-ubyte.gz, train-labels-idx1-ubyte.gz, t10k-images-idx3-ubyte.gz, and t10k-labels-idx1-ubyte.gz. After downloading, save them locally; my save location is D:\python\Python36\testdata
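A minimal sketch to confirm the four archives are in place (assuming the same folder; the file names match the constants used later in input_data.py):

import os

localpath = r'D:\python\Python36\testdata'
for name in ('train-images-idx3-ubyte.gz', 'train-labels-idx1-ubyte.gz',
             't10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz'):
    # print each expected MNIST archive and whether it exists locally
    print(name, os.path.exists(os.path.join(localpath, name)))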

Step 2: Create the data-loading module, i.e. the input_data.py file. I copied the entire input_data code; the version I copied is at: http://blog.csdn.net/FANGPINLEI/article/details/51790284


Step 3: Modify the copied code to read from the local data source

1. A few of the library imports differ. The copied code was written for Python 2 and still imports xrange from six.moves; under Python 3.6, change the relevant imports to:

import urllib.request  # urllib.request.urlretrieve is used in the (now unused) maybe_download helper
#from six.moves import xrange  # pylint: disable=redefined-builtin
2. Add a variable, localpath, holding the path to the local data source
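In input_data.py this is just a module-level constant pointing at the folder from Step 1 (see the full listing below):

localpath = r'D:\python\Python36\testdata'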



3. Since we no longer use xrange, change the range expression in the next_batch function of input_data.py to the range method (Python 3 no longer has xrange)
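In the copied file, xrange appears in the fake_data branch of next_batch; after the change those lines read (see the full listing below):

return [fake_image for _ in range(batch_size)], [
    fake_label for _ in range(batch_size)]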


4. Modify the read_data_sets function in input_data.py so that it no longer downloads the files and instead uses the local data source

1) Prefix each file name with a \\ path separator
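With the \\ prefix, the file-name constants in read_data_sets become (see the full listing below):

TRAIN_IMAGES = '\\train-images-idx3-ubyte.gz'
TRAIN_LABELS = '\\train-labels-idx1-ubyte.gz'
TEST_IMAGES = '\\t10k-images-idx3-ubyte.gz'
TEST_LABELS = '\\t10k-labels-idx1-ubyte.gz'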



2) Stop using the maybe_download function to obtain the file path; use localpath + filename directly to locate each data file
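Each of the four maybe_download calls is replaced by a simple concatenation; for example, for the training images (see the full listing below):

#local_file = maybe_download(TRAIN_IMAGES, train_dir)
local_file = localpath + TRAIN_IMAGES
train_images = extract_images(local_file)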


Step 4: Write the test code. Create a new .py file (mine is test_mnist.py) and again copy the code from http://blog.csdn.net/FANGPINLEI/article/details/51790284

Only one change is needed: replace the line that initializes the TensorFlow variables with

init = tf.global_variables_initializer()

Step 5: Run it and debug; done.
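Run test_mnist.py with the Python 3.6 interpreter from the folder containing both files (for example, python test_mnist.py). The console prints the four 'Extracting ...' lines followed by the test-set accuracy; for this plain softmax model it is typically around 0.91.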


Code:

input_data.py

#coding=utf-8
"""Functions for downloading and reading MNIST data."""
#2017-10-08: changed the file download to read from local files instead
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gzip
import os
import tensorflow.python.platform
import numpy
import urllib.request
#from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf
# Path to the local MNIST data
localpath = r'D:\python\Python36\testdata'
SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'

# This function is not used in this example; kept for reference
def maybe_download(filename, work_directory):
  """Download the data from Yann's website, unless it's already here."""
  if not os.path.exists(work_directory):
    os.mkdir(work_directory)
  filepath = os.path.join(work_directory, filename)
  if not os.path.exists(filepath):
    filepath, _ = urllib.request.urlretrieve(SOURCE_URL + filename, filepath)
    statinfo = os.stat(filepath)
    print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
  return filepath


def _read32(bytestream):
  dt = numpy.dtype(numpy.uint32).newbyteorder('>')
  return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]


def extract_images(filename):
  """Extract the images into a 4D uint8 numpy array [index, y, x, depth]."""
  print('Extracting', filename)
  with gzip.open(filename) as bytestream:
    magic = _read32(bytestream)
    if magic != 2051:
      raise ValueError(
          'Invalid magic number %d in MNIST image file: %s' %
          (magic, filename))
    num_images = _read32(bytestream)
    rows = _read32(bytestream)
    cols = _read32(bytestream)
    buf = bytestream.read(rows * cols * num_images)
    data = numpy.frombuffer(buf, dtype=numpy.uint8)
    data = data.reshape(num_images, rows, cols, 1)
    return data


def dense_to_one_hot(labels_dense, num_classes=10):
  """Convert class labels from scalars to one-hot vectors."""
  num_labels = labels_dense.shape[0]
  index_offset = numpy.arange(num_labels) * num_classes
  labels_one_hot = numpy.zeros((num_labels, num_classes))
  labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
  return labels_one_hot


def extract_labels(filename, one_hot=False):
  """Extract the labels into a 1D uint8 numpy array [index]."""
  print('Extracting', filename)
  with gzip.open(filename) as bytestream:
    magic = _read32(bytestream)
    if magic != 2049:
      raise ValueError(
          'Invalid magic number %d in MNIST label file: %s' %
          (magic, filename))
    num_items = _read32(bytestream)
    buf = bytestream.read(num_items)
    labels = numpy.frombuffer(buf, dtype=numpy.uint8)
    if one_hot:
      return dense_to_one_hot(labels)
    return labels
class DataSet(object):
  def __init__(self, images, labels, fake_data=False, one_hot=False,
               dtype=tf.float32):
    """Construct a DataSet.
    one_hot arg is used only if fake_data is true.  `dtype` can be either
    `uint8` to leave the input as `[0, 255]`, or `float32` to rescale into
    `[0, 1]`.
    """
    dtype = tf.as_dtype(dtype).base_dtype
    if dtype not in (tf.uint8, tf.float32):
      raise TypeError('Invalid image dtype %r, expected uint8 or float32' %
                      dtype)
    if fake_data:
      self._num_examples = 10000
      self.one_hot = one_hot
    else:
      assert images.shape[0] == labels.shape[0], (
          'images.shape: %s labels.shape: %s' % (images.shape,
                                                 labels.shape))
      self._num_examples = images.shape[0]
      # Convert shape from [num examples, rows, columns, depth]
      # to [num examples, rows*columns] (assuming depth == 1)
      assert images.shape[3] == 1
      images = images.reshape(images.shape[0],
                              images.shape[1] * images.shape[2])
      if dtype == tf.float32:
        # Convert from [0, 255] -> [0.0, 1.0].
        images = images.astype(numpy.float32)
        images = numpy.multiply(images, 1.0 / 255.0)
    self._images = images
    self._labels = labels
    self._epochs_completed = 0
    self._index_in_epoch = 0
  @property
  def images(self):
    return self._images
  @property
  def labels(self):
    return self._labels
  @property
  def num_examples(self):
    return self._num_examples
  @property
  def epochs_completed(self):
    return self._epochs_completed
  def next_batch(self, batch_size, fake_data=False):
    """Return the next `batch_size` examples from this data set."""
    if fake_data:
      fake_image = [1] * 784
      if self.one_hot:
        fake_label = [1] + [0] * 9
      else:
        fake_label = 0
      return [fake_image for _ in range(batch_size)], [
          fake_label for _ in range(batch_size)]
    start = self._index_in_epoch
    self._index_in_epoch += batch_size
    if self._index_in_epoch > self._num_examples:
      # Finished epoch
      self._epochs_completed += 1
      # Shuffle the data
      perm = numpy.arange(self._num_examples)
      numpy.random.shuffle(perm)
      self._images = self._images[perm]
      self._labels = self._labels[perm]
      # Start next epoch
      start = 0
      self._index_in_epoch = batch_size
      assert batch_size <= self._num_examples
    end = self._index_in_epoch
    return self._images[start:end], self._labels[start:end]

def read_data_sets(train_dir, fake_data=False, one_hot=False, dtype=tf.float32):
  class DataSets(object):
    pass
  data_sets = DataSets()
  if fake_data:
    def fake():
      return DataSet([], [], fake_data=True, one_hot=one_hot, dtype=dtype)
    data_sets.train = fake()
    data_sets.validation = fake()
    data_sets.test = fake()
    return data_sets
  TRAIN_IMAGES = '\\train-images-idx3-ubyte.gz'
  TRAIN_LABELS = '\\train-labels-idx1-ubyte.gz'
  TEST_IMAGES = '\\t10k-images-idx3-ubyte.gz'
  TEST_LABELS = '\\t10k-labels-idx1-ubyte.gz'
  VALIDATION_SIZE = 5000
  # Read the local files directly; maybe_download is no longer called
  #local_file = maybe_download(TRAIN_IMAGES, train_dir)
  local_file = localpath + TRAIN_IMAGES
  train_images = extract_images(local_file)
  #local_file = maybe_download(TRAIN_LABELS, train_dir)
  local_file = localpath + TRAIN_LABELS
  train_labels = extract_labels(local_file, one_hot=one_hot)
  #local_file = maybe_download(TEST_IMAGES, train_dir)
  local_file = localpath + TEST_IMAGES
  test_images = extract_images(local_file)
  #local_file = maybe_download(TEST_LABELS, train_dir)
  local_file = localpath + TEST_LABELS
  test_labels = extract_labels(local_file, one_hot=one_hot)
  validation_images = train_images[:VALIDATION_SIZE]
  validation_labels = train_labels[:VALIDATION_SIZE]
  train_images = train_images[VALIDATION_SIZE:]
  train_labels = train_labels[VALIDATION_SIZE:]
  data_sets.train = DataSet(train_images, train_labels, dtype=dtype)
  data_sets.validation = DataSet(validation_images, validation_labels, dtype=dtype)
  data_sets.test = DataSet(test_images, test_labels, dtype=dtype)
  return data_sets


test_mnist.py

# coding=utf-8
# File Name: test_mnist.py
# Author: weironghao


import input_data
import tensorflow as tf
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)  # the directory argument is ignored; data is read from localpath in input_data.py

# Softmax regression model
x = tf.placeholder("float", [None, 784])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x,W) + b)



# Train the model
y_ = tf.placeholder("float", [None,10])
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Evaluate the model
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))