Handwritten Digit Recognition with TensorFlow


Foreword

This article is a summary of the MOOC taught by Professor Cao Jian at Peking University. After working through the course, I wrote out the code myself; it is presented below, along with some of my own observations.

Environment

tensorflow==1.3.0

python==3.5.6

OS: Ubuntu 16

Approach to Handwritten Digit Recognition

1. First, train a model on the official MNIST dataset that ships with TensorFlow.
2. Next, measure the model's accuracy on the official MNIST test set.
3. Finally, feed our own handwritten digit images into the network and read off the predictions.

Code Walkthrough

1. First we design the forward-propagation network. This .py file defines the network's inputs, parameters, and outputs, and implements the forward pass: get_weight generates the weight parameters w, get_bias generates the biases b, and forward builds the network structure.

# 0. Import modules
import tensorflow as tf

INPUT_NODE = 784    # 28x28 pixels, flattened
OUTPUT_NODE = 10    # one class per digit
LAYER1_NODE = 500   # hidden-layer width

# Define the network's inputs, parameters, and outputs, and the forward pass
def get_weight(shape, regularizer):
    # Truncated normal: any draw more than two standard deviations from the
    # mean is redrawn. stddev is the standard deviation.
    w = tf.Variable(tf.truncated_normal(shape, stddev=0.1))
    if regularizer != None:
        # Add each weight's L2 penalty to the 'losses' collection
        tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w))
    return w

def get_bias(shape):
    b = tf.Variable(tf.zeros(shape))
    return b

def forward(x, regularizer):
    w1 = get_weight([INPUT_NODE, LAYER1_NODE], regularizer)
    b1 = get_bias([LAYER1_NODE])
    y1 = tf.nn.relu(tf.matmul(x, w1) + b1)  # hidden layer with ReLU nonlinearity

    w2 = get_weight([LAYER1_NODE, OUTPUT_NODE], regularizer)
    b2 = get_bias([OUTPUT_NODE])
    # No activation on the output layer: it produces raw logits, and the
    # softmax is applied later inside the cross-entropy loss
    y = tf.matmul(y1, w2) + b2
    return y
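
As a quick sanity check (my own addition, not part of the course code), you can build the graph and confirm that the output has one logit per digit class:

import tensorflow as tf
import mnist_forward

x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE])
y = mnist_forward.forward(x, None)   # regularizer=None: no penalty needed just to inspect shapes
print(y.shape)                       # (?, 10)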

2. Next comes the training script, mnist_backward.py. It defines the network's training hyperparameters (explained in the code comments) and the backpropagation structure that corrects the parameters. A checkpoint (ckpt) is used so that training can resume from where it left off, and the training progress, i.e. the loss value, is printed to the console.

# 0. Import modules and load the dataset
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_forward
import os

BATCH_SIZE = 200               # samples fed in per training step
LEARNING_RATE_BASE = 0.1       # initial learning rate
LEARNING_RATE_DECAY = 0.99     # learning-rate decay rate
REGULARIZE = 0.0001            # regularization weight
STEPS = 50000
MOVING_AVERAGE_DECAY = 0.99    # moving-average decay factor
MODEL_SAVE_PATH = "./model/"
MODEL_NAME = "mnist_model"

def backward(mnist):
    x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE])
    y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE])
    y = mnist_forward.forward(x, REGULARIZE)  # rebuild the network to get predictions
    global_step = tf.Variable(0, trainable=False)

    # Loss: sparse softmax cross-entropy between logits and labels,
    # plus the L2 regularization terms collected in forward()
    ce = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cem = tf.reduce_mean(ce)
    loss = cem + tf.add_n(tf.get_collection('losses'))

    # Exponentially decaying learning rate. With staircase=True the rate is
    # updated once every decay_steps steps; with False it decays every step.
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY,
        staircase=True)

    # Backpropagation (the regularization terms are already in the loss)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    # Maintain exponential moving averages of all trainable variables
    ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    ema_op = ema.apply(tf.trainable_variables())
    with tf.control_dependencies([train_step, ema_op]):
        train_op = tf.no_op(name='train')

    saver = tf.train.Saver()

    with tf.Session() as sess:
        init_op = tf.global_variables_initializer()
        sess.run(init_op)

        # Resume from the latest checkpoint if one exists
        ckpt = tf.train.get_checkpoint_state(MODEL_SAVE_PATH)
        if ckpt and ckpt.model_checkpoint_path:
            saver.restore(sess, ckpt.model_checkpoint_path)

        for i in range(STEPS):
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            _, loss_value, step = sess.run([train_op, loss, global_step], feed_dict={x: xs, y_: ys})
            if i % 1000 == 0:
                print("After %d training step(s), loss on training batch is %g." % (step, loss_value))
                saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME), global_step=global_step)

def main():
    mnist = input_data.read_data_sets("./data/", one_hot=True)
    backward(mnist)

if __name__ == '__main__':
    main()
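
To make the staircase schedule concrete, here is the arithmetic it performs, as a plain-Python sketch (assuming the standard 55,000-image MNIST training split; the helper name lr_at is my own):

# With staircase=True: learning_rate = base * decay_rate ** (global_step // decay_steps)
decay_steps = 55000 // 200           # 275 batches per epoch
def lr_at(step):
    return 0.1 * 0.99 ** (step // decay_steps)

print(lr_at(0))       # 0.1    (first epoch)
print(lr_at(275))     # 0.099  (the rate drops once per epoch)
print(lr_at(50000))   # ~0.016 (after 181 epochs' worth of steps)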

3. With that, the network is fully implemented. Now that we have a trained model, we naturally want to know whether it can actually solve the problem at hand, so we write a test script to measure its accuracy. The test data also comes from TensorFlow's official MNIST dataset. We rebuild the network and evaluate it; after roughly one round you should already see accuracy above 90%.

import time
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_forward
import mnist_backward

TEST_INTERVAL_SECS = 10  # seconds to wait between evaluations

def test(mnist):
    with tf.Graph().as_default() as g:
        x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE])
        y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE])
        y = mnist_forward.forward(x, None)

        # Restore the moving-average (shadow) values of the variables
        ema = tf.train.ExponentialMovingAverage(mnist_backward.MOVING_AVERAGE_DECAY)
        ema_restore = ema.variables_to_restore()
        saver = tf.train.Saver(ema_restore)

        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        while True:
            with tf.Session() as sess:
                ckpt = tf.train.get_checkpoint_state(mnist_backward.MODEL_SAVE_PATH)
                if ckpt and ckpt.model_checkpoint_path:
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    # The global step is encoded in the checkpoint file name
                    global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
                    accuracy_score = sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
                    print("After %s training step(s), test accuracy = %g" % (global_step, accuracy_score))
                else:
                    print('No checkpoint file found')
                    return
            time.sleep(TEST_INTERVAL_SECS)

def main():
    mnist = input_data.read_data_sets("./data/", one_hot=True)
    test(mnist)

if __name__ == '__main__':
    main()
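
One detail worth calling out: passing ema.variables_to_restore() to the Saver makes it load each variable's shadow (moving-average) value from the checkpoint instead of the raw trained value, which is what the training script saved under the shadow names. A minimal sketch of the mapping it builds (my own illustration, not part of the course code; the exact repr may differ by TF version):

import tensorflow as tf

v = tf.Variable(0.0, name='v')
ema = tf.train.ExponentialMovingAverage(0.99)
ema.apply([v])
# Maps the shadow-variable name in the checkpoint to the live variable,
# so Saver.restore() writes the averaged value into v:
print(ema.variables_to_restore())
# e.g. {'v/ExponentialMovingAverage': <tf.Variable 'v:0' ...>}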

4. Finally, happily snap a photo of a handwritten digit and see whether this home-grown model holds up. Here is the code:

import tensorflow as tf
import numpy as np
import mnist_forward
import mnist_backward
from PIL import Image

def restore_model(testPicArr):
    # Build a fresh default graph; most operations mirror the training script
    with tf.Graph().as_default() as tg:
        x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE])
        y = mnist_forward.forward(x, None)
        preValue = tf.argmax(y, 1)  # index of the highest-scoring class

        variable_averages = tf.train.ExponentialMovingAverage(mnist_backward.MOVING_AVERAGE_DECAY)
        variable_to_restore = variable_averages.variables_to_restore()
        saver = tf.train.Saver(variable_to_restore)

        with tf.Session() as sess:
            # Locate the most recently saved checkpoint
            ckpt = tf.train.get_checkpoint_state(mnist_backward.MODEL_SAVE_PATH)
            if ckpt and ckpt.model_checkpoint_path:
                saver.restore(sess, ckpt.model_checkpoint_path)
                preValue = sess.run(preValue, feed_dict={x: testPicArr})
                return preValue
            else:
                print("No checkpoint file found")
                return -1

def pre_pic(picName):
    img = Image.open(picName)
    reIm = img.resize((28, 28), Image.ANTIALIAS)
    im_arr = np.array(reIm.convert('L'))
    threshold = 180  # binarization threshold; pick a sensible value for your images
    for i in range(28):
        for j in range(28):
            # Invert (MNIST digits are white on black), then binarize
            im_arr[i][j] = 255 - im_arr[i][j]
            if im_arr[i][j] < threshold:
                im_arr[i][j] = 0
            else:
                im_arr[i][j] = 255
    nm_arr = im_arr.reshape([1, 784])
    nm_arr = nm_arr.astype(np.float32)
    img_ready = np.multiply(nm_arr, 1.0 / 255.0)  # scale pixel values to [0, 1]
    return img_ready

def application():
    # testNum = int(input("input the number of test pictures:"))
    testNum = 1
    for i in range(testNum):
        # testPic = input("the path of test pictures:")
        testPic = 'pic/1.jpg'
        testPicArr = pre_pic(testPic)
        preValue = restore_model(testPicArr)
        print("The prediction number is : ", preValue)

if __name__ == '__main__':
    application()

5. The test pictures are shown below; the recognition results are as follows:

root@iZ2zef0icee95uw35ttpgmZ:/home/kai/mnist# python3 hand_write_app.py
The prediction number is : [2]
root@iZ2zef0icee95uw35ttpgmZ:/home/kai/mnist# cd pic
root@iZ2zef0icee95uw35ttpgmZ:/home/kai/mnist/pic# ls
1.jpg
root@iZ2zef0icee95uw35ttpgmZ:/home/kai/mnist/pic# cd ./
root@iZ2zef0icee95uw35ttpgmZ:/home/kai/mnist/pic# cd ../
root@iZ2zef0icee95uw35ttpgmZ:/home/kai/mnist# python3 hand_write_app.py
The prediction number is : [7]
root@iZ2zef0icee95uw35ttpgmZ:/home/kai/mnist# python3 hand_write_app.py
The prediction number is : [5]

6. Analysis: focus only on the values inside the brackets [ ]; between runs I used pscp to copy over and replace the image. Three images were recognized in total. The last one, a 6, was misidentified as a 5, while the other two were recognized correctly. My thoughts and takeaways follow below.

---------- The code ends here (the four .py files above); what follows is my analysis ----------

Takeaways

Analysis 1: I used three images here, and at first none of them could be recognized. After some debugging I found that the threshold was set too low: I had originally used 50, and with it not a single image was recognized. The code itself was fine, so the problem had to lie in the image preprocessing. I printed out (1) the image's grayscale matrix, (2) the inverted grayscale values with the threshold at 50, and (3) what the image looks like with the threshold at 50. The results: with the threshold at 50, the inverted values clearly show that the wrong threshold destroys the features of the digit 2, and the resulting image is essentially all black and useless for prediction.
Analysis 2: with the threshold raised to 180 (a value I tuned by hand), the results improved markedly; the digit outlines are all clearly preserved, so the images can be used.
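
Incidentally, the per-pixel double loop in pre_pic can be written with vectorized NumPy operations, which makes experimenting with the threshold cheap. A sketch of an equivalent version (my own rewrite; the name pre_pic_vectorized is just for illustration):

import numpy as np
from PIL import Image

def pre_pic_vectorized(picName, threshold=180):
    # Resize to 28x28, convert to grayscale, invert (MNIST is white-on-black)
    im = Image.open(picName).resize((28, 28), Image.ANTIALIAS).convert('L')
    arr = 255 - np.array(im, dtype=np.int32)
    # Binarize: everything below the threshold becomes background
    arr = np.where(arr < threshold, 0, 255)
    # Flatten and scale to [0, 1], matching the network's expected input
    return (arr.reshape(1, 784) / 255.0).astype(np.float32)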

Conclusion

Image preprocessing really matters: we have to make sure the digit's features survive it.
