Today I'll walk through the code in detail to show how to train a CNN model with SciSharp STACK's TensorFlow.NET. The model performs image classification, the code can be run as-is on either CPU or GPU, and you can train and run inference against your own local image dataset.

What is TensorFlow.NET? TensorFlow.NET is a complete implementation of TensorFlow on the .NET Standard framework, part of the SciSharp STACK: https://github.com/SciSharp
Thanks to TensorFlow.NET's excellent performance on the .NET platform, combined with SciSharp modules such as NumSharp, SharpCV, Pandas.NET, Keras.NET and Matplotlib.Net, it can be used entirely independently of a Python environment. It has already been integrated as an underlying algorithm library by Microsoft's official ML.NET, and Google's TensorFlow website tutorials recommend it to developers worldwide.
Project description

This article uses TensorFlow.NET to build a simple image classification model that performs single-character OCR on printed characters at an industrial site. Raw, large-size images are captured from an industrial camera; OpenCV is used up front for image preprocessing and character segmentation, extracting small single-character crops that are fed into TensorFlow for inference. The per-character results are concatenated in order into the complete string and returned to the main program logic for the subsequent production-line steps.

In practice, if you want to train on your own images, simply replace the training folders with your own pictures in the prescribed layout. Both GPU and CPU are supported. The complete code for this project is on GitHub:
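The final assembly step described above — mapping each segmented crop's predicted class index back to its character label and concatenating in segmentation order — can be sketched in a few lines. This is a minimal language-agnostic sketch; the dictionary below is a hypothetical stand-in for the article's Dict_Label.

```python
# Map per-character class predictions back to labels and join them in order.
# dict_label mirrors the article's Dict_Label (class index -> character);
# the entries here are hypothetical placeholders.
dict_label = {0: "X", 1: "Y", 2: "Z"}

def assemble_string(predicted_indices):
    """Concatenate predicted character labels in left-to-right crop order."""
    return "".join(dict_label[i] for i in predicted_indices)

print(assemble_string([2, 0, 1]))  # prints "ZXY"
```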
Model introduction

The CNN model in this project consists of 2 convolution + pooling layers and 1 fully-connected layer, using the common ReLU activation function — a fairly shallow convolutional neural network. One of its hyperparameters, the learning rate, follows a custom schedule that decays dynamically during training, described in detail below. The shape of each layer is shown in the figure below.

Dataset description

To keep training fast while testing the model, the image dataset is a small excerpt of OCR characters (X, Y, Z). The dataset has the following characteristics:
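The shapes flowing through such a 2× (conv + pool) + FC stack follow directly from the "SAME"-padding rule. The sketch below works through one illustrative configuration; the input size, strides and filter counts are assumptions (the article's actual values live in img_h, img_w, filter_size1, num_filters1, etc.).

```python
# Shape arithmetic for "SAME" padding: output dim = ceil(input dim / stride).
# All concrete sizes below are illustrative, not the article's real settings.
def conv_same(h, w, stride):
    return (-(-h // stride), -(-w // stride))  # ceil division

h, w = 64, 64                  # assumed grayscale input size
h, w = conv_same(h, w, 1)      # conv1, stride 1 -> 64x64
h, w = conv_same(h, w, 2)      # pool1, 2x2, stride 2 -> 32x32
h, w = conv_same(h, w, 1)      # conv2, stride 1 -> 32x32
h, w = conv_same(h, w, 2)      # pool2, 2x2, stride 2 -> 16x16
num_filters2 = 32              # assumed feature maps after conv2
flat = h * w * num_filters2    # features entering the fully-connected layer
print(h, w, flat)              # prints: 16 16 8192
```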
Code walkthrough

Environment setup

Library and namespace references

Main logic structure

Main logic:

Dataset loading

Dataset download and extraction
Dictionary creation

Read the names of the subfolders under the directory and use them as the classification dictionary, for later use in one-hot encoding:

private void FillDictionaryLabel(string DirPath)
{
    string[] str_dir = Directory.GetDirectories(DirPath, "*", SearchOption.TopDirectoryOnly);
    int str_dir_num = str_dir.Length;
    if (str_dir_num > 0)
    {
        Dict_Label = new Dictionary<Int64, string>();
        for (int i = 0; i < str_dir_num; i++)
        {
            string label = (str_dir[i].Replace(DirPath + "\\", "")).Split('\\').First();
            Dict_Label.Add(i, label);
            print(i.ToString() + " : " + label);
        }
        n_classes = Dict_Label.Count;
    }
}
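The same idea — deriving class indices from dataset subfolder names — can be sketched language-agnostically. The folder names below are hypothetical stand-ins for the article's X/Y/Z character classes, and the directory is created on the fly so the sketch is self-contained.

```python
import os
import tempfile

# Build {class_index: label} from the subfolder names of a dataset root,
# mirroring FillDictionaryLabel above. Folder names here are stand-ins.
root = tempfile.mkdtemp()
for name in ["X", "Y", "Z"]:
    os.makedirs(os.path.join(root, name))

# Sorting makes the index assignment deterministic across platforms.
dict_label = {i: d for i, d in enumerate(sorted(os.listdir(root)))}
n_classes = len(dict_label)
print(dict_label, n_classes)  # prints: {0: 'X', 1: 'Y', 2: 'Z'} 3
```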
File list reading and shuffling

Read the train, validation and test file lists from their folders, and shuffle their order randomly.
ArrayFileName_Train = Directory.GetFiles(Name + "\\train", "*.*", SearchOption.AllDirectories);
ArrayLabel_Train = GetLabelArray(ArrayFileName_Train);

ArrayFileName_Validation = Directory.GetFiles(Name + "\\validation", "*.*", SearchOption.AllDirectories);
ArrayLabel_Validation = GetLabelArray(ArrayFileName_Validation);

ArrayFileName_Test = Directory.GetFiles(Name + "\\test", "*.*", SearchOption.AllDirectories);
ArrayLabel_Test = GetLabelArray(ArrayFileName_Test);

private Int64[] GetLabelArray(string[] FilesArray)
{
    Int64[] ArrayLabel = new Int64[FilesArray.Length];
    for (int i = 0; i < ArrayLabel.Length; i++)
    {
        string[] labels = FilesArray[i].Split('\\');
        string label = labels[labels.Length - 2];
        ArrayLabel[i] = Dict_Label.Single(k => k.Value == label).Key;
    }
    return ArrayLabel;
}

public (string[], Int64[]) ShuffleArray(int count, string[] images, Int64[] labels)
{
    ArrayList mylist = new ArrayList();
    string[] new_images = new string[count];
    Int64[] new_labels = new Int64[count];
    Random r = new Random();
    for (int i = 0; i < count; i++)
    {
        mylist.Add(i);
    }

    for (int i = 0; i < count; i++)
    {
        int rand = r.Next(mylist.Count);
        new_images[i] = images[(int)(mylist[rand])];
        new_labels[i] = labels[(int)(mylist[rand])];
        mylist.RemoveAt(rand);
    }
    print("shuffle array list: " + count.ToString());
    return (new_images, new_labels);
}
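ShuffleArray keeps images and labels aligned by drawing the same random index for both arrays. A minimal Python sketch of that pairing-preserving shuffle (file names and labels below are hypothetical):

```python
import random

def shuffle_pair(images, labels, seed=42):
    """Shuffle two parallel lists with one shared permutation so
    each image keeps its own label, as ShuffleArray does."""
    idx = list(range(len(images)))
    random.Random(seed).shuffle(idx)
    return [images[i] for i in idx], [labels[i] for i in idx]

imgs, lbls = shuffle_pair(["a.jpg", "b.jpg", "c.jpg"], [0, 1, 2])
# Pairing survives the shuffle: "a.jpg" is still labeled 0, etc.
print(all(l == ord(i[0]) - ord("a") for i, l in zip(imgs, lbls)))  # prints True
```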
Preloading part of the dataset

The validation and test datasets and their labels are preloaded into NDArray format in one pass.

private void LoadImagesToNDArray()
{
    // Load labels
    y_valid = np.eye(Dict_Label.Count)[new NDArray(ArrayLabel_Validation)];
    y_test = np.eye(Dict_Label.Count)[new NDArray(ArrayLabel_Test)];
    print("Load Labels To NDArray : OK!");

    // Load images
    x_valid = np.zeros(ArrayFileName_Validation.Length, img_h, img_w, n_channels);
    x_test = np.zeros(ArrayFileName_Test.Length, img_h, img_w, n_channels);
    LoadImage(ArrayFileName_Validation, x_valid, "validation");
    LoadImage(ArrayFileName_Test, x_test, "test");
    print("Load Images To NDArray : OK!");
}

private void LoadImage(string[] a, NDArray b, string c)
{
    for (int i = 0; i < a.Length; i++)
    {
        b[i] = ReadTensorFromImageFile(a[i]);
        Console.Write(".");
    }
    Console.WriteLine();
    Console.WriteLine("Load Images To NDArray: " + c);
}

private NDArray ReadTensorFromImageFile(string file_name)
{
    using (var graph = tf.Graph().as_default())
    {
        var file_reader = tf.read_file(file_name, "file_reader");
        var decodeJpeg = tf.image.decode_jpeg(file_reader, channels: n_channels, name: "DecodeJpeg");
        var cast = tf.cast(decodeJpeg, tf.float32);
        var dims_expander = tf.expand_dims(cast, 0);
        var resize = tf.constant(new int[] { img_h, img_w });
        var bilinear = tf.image.resize_bilinear(dims_expander, resize);
        var sub = tf.subtract(bilinear, new float[] { img_mean });
        var normalized = tf.divide(sub, new float[] { img_std });
        using (var sess = tf.Session(graph))
        {
            return sess.run(normalized);
        }
    }
}
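Two of the tricks above reduce to a few lines of NumPy: the np.eye indexing that turns integer labels into one-hot rows, and the (pixel − mean) / std normalization done in ReadTensorFromImageFile. The mean and std values below are illustrative, not the article's img_mean/img_std.

```python
import numpy as np

# One-hot encoding: row i of the identity matrix is the one-hot vector for class i.
labels = np.array([0, 2, 1])           # class indices, as in ArrayLabel_*
one_hot = np.eye(3)[labels]
print(one_hot.tolist())  # [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]

# Normalization: subtract the mean, divide by the std (illustrative constants).
img = np.array([[0.0, 127.5, 255.0]])  # toy pixel values
img_mean, img_std = 127.5, 255.0
normalized = (img - img_mean) / img_std
print(normalized.tolist())             # [[-0.5, 0.0, 0.5]]
```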
Graph construction

Build the static CNN computation graph; the learning rate is decayed once every n epochs.

#region BuildGraph
public Graph BuildGraph()
{
    var graph = new Graph().as_default();

    tf_with(tf.name_scope("Input"), delegate
    {
        x = tf.placeholder(tf.float32, shape: (-1, img_h, img_w, n_channels), name: "X");
        y = tf.placeholder(tf.float32, shape: (-1, n_classes), name: "Y");
    });

    var conv1 = conv_layer(x, filter_size1, num_filters1, stride1, name: "conv1");
    var pool1 = max_pool(conv1, ksize: 2, stride: 2, name: "pool1");
    var conv2 = conv_layer(pool1, filter_size2, num_filters2, stride2, name: "conv2");
    var pool2 = max_pool(conv2, ksize: 2, stride: 2, name: "pool2");
    var layer_flat = flatten_layer(pool2);
    var fc1 = fc_layer(layer_flat, h1, "FC1", use_relu: true);
    var output_logits = fc_layer(fc1, n_classes, "OUT", use_relu: false);

    // Some important parameters are saved with the graph, making them easy to load later
    var img_h_t = tf.constant(img_h, name: "img_h");
    var img_w_t = tf.constant(img_w, name: "img_w");
    var img_mean_t = tf.constant(img_mean, name: "img_mean");
    var img_std_t = tf.constant(img_std, name: "img_std");
    var channels_t = tf.constant(n_channels, name: "img_channels");

    // Learning rate decay
    global_steps = tf.Variable(0, trainable: false);
    learning_rate = tf.Variable(learning_rate_base);

    // Create the graph used to load training images
    tf_with(tf.variable_scope("LoadImage"), delegate
    {
        decodeJpeg = tf.placeholder(tf.@byte, name: "DecodeJpeg");
        var cast = tf.cast(decodeJpeg, tf.float32);
        var dims_expander = tf.expand_dims(cast, 0);
        var resize = tf.constant(new int[] { img_h, img_w });
        var bilinear = tf.image.resize_bilinear(dims_expander, resize);
        var sub = tf.subtract(bilinear, new float[] { img_mean });
        normalized = tf.divide(sub, new float[] { img_std }, name: "normalized");
    });

    tf_with(tf.variable_scope("Train"), delegate
    {
        tf_with(tf.variable_scope("Loss"), delegate
        {
            loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels: y, logits: output_logits), name: "loss");
        });

        tf_with(tf.variable_scope("Optimizer"), delegate
        {
            optimizer = tf.train.AdamOptimizer(learning_rate: learning_rate, name: "Adam-op").minimize(loss, global_step: global_steps);
        });

        tf_with(tf.variable_scope("Accuracy"), delegate
        {
            var correct_prediction = tf.equal(tf.argmax(output_logits, 1), tf.argmax(y, 1), name: "correct_pred");
            accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name: "accuracy");
        });

        tf_with(tf.variable_scope("Prediction"), delegate
        {
            cls_prediction = tf.argmax(output_logits, axis: 1, name: "predictions");
            prob = tf.nn.softmax(output_logits, axis: 1, name: "prob");
        });
    });

    return graph;
}

/// <summary>
/// Create a 2D convolution layer
/// </summary>
/// <param name="x">input from previous layer</param>
/// <param name="filter_size">size of each filter</param>
/// <param name="num_filters">number of filters (or output feature maps)</param>
/// <param name="stride">filter stride</param>
/// <param name="name">layer name</param>
/// <returns>The output array</returns>
private Tensor conv_layer(Tensor x, int filter_size, int num_filters, int stride, string name)
{
    return tf_with(tf.variable_scope(name), delegate
    {
        var num_in_channel = x.shape[x.NDims - 1];
        var shape = new[] { filter_size, filter_size, num_in_channel, num_filters };
        var W = weight_variable("W", shape);
        // tf.summary.histogram("weight", W);
        var b = bias_variable("b", new[] { num_filters });
        // tf.summary.histogram("bias", b);
        var layer = tf.nn.conv2d(x, W, strides: new[] { 1, stride, stride, 1 }, padding: "SAME");
        layer += b;
        return tf.nn.relu(layer);
    });
}

/// <summary>
/// Create a max pooling layer
/// </summary>
/// <param name="x">input to max-pooling layer</param>
/// <param name="ksize">size of the max-pooling filter</param>
/// <param name="stride">stride of the max-pooling filter</param>
/// <param name="name">layer name</param>
/// <returns>The output array</returns>
private Tensor max_pool(Tensor x, int ksize, int stride, string name)
{
    return tf.nn.max_pool(x,
        ksize: new[] { 1, ksize, ksize, 1 },
        strides: new[] { 1, stride, stride, 1 },
        padding: "SAME",
        name: name);
}

/// <summary>
/// Flattens the output of the convolutional layer to be fed into the fully-connected layer
/// </summary>
/// <param name="layer">input array</param>
/// <returns>flattened array</returns>
private Tensor flatten_layer(Tensor layer)
{
    return tf_with(tf.variable_scope("Flatten_layer"), delegate
    {
        var layer_shape = layer.TensorShape;
        var num_features = layer_shape[new Slice(1, 4)].size;
        var layer_flat = tf.reshape(layer, new[] { -1, num_features });
        return layer_flat;
    });
}

/// <summary>
/// Create a weight variable with appropriate initialization
/// </summary>
/// <param name="name"></param>
/// <param name="shape"></param>
/// <returns></returns>
private RefVariable weight_variable(string name, int[] shape)
{
    var initer = tf.truncated_normal_initializer(stddev: 0.01f);
    return tf.get_variable(name, dtype: tf.float32, shape: shape, initializer: initer);
}

/// <summary>
/// Create a bias variable with appropriate initialization
/// </summary>
/// <param name="name"></param>
/// <param name="shape"></param>
/// <returns></returns>
private RefVariable bias_variable(string name, int[] shape)
{
    var initial = tf.constant(0f, shape: shape, dtype: tf.float32);
    return tf.get_variable(name, dtype: tf.float32, initializer: initial);
}

/// <summary>
/// Create a fully-connected layer
/// </summary>
/// <param name="x">input from previous layer</param>
/// <param name="num_units">number of hidden units in the fully-connected layer</param>
/// <param name="name">layer name</param>
/// <param name="use_relu">boolean to add ReLU non-linearity (or not)</param>
/// <returns>The output array</returns>
private Tensor fc_layer(Tensor x, int num_units, string name, bool use_relu = true)
{
    return tf_with(tf.variable_scope(name), delegate
    {
        var in_dim = x.shape[1];
        var W = weight_variable("W_" + name, shape: new[] { in_dim, num_units });
        var b = bias_variable("b_" + name, new[] { num_units });
        var layer = tf.matmul(x, W) + b;
        if (use_relu)
            layer = tf.nn.relu(layer);
        return layer;
    });
}
#endregion
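The custom schedule mentioned above — decaying the learning rate once every n epochs — comes down to a simple staircase formula. A sketch with an illustrative base rate, decay factor and step interval (none of these are the article's actual hyperparameter values):

```python
# Staircase learning-rate decay: lr = base * decay_rate ** (epoch // decay_every).
# base, decay_rate and decay_every are illustrative, not the article's settings.
def decayed_lr(epoch, base=0.001, decay_rate=0.5, decay_every=10):
    """Halve the learning rate every `decay_every` epochs."""
    return base * decay_rate ** (epoch // decay_every)

print([decayed_lr(e) for e in (0, 9, 10, 25)])
```

In the graph above the same effect is achieved by keeping learning_rate as a tf.Variable and reassigning it during the training loop, rather than computing it inside the graph.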
Model training and saving
Prediction on the test set
Summary

This article showed a real application of TensorFlow on .NET in an industrial machine-vision inspection project. Using SciSharp's TensorFlow.NET, we built a simple CNN image classification model consisting of an input layer, convolution and pooling layers, a flatten layer, a fully-connected layer and an output layer — the essential layers of a CNN classifier — and classified real images from an industrial site with high accuracy.

The complete code can be used directly to train on your own dataset. It has been tested extensively in industrial production and runs in either a GPU or a CPU environment; switching the training environment only requires swapping the tensorflow.dll file.

The trained model can be deployed on site in either of two formats: "CKPT + Meta", or frozen into a "PB" file. Model deployment and on-site inference can both run entirely on the .NET platform, integrating seamlessly with existing industrial software. This does away with the old Python approach of standing up a Flask server for data exchange: on-site deployment needs no Python or TensorFlow environment (no need to install a pile of dependencies on the site's existing PCs), and the whole process uses the traditional .NET DLL-reference approach.

.NET developers are welcome to join the TensorFlow.NET community: SciSharp STACK QQ group 461855582, or contact me directly on my personal QQ, 50705111.