National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 謝伊婷 (HSIEH, YI-TING)
Title: 人工注脂牛肉之自動化檢測
Title (English): Automated inspection of artificial marbling beefs
Advisor: 林宏達 (Lin, Hong-Dar)
Committee members: 張嘉寶 (Chang, Chia-Pao); 邱元錫 (Chiu, Yuan-Shyi)
Oral defense date: 2020-06-19
Degree: Master
Institution: Chaoyang University of Technology
Department: Department of Industrial Engineering and Management
Discipline: Engineering
Field: Industrial Engineering
Thesis type: Academic thesis
Year of publication: 2020
Academic year of graduation: 108
Language: Chinese
Number of pages: 116
Keywords (Chinese): 食品詐欺; 人工注脂牛肉檢測; 電腦視覺; 局部二值模式; 紋路特徵; 機器學習
Keywords (English): Food fraud; Inspection of artificial marbling beef; Computer vision; Local Binary Patterns; Texture feature; Machine learning
Usage statistics:
  • Cited by: 1
  • Views: 213
  • Downloads: 0
  • Bookmarks: 0
Fat-injection technology was developed to raise the eating value of lower-grade beef. After fat injection, artificial marbling beef shows the same densely distributed fat that characterizes Wagyu. Because Wagyu is expensive while injected beef is cheap to produce, "Wagyu-grade" artificial marbling beef has appeared on the market: many vendors sell injected beef under the Wagyu name at premium prices for windfall profits, while consumers pay high prices for low-quality beef and may face food-safety risks. This study therefore proposes an artificial-marbling-beef inspection system. When shopping for beef, a consumer captures an image of the meat with a handheld mobile device; the system identifies the surface texture and color, and after the data are transmitted to a server for analysis, the classification result is returned immediately, indicating whether the sample is artificial marbling beef. This protects consumers' rights and reduces food fraud.

After obtaining the beef image captured by the handheld device, this study applies a region-of-interest (ROI) mask to exclude the background and interfering objects, then grids the image into blocks. Local Binary Patterns (LBP) texture features and RGB color features are extracted from each grid block, a Support Vector Machine (SVM) model classifies each block, and a majority vote over the blocks determines the category of the test image. Experiments with 360 training images and 180 test images show that the three beef categories can be distinguished effectively, with an artificial-marbling-beef detection rate (1 − β) of 95.00%, a false alarm rate (α) of 1.67%, a correct classification rate (CR) of 93.89%, and an F1-score of 95.80%.
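The LBP texture feature used above encodes each pixel's 3×3 grayscale neighborhood as an 8-bit code, and the "uniform" variant keeps only patterns with few bit transitions. A minimal sketch of this idea in Python (function names and the neighbor ordering are illustrative assumptions, not the thesis implementation):

```python
def lbp_code(neighborhood):
    """Compute the 8-bit LBP code of a 3x3 grayscale neighborhood.

    Each of the 8 neighbors is compared with the center pixel:
    1 if neighbor >= center, else 0. Here the bits are read clockwise
    from the top-left corner and packed into one byte (0-255).
    """
    center = neighborhood[1][1]
    # Clockwise order: top-left, top, top-right, right, bottom-right,
    # bottom, bottom-left, left.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(offsets):
        if neighborhood[r][c] >= center:
            code |= 1 << (7 - bit)
    return code


def is_uniform(code):
    """A pattern is 'uniform' if its circular 8-bit string contains at
    most two 0/1 transitions; uniform patterns get their own histogram
    bins, and all non-uniform patterns share a single bin."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```

Per block, the histogram of these codes (binned by uniform pattern) forms the texture part of the feature vector that is later combined with RGB color features.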
In order to turn lower-priced beef into higher-priced beef and increase its food value, the technology of injecting fat into beef was invented. The fat-injected beef is called "artificial marbling beef," and it has the same appearance characteristics, with dense fat distribution, as Wagyu beef. Since Wagyu beef is expensive and artificial marbling beef is cheap, "Wagyu-grade" artificial marbling beef has become common in food fraud: many sellers sell artificial marbling beef under the name of Wagyu beef at higher prices to earn substantial profits. Consumers thus pay high prices not only for low-grade beef but also face food-safety concerns. Therefore, this study proposes an automated inspection system for artificial marbling beef, aiming to reduce food fraud and protect consumers' rights.

In this study, after obtaining beef images with handheld devices, we use a region-of-interest (ROI) mask to remove the background and other interfering items from the original image, and then grid the image into many equal-sized blocks. After this preprocessing, we extract Local Binary Patterns (LBP) texture features and RGB color vectors from each grid block. We feed these feature vectors to a Support Vector Machine (SVM) model to classify the grid blocks into three beef categories, and finally take a majority vote to determine the beef category of each image. Using 360 training images and 180 testing images, the experimental results show that the proposed system achieves a 95.00% artificial-marbling-beef detection rate, a 1.67% false alarm rate on non-artificial-marbling beef, a 93.89% correct classification rate (CR), and a 95.80% F1-score.
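The block-wise majority vote and the reported performance indices can be sketched as follows. The label names, the counts in the example, and the reduction to two classes (artificial marbling beef as the positive class) are illustrative assumptions, not the thesis code; the thesis itself evaluates three categories:

```python
from collections import Counter


def classify_image(block_labels):
    """Assign the image the category predicted for the majority of its
    grid blocks (per-block predictions would come from the SVM)."""
    return Counter(block_labels).most_common(1)[0][0]


def metrics(tp, fn, fp, tn):
    """Two-class performance indices of the kind reported in the
    abstract, treating artificial marbling beef as the positive class:
      detection rate (1 - beta)        = TP / (TP + FN)
      false alarm rate (alpha)         = FP / (FP + TN)
      correct classification rate (CR) = (TP + TN) / total
      F1 = 2 * precision * recall / (precision + recall)
    """
    detection = tp / (tp + fn)
    alpha = fp / (fp + tn)
    cr = (tp + tn) / (tp + fn + fp + tn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * detection / (precision + detection)
    return detection, alpha, cr, f1
```

For example, with 60 artificial-marbling and 120 other test images, 57 true positives, 3 misses, and 2 false alarms give a 95.00% detection rate and a 1.67% false alarm rate, matching the figures quoted above.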
Table of Contents
Abstract (Chinese) I
Abstract (English) II
Acknowledgments III
Table of Contents IV
List of Tables VIII
List of Figures X
Chapter 1 Introduction 1
1.1 Foreword 1
1.2 Consumer disputes caused by artificial fat-injected meat 1
1.3 Japanese Wagyu 2
1.4 Artificial marbling beef 4
1.5 Differences between Japanese Wagyu and artificial marbling beef 6
1.6 Research motivation and objectives 7
1.7 Research limitations 7
1.8 Thesis organization 8
Chapter 2 Literature Review 9
2.1 Food fraud 9
2.2 Beef inspection 10
2.3 LBP texture feature analysis 12
2.4 Machine-learning classification models 13
2.4.1 Back-Propagation Network (BPN) 14
2.4.2 Support Vector Machine (SVM) 15
2.4.3 Convolutional Neural Network (CNN) 16
Chapter 3 Principles of the Research Methods 19
3.1 Local Binary Patterns 19
3.1.1 The original LBP computation 19
3.1.2 Uniform LBP patterns 20
3.2 Color spaces 22
3.2.1 The RGB model 22
3.2.2 The HSV model 23
3.2.3 The CIE L*a*b* model 25
3.3 Classifiers 26
3.3.1 Back-propagation neural network 27
3.3.2 Support vector machine 30
3.3.3 Convolutional neural network 36
Chapter 4 Research Procedure and Applied Techniques 40
4.1 Beef image preprocessing 42
4.1.1 Extracting the ROI region of beef images 42
4.2 Feature extraction from beef images 44
4.2.1 RGB color features of beef images 44
4.2.2 HSV color features of beef images 46
4.2.3 CIE L*a*b* color features of beef images 48
4.2.4 Uniform LBP patterns of beef texture 50
4.2.5 Feature selection and combination 55
4.3 Classifier application to beef images 56
4.3.1 SVM classification of beef features 56
4.3.2 BPN classification of beef features 60
4.3.3 CNN classification of gridded beef images 66
4.4 Classification system for artificial marbling beef 68
Chapter 5 Experiments and Analysis 70
5.1 Client-side image capture 71
5.2 Inspection system development 71
5.3 Performance indices for beef category detection 72
5.4 Better parameter settings of the detection method 78
5.4.1 Grid block size 79
5.4.2 LBP feature operator parameters 84
5.4.3 SVM classification model parameters 86
5.4.4 CNN classification model parameters 87
5.4.5 Feature-vector settings for different feature combinations 88
5.5 Performance analysis of the detection method on a large sample 90
5.6 Sensitivity analysis 95
5.6.1 Effect of ROI mask size on detection performance 95
5.6.2 Effect of other noise on detection performance 97
5.6.3 Effect of image brightness changes on detection performance 99
5.6.4 Effect of image capture angle changes on detection performance 101
Chapter 6 Conclusions and Future Work 106
6.1 Conclusions 106
6.2 Future research directions 107
References 109

List of Tables
Table 1 Consumer disputes and news cases involving fat-injected beef in recent years 2
Table 2 Appearance differences between fat-injected beef and Wagyu 6
Table 3 Comparison of this study with the beef-quality-inspection literature 18
Table 4 RGB image and component images of a grid block 46
Table 5 HSV image and component images of a grid block 47
Table 6 CIE L*a*b* image and component images of a grid block 49
Table 7 Similarities and differences among component histograms of the study images under different color models 50
Table 8 Grayscale image of a grid block and its images under each LBP variant 51
Table 9 Counts of each code value in individual image blocks 55
Table 10 Combinations of LBP with various color features used in this study 56
Table 11 SVM parameter settings in this study 59
Table 12 Main parameters of the BPN network 61
Table 13 CNN parameter settings in this study 67
Table 14 Confusion matrix for beef classification 75
Table 15 Precision and recall values for each class and their averages 77
Table 16 Small-sample sizes used in the experiments 78
Table 17 Number of complete grid blocks for each grid block size 80
Table 18 Detection performance of different grid block sizes, per block 80
Table 19 Detection performance of different grid block sizes, per image 82
Table 20 Performance comparison of different LBP texture operators 84
Table 21 Block correct classification rate (B_CR, %) of different SVM parameter combinations 86
Table 22 Detection performance of different CNN parameter settings (per block) 87
Table 23 Detection performance of different CNN parameter settings (per image) 87
Table 24 Detection performance of different feature combinations (per block) 88
Table 25 Better parameter settings of the proposed detection method 90
Table 26 Large-sample sizes used in the experiments 91
Table 27 Advantages and disadvantages of the classifiers used in this study 93
Table 28 Large-sample performance indices of different classifiers 93
Table 29 Efficiency indices of different classifiers on the large image sample 93
Table 30 Performance comparison of different ROI mask sizes 96
Table 31 Performance evaluation of the effect of other noise 98
Table 32 Brightness parameters of the sample images 101
Table 33 Performance evaluation under brightness changes 101
Table 34 Performance evaluation under different tilt angles 104

List of Figures
Figure 1 Japanese Wagyu beef and artificial marbling beef 2
Figure 2 Taiwan's import volume of Japanese Wagyu in the first half of 2018 3
Figure 3 Japanese Wagyu grading standard 4
Figure 4 Processing of fat-injected beef 5
Figure 5 Generation of the original LBP code 20
Figure 6 Different LBP operators 20
Figure 7 Rotation-invariant LBP patterns 21
Figure 8 RGB image and component images 23
Figure 9 The HSV color space 24
Figure 10 HSV image and component images 24
Figure 11 The CIE L*a*b* color space 25
Figure 12 CIE L*a*b* image and component images 26
Figure 13 Basic network architecture of the BPN model 28
Figure 14 Separation by the optimal hyperplane 31
Figure 15 Mapping data into feature space 33
Figure 16 SVM classification 33
Figure 17 SVM with slack variables 35
Figure 18 CNN network architecture 37
Figure 19 The convolution operation 37
Figure 20 Max and min pooling 38
Figure 21 Flowchart of the research method 41
Figure 22 ROI extraction after masking the original image 42
Figure 23 Gridding the ROI image 43
Figure 24 Converting an image block to grayscale 51
Figure 25 Binary conversion in the original LBP 52
Figure 26 Converting a grid block from grayscale to uniform LBP 54
Figure 27 Black frame produced after removing filtered pixels from a grid block 55
Figure 28 The support vector machine model of this study 58
Figure 29 Training flowchart of the neural network in this study 60
Figure 30 BPN architecture for artificial marbling beef detection 62
Figure 31 CNN architecture of this study 66
Figure 32 Schematic of the CNN architecture in this study 68
Figure 33 Flowchart of the beef category detection system 69
Figure 34 Experimental framework of this study 70
Figure 35 Suitable and unsuitable capture ranges for test images 71
Figure 36 User interface developed in this study 72
Figure 37 Per-block confusion-matrix computation for beef classification 75
Figure 38 Per-image confusion-matrix computation for beef classification 75
Figure 39 Case in which different categories have equal block counts 76
Figure 40 Relationship between precision, recall, and the two averages 78
Figure 41 ROI images with different grid block sizes 79
Figure 42 Per-block ROC curves of detection rate versus false alarm rate for different grid block sizes 81
Figure 43 Per-block ROC curves of block correct classification rate versus non-fat-injected-beef false alarm rate for different grid block sizes 81
Figure 44 Per-image ROC curves of detection rate versus false alarm rate for different grid block sizes 82
Figure 45 Per-image ROC curves of image correct classification rate versus non-fat-injected-beef false alarm rate for different grid block sizes 83
Figure 46 Correct classification rates of different grid block sizes, per block and per image 83
Figure 47 Per-block ROC curves of detection rate versus false alarm rate for different LBP operators 85
Figure 48 Per-block ROC curves of block correct classification rate versus non-fat-injected-beef false alarm rate for different LBP operators 85
Figure 49 Block correct classification rate (B_CR, %) of different SVM parameter combinations 86
Figure 50 Per-block ROC curves of detection rate versus false alarm rate for different feature combinations 89
Figure 51 Per-block ROC curves of block correct classification rate versus non-fat-injected-beef false alarm rate for different feature combinations 89
Figure 52 Result images of different classifiers 92
Figure 53 Per-block performance indices of the large-sample experiments 94
Figure 54 Per-image performance indices of the large-sample experiments 94
Figure 55 Result images for different ROI mask sizes 96
Figure 56 Performance indices for different mask sizes 97
Figure 57 Test images containing noise 97
Figure 58 Result images under other noise 98
Figure 59 Performance indices under other noise 99
Figure 60 Result images under brightness changes 100
Figure 61 Performance indices under different brightness levels 101
Figure 62 Tilt in each direction for sample images without the mask 102
Figure 63 Tilt at different angles for sample images with the mask 102
Figure 64 Result images under different tilt angles 103
Figure 65 Performance evaluation for each tilt angle 104

[1]Abuzneid, M. A., and Mahmood, A., “Enhanced human face recognition using LBPH descriptor, multi-KNN, and back-propagation neural network,” IEEE Access, Vol. 6, pp. 20641-20651, (2018).
[2]Akram, M. W., Li, G., Jin, Y., Chen, X., Zhu, C., Zhao, X., Khaliq, A., Faheem, M., and Ahmad, A., “CNN based automatic detection of photovoltaic cell defects in electroluminescence images,” Energy, Vol. 189, 116319, (2019).
[3]Arsalane, A., El Barbri, N., Tabyaoui, A., Klilou, A., Rhofir, K., and Halimi, A. “An embedded system based on DSP platform and PCA-SVM algorithms for rapid beef meat freshness prediction and identification,” Computers and Electronics in Agriculture, Vol. 152, pp. 385-392, (2018).
[4]Backes, A. R., and Junior, J. J. d. M. S., “LBP maps for improving fractal based texture classification,” Neurocomputing, Vol. 266, pp. 1-7, (2017).
[6]Banik, P.P., Saha, R., and Kim, K.-D., “An Automatic Nucleus Segmentation and CNN Model based Classification Method of White Blood Cell,” Expert Systems with Applications, Vol. 149, 113211, (2020).
[7]Bastidas-Rodriguez, M.X., Prieto-Ortiz, F.A., Espejo, E., “Fractographic classification in metallic materials by using computer vision,” Engineering Failure Analysis, Vol. 59, pp. 237-252, (2016).
[8]Bhatt, P., Rusiya, P., and Birchha, V., “WAGBIR: Wavelet and Gabor Based Image Retrieval Technique for the Spatial-Color and Texture Feature Extraction Using BPN in Multimedia Database,” 2014 International Conference on Computational Intelligence and Communication Network, pp. 284-288, (2014).
[9]Boubchir, L., and Fadili, J.M., “Multivariate statistical modeling of images with the curvelet transform,” IEEE Xplore, No. 1581046, pp. 747-750, (2005).
[10]Cao, Y., Zheng, K., Jiang, J., Wu, J., Shi, F., Song, X., and Jiang, Y., “A novel method to detect meat adulteration by recombinase polymerase amplification and SYBR green I,” Food Chemistry, Vol. 266, pp. 73-78 (2018).
[11]Chen, K., and Qin, C., “Segmentation of beef marbling based on vision threshold,” Computers and Electronics in Agriculture, Vol. 62, No. 2, pp. 223-230, (2008).
[12]Chen, S., Xiong, J., Guo, W., Bu, R., Zheng, Z., Chen, Y., Yang, Z., and Lin, R., “Colored rice quality inspection system using machine vision,” Journal of Cereal Science, Vol. 88, pp. 87-95, (2019).
[13]Chen, X., Xun, Y., Li, W., and Zhang, J., “Combining discriminant analysis and neural networks for corn variety identification,” Computers and Electronics in Agriculture, Vol. 71, Sup. 1, pp. s48-s53, (2010).
[14]Cheng, W., Cheng, J., Sun, D., and Pu, H., “Marbling Analysis for Evaluating Meat Quality: Methods and Techniques,” Comprehensive Reviews in Food Science and Food Safety, Vol. 14, No. 5, (2015).
[15]Ciocca, G., Napoletano, P., and Schettini, R., “CNN-based features for retrieval and classification of food images,” Computer Vision and Image Understanding, Vol. 176-177, pp. 70-77, (2018).
[16]Cortes, C., and Vapnik, V., “Support-vector networks,” Machine Learning, Vol. 20, pp. 273-297, (1995).
[17]ElMasry, G., Sun, D.-W., Allen, P., “Near-infrared hyperspectral imaging for predicting colour, pH and tenderness of fresh beef,” Journal of Food Engineering, Vol. 110, No. 1, pp. 127-140, (2012).
[18]Guellis, C., Valério, D.C., Bessegato, G.G., Boroski, M., Dragunski, J.C., and Lindino, C.A. “Non-targeted method to detect honey adulteration: Combination of electrochemical and spectrophotometric responses with principal component analysis,” Journal of Food Composition and Analysis, Vol. 89, 103446, (2020).
[19]Hao, X., and Liang, H., “A multi-class support vector machine real-time detection system for surface damage of conveyor belts based on visual saliency,” Measurement, Vol. 146, pp. 125-132, (2019).
[20]Hosseinpour, S., Ilkhchi, A.H., and Aghbashlo, M., “An intelligent machine vision-based smartphone app for beef quality evaluation,” Journal of Food Engineering, Vol. 248, pp. 9-22, (2019).
[21]Jackman, P., Sun, D.-W., Allen, P., Brandon, K., White, A., “Correlation of consumer assessment of longissimus dorsi beef palatability with image colour, marbling and surface texture features,” Meat Science, Vol. 84, No. 3, pp. 564-568 (2010).
[22]Jackman, P., Sun, D.-W., and Allen, P., “Prediction of beef palatability from colour, marbling and surface texture features of longissimus dorsi,” Journal of Food Engineering, Vol. 96, No. 1, pp. 151-165, (2010).
[23]Kalakech, M., Porebski, A., Vandenbroucke,N,. and Hamad, D., “A new LBP histogram selection score for color texture classification,” 2015 International Conference on Image Processing Theory, Tools and Applications (2015).
[24]Karthik, R., Hariharan, M., Anand, S., Mathikshara, P., Johnson, A., and Menaka, R. “Attention embedded residual CNN for disease detection in tomato leaves,” Applied Soft Computing, Vol. 86, 105933, (2020).
[25]Kozłowski, M., Górecki, P., and Szczypiński, P., “Varietal classification of barley by convolutional neural networks,” Biosystems Engineering, Vol. 184, pp. 155-165, (2019).
[26]Kumar, S.S., Abraham, D.M., Jahanshahi, M.R., Iseley, T., and Starr, J., “Automated defect classification in sewer closed circuit television inspections using deep convolutional neural networks,” Automation in Construction, Vol. 91, pp. 237-283, (2018).
[27]Le, V. N. H., Apopei, B., and Alameh, K., “Effective plant discrimination based on the combination of local binary pattern operators and multiclass support vector machine methods,” Information Processing in Agriculture, (2018).
[28]Lee, B., Yoon, S., and Choi, Y.M., “Comparison of marbling fleck characteristics between beef marbling grades and its effect on sensory quality characteristics in high-marbled Hanwoo steer,” Meat Science, Vol. 152, pp. 109-115, (2019).
[29]Lee, Y., Lee, B., Kim, H.K., Yun, Y.K., Kang, S., Kim, K.T., Kim, B.D., Kim, E.J., and Choi, Y.M., “Sensory quality characteristics with different beef quality grades and surface texture features assessed by dented area and firmness, and the relation to muscle fiber and bundle characteristics,” Meat Science, Vol. 145, pp. 195-201, (2015).
[30]Lei, Y., Zhao, X., Wang, G., Yu, K., Guo, W., “A novel approach for cirrhosis recognition via improved LBP algorithm and dictionary learning,” Biomedical Signal Processing and Control, Vol. 38, pp. 281-292, (2017).
[31]Li, J., Tan, J., and Shatadal, P., “Classification of tough and tender beef by image texture analysis,” Meat Science, Vol. 57, No. 4, pp. 341-346, (2001).
[32]Li, J., Tan, J., Martz, F.A., and Heymann, H., “Image texture features as indicators of beef tenderness,” Meat Science, Vol. 53, No. 1, pp. 17-22, (1999).
[33]Li, T.-S., “Applying wavelets transform, rough set theory and support vector machine for copper clad laminate defects classification,” Expert Systems with Applications, Vol. 36, No. 3 pp. 5822-5829, (2009).
[34]Martínez, S.S., Ortega Vázquez, C., Gámez García, J., and Gómez Ortega, J., “Quality inspection of machined metal parts using an image fusion technique,” Measurement, Vol. 111, pp. 374-383, (2017).
[35]Mohannad, A., and Ausif, M., “Performance improvement for 2-D face recognition using multi-classifier and BPN,” 2016 IEEE Long Island Systems, Applications and Technology Conference (LISAT), pp. 1-7, (2016).
[36]Momeny, M., Jahanbakhshi, A., Jafarnezhad, K., and Zhang, Y.-D., “Accurate classification of cherry fruit using deep CNN based on hybrid pooling approach,” Postharvest Biology and Technology, Vol. 166, 111204, (2020).
[37]Muhammad, G., “Date fruits classification using texture descriptors and shape-size features,” Engineering Applications of Artificial Intelligence, Vol. 37, pp. 361-367, (2015).
[38]Ojala, T., Pietikainen, M., and Harwood, D., “Performance evaluation of texture measures with classification based on Kullback discrimination of distributions,” Proceedings of 12th International Conference on Pattern Recognition, Vol. 1, pp. 582-585, (1994).
[39]Orrillo, I, Cruz-Tirado, J.P., Cardenas, A., Oruna, M., Carnero, A., Barbin, D.F., and Siche, R., “Hyperspectral imaging as a powerful tool for identification of papaya seeds in black pepper,” Food Control, Vol. 101, pp. 42-52 (2019).
[40]Parikh, H., Patel, S., and Patel, V., “Classification of SAR and PolSAR images using deep learning: a review,” International Journal of Image and Data Fusion, Vol. 11, No. 1, pp. 1-32, (2020).
[41]Peng, G.J., Chang, M.H., Fang, M., Liao, C.D., Tsai, C.F., Tseng, S.H., Kao, Y.M., Chou and Cheng, H.F., “Incidents of major food adulteration in Taiwan between 2011 and 2015,” Food Control, Vol. 72, pp. 145-152, (2017).
[42]Ruth, S.M.V., Huisman, W., and Luning, P.A., “Food fraud vulnerability and its key factors,” Trends in Food Science & Technology, Vol. 67, pp. 70-75, (2017).
[43]Samanta, B., Al-Balushi, K.R., and Al-Araimi, S.A., “Artificial neural networks and support vector machines with genetic algorithm for bearing fault detection,” Engineering Applications of Artificial Intelligence, Vol. 16, No. 7-8, pp. 657-665, (2003).
[44]Sezer, B., Apaydin, H., Bilge, G., and Boyaci, I.H., “Coffee arabica adulteration: Detection of wheat, corn and chickpea,” Food Chemistry, Vol. 264, pp. 142-148, (2018).
[45]Shanmugamani, R., Sadique, M.F., and Ramamoorthy, B., “Detection and classification of surface defects of gun barrels using computer vision and machine learning,” Measurement, Vol. 60, pp. 222-230, (2015).
[46]Shiranita, K., Hayashi, K., Otsubo, A., Miyajima, T., and Takiyama, R., “Determination of meat quality by image processing and neural network techniques,” Ninth IEEE International Conference on Fuzzy Systems. FUZZ-IEEE 2000, Vol. 2, pp. 989-992, (2000).
[47]Shiranita, K., Hayashi, K., Otsubo, A., Miyajima, T., and Takiyama, R., “Grading meat quality by image processing,” Pattern Recognition, Vol. 33, No. 1, pp. 97-104, (2000).
[48]Singh, P., Roy, P. P., and Raman, B., “Writer identification using texture features: A comparative study,” Computers & Electrical Engineering, Vol. 71, pp. 1-12, (2018).
[49]Tosin, A. T., Morufat, A. T., Omotayo, O. M., Bolanle, W. W., Olusayo, O. E., and Olatunde, O. S., “Curvelet Transform-Local Binary Pattern Feature Extraction Technique for Mass Detection and Classification in Digital Mammogram,” Current Journal of Applied Science and Technology, Vol. 28, No. 3, pp. 1-15, (2018).
[50]Vapnik, V., and Lerner, A., “Pattern recognition using generalized portrait method,” Automation and Remote Control, Vol. 24, No. 6, pp. 774-780, (1963).
[51]Velásquez, L, Cruz-Tirado, J.P., Siche, R., and Quevedo R., “An application based on the decision tree to classify the marbling of beef by hyperspectral imaging,” Meat Science, Vol. 133, pp. 43-50, (2017).
[52]Wang, J., Fu, P., and Gao, R.X., “Machine vision intelligence for product defect inspection based on deep learning and Hough transform,” Journal of Manufacturing Systems, Vol. 55, pp. 52-60, (2019).
[53]Wang, Y., Shi, C., Wang, C., and Xiao, B., “Ground-based cloud classification by learning stable local binary patterns,” Atmospheric Research, Vol. 207, pp. 74-89, (2018).
[54]Xiu, C. and Klein, K.K., “Melamine in milk products in China: Examining the factors that led to deliberate use of the contaminant,” Food Policy, Vol. 35, No. 5, pp. 463-470, (2010).
[55]Yang, H., Wang, X., Wang, Q., and Zhang, X., “LS-SVM based image segmentation using color and texture information,” Journal of Visual Communication and Image Representation, Vol. 23, No.7, pp. 1095-1112, (2012).
[56]Yu, M., Gu, D., and Wang, Y., “Histogram similarity measure using variable bin size distance,” Computer Vision and Image Understanding, Vol. 114, No. 8, pp. 981-989, (2010).
[57]Zhang, X., Ding, Y., Lv, Y., Shi, A., and Liang, R., “A vision inspection system for the surface defects of strongly reflected metal based on multi-class SVM,” Expert Systems with Applications, Vol. 38, No. 5, pp. 5930-5939, (2011).
[58]Japan Meat Grading Association, http://www.jmga.or.jp/standrad/beef/
[59]Nikkei BP, “Fat-injected beef,” https://style.nikkei.com/article/DGXNASFK0805T_Y3A101C1000000
[60]梁育維, “A bark image recognition system based on texture and color,” Master's thesis, Department of Information Management, Kun Shan University, (2017).
[61]莊豐閣, “Applying back-propagation neural networks to BGA shape defect inspection and measurement,” Master's thesis, Department of Mechanical Engineering, Lunghwa University of Science and Technology, (2006).
[62]許哲榮, “Applying image segmentation with back-propagation neural networks to optical inspection of printed circuit boards,” Master's thesis, Department of Mechanical Engineering, Tatung University, (2007).
[63]逍遙文工作室, “L*a*b* color space,” https://cg2010studio.com/2011/11/13/lab-%E8%89%B2%E5%BD%A9%E7%A9%BA%E9%96%93-lab-color-space/
[64]陳孟佐, “A study of texture image retrieval and classification based on multiple local features,” Master's thesis, Department of Computer Science and Information Engineering, I-Shou University, (2012).
[65]Wikipedia, “HSL and HSV color spaces,” https://zh.wikipedia.org/wiki/HSL%E5%92%8CHSV%E8%89%B2%E5%BD%A9%E7%A9%BA%E9%97%B4
[66]Wikipedia, “Food safety,” https://zh.wikipedia.org/wiki/%E9%A3%9F%E5%93%81%E5%AE%89%E5%85%A8
[67]鍾榮倫, “Automated beef quality inspection and grading system,” Master's thesis, Department of Industrial Engineering and Management, Chaoyang University of Technology, (2018).

Electronic full text (publicly available online from 2025-06-19)