National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)


Detailed Record

Author: 郭峻豪
Author (English): Chun-Hao Kuo
Title: 大數據分析之探討-以平板顯示器製程參數優化為例
Title (English): On Big Data Analysis - A Case Study on the Parameters of Flat Panel Display Optimization Process
Advisor: 楊竹星
Advisor (English): Chu-Sing Yang
Degree: Master's
Institution: National Cheng Kung University (國立成功大學)
Department: Department of Electrical Engineering (in-service master's program)
Discipline: Engineering
Field: Electrical and Information Engineering
Thesis type: Academic thesis
Year of publication: 2019
Academic year of graduation: 107
Language: Chinese
Pages: 54
Keywords (Chinese): 大數據分析, 機器學習, 工業4.0, 良率提升
Keywords (English): Big Data Analysis, Machine Learning, Industry 4.0, Yield
Usage statistics:
  • Cited: 0
  • Views: 158
  • Rating: (none)
  • Downloads: 0
  • Bookmarked: 0
Taiwan's flat panel display manufacturing is internationally renowned; its outstanding manufacturing capability produces high-quality panels in the shortest possible time. In flat panel display production, process yield is the quickest indicator of a manufacturer's performance: it directly determines the company's profit and reputation, as well as customers' trust in and acceptance of the company. When a quality abnormality occurs on the production line, process and equipment engineers must immediately investigate the root cause, using panel teardown analysis and process management (SPC and process checkpoints) to find countermeasures, and must verify the problem in-line to confirm whether it stems from a single cause or from a combination of factors. These are all real-time remedial measures; even so, some defective products still reach customers, leading to complaints, dissatisfaction with the company, and losses.
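The SPC-based process management mentioned above can be illustrated with a minimal Shewhart-style individuals chart, which flags measurements outside the mean ± 3-sigma control limits. This is only a sketch of the general technique; the sample data and parameter values below are invented for illustration and do not come from the thesis.

```python
# Minimal Shewhart-style SPC check: compute 3-sigma control limits from
# a stable baseline run, then flag out-of-control measurements.
from statistics import mean, stdev

def control_limits(baseline):
    """Centre line and 3-sigma control limits from in-control data."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m, m + 3 * s

def out_of_control(measurements, lcl, ucl):
    """Indices of measurements outside the control limits."""
    return [i for i, x in enumerate(measurements) if x < lcl or x > ucl]

# Illustrative numbers: a stable baseline, then a lot with one drifted point.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lcl, cl, ucl = control_limits(baseline)
alarms = out_of_control([10.0, 10.1, 12.5, 9.9], lcl, ucl)
print(alarms)  # → [2]
```

In practice each alarm would trigger exactly the kind of root-cause investigation the abstract describes (teardown analysis, process checkpoints, in-line verification).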

This study applies the concept of Knowledge Discovery from Data (KDD) [1] to build a big data analysis workflow. The workflow covers problem definition and data preprocessing (integration, filtering, and extraction), uses statistical methods to identify the process variation factors, builds models with machine learning (Support Vector Regression (SVR) and Multivariate Linear Regression (MLR)), and selects the more suitable model to predict the optimized process parameters; the predicted parameters are then introduced into a small-volume production run to confirm their effect.
The study thus spans data preprocessing, modeling, and the evaluation of the models' predictive quality.
After the experiment identified the optimal parameters, secondary parameters were adopted instead to accommodate the production factory's cost constraints, yet yield still improved by 66%.
Taiwan's flat panel display manufacturing is internationally renowned; its outstanding manufacturing capability produces high-quality panels in the shortest possible time. In flat panel display production, process yield is the quickest indicator of a manufacturer's performance, directly determining the company's profit and reputation as well as customers' trust and acceptance. When products on the production line show quality abnormalities, process and equipment engineers must immediately investigate the root cause, using teardown analysis and process management to find countermeasures, and must verify in-line whether the problem has a single cause or compound causes; defective products that still reach customers lead to complaints, dissatisfaction, and losses for the company. This study therefore applies Knowledge Discovery from Data (KDD) [1] to establish a big data analysis workflow that quickly identifies the variation factors and, combined with machine learning, covers data preprocessing, modeling, and the evaluation of model quality. After the experiment found the optimal parameters, secondary parameters were adopted because of the factory's cost considerations, and yield still improved by 66%.
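The model-building-and-selection step summarized in the abstract (fit MLR and SVR, pick the better model, then search the process window for the predicted-best parameters) can be sketched as follows. This is a minimal illustration assuming scikit-learn; the parameter names, ranges, and synthetic yield response are hypothetical stand-ins, not data from the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Hypothetical process window: two parameters with invented ranges.
rng = np.random.default_rng(0)
X = rng.uniform(low=[150.0, 30.0], high=[250.0, 90.0], size=(200, 2))
# Synthetic "yield" response with a peak inside the window, plus noise.
y = (95.0 - 0.01 * (X[:, 0] - 210.0) ** 2
          - 0.02 * (X[:, 1] - 55.0) ** 2
          + rng.normal(0.0, 0.5, size=200))

# Fit the two candidate models and compare them by cross-validated R^2,
# mirroring the "build models, select the more suitable one" step.
models = {"MLR": LinearRegression(), "SVR": SVR(kernel="rbf", C=100.0)}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
best_name = max(scores, key=scores.get)
best = models[best_name].fit(X, y)

# Grid-search the window for the setting with the highest predicted yield.
grid = np.array([(t, p)
                 for t in np.linspace(150.0, 250.0, 41)
                 for p in np.linspace(30.0, 90.0, 31)])
opt = grid[np.argmax(best.predict(grid))]
print(best_name, opt)
```

In the thesis, the chosen model's predicted optimum is then confirmed with a small-volume production run; here the grid search merely stands in for that prediction step.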
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
1. Introduction
1.1 Research Background
1.2 Research Motivation
1.3 Research Objectives
1.4 Thesis Organization
2. Related Work
2.1 What Is Big Data
2.2 Foundations of Big Data Analysis
2.3 Steps of Big Data Analysis
2.4 Data Mining Categories and Tools
2.5 Multiple Regression Analysis Forecasting
2.6 Support Vector Regression Forecasting
2.7 Industry 4.0
3. Methodology
3.1 Case Process Overview
3.2 Defects in the Flat Panel Display Cell Process
3.3 Research Framework
4. Experiments and Verification
4.1 Linear Relationships and Correlation Coefficients between Independent and Dependent Variables
4.2 Multivariate Linear Regression (MLR) Machine Learning
4.3 Support Vector Regression (SVR) Machine Learning
4.4 Evaluation Metrics and Verification
5. Conclusion and Future Work
References
[1] Braun, Peter; Cuzzocrea, Alfredo; Leung, Carson K.; Pazdor, Adam G. M.; Tran, Kimberly (2016). Knowledge Discovery from Social Graph Data. Procedia Computer Science, 96: 682-691.
[2] Shewhart, Walter Andrew (1931). Economic Control of Quality of Manufactured Product. New York: D. Van Nostrand Company, p. 501.
[3] Taylor, Frederick (1911). The Principles of Scientific Management. Harper & Brothers, p. 7.
[4] Laney, Douglas (2001). 3D Data Management: Controlling Data Volume, Velocity and Variety. Gartner.
[5] Express Scripts Chief Data Officer (CDO), speech at the Big Data Innovation Summit, 2013.
[6] Han, Jiawei; Kamber, Micheline; Pei, Jian (2011). Data Mining: Concepts and Techniques (3rd ed.). Morgan Kaufmann. ISBN 978-0-12-381479-1.
[7] Fayyad, Usama; Piatetsky-Shapiro, Gregory; Smyth, Padhraic (1996). From Data Mining to Knowledge Discovery in Databases.
[8] Berry, M. J. and Linoff, G. (1997). Data Mining Techniques: For Marketing, Sales, and Customer Support. New York: John Wiley and Sons.
[9] Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall. ISBN 978-0-13-604259-4.
[10] Mohri, Mehryar; Rostamizadeh, Afshin; Talwalkar, Ameet (2012). Foundations of Machine Learning. The MIT Press. ISBN 978-0-262-01825-8.
[11] Hinton, Geoffrey; Sejnowski, Terrence (1999). Unsupervised Learning: Foundations of Neural Computation. MIT Press. ISBN 978-0-262-58168-4.
[12] Jordan, Michael I.; Bishop, Christopher M. (2004). Neural Networks. In Allen B. Tucker (ed.), Computer Science Handbook (2nd ed., Section VII: Intelligent Systems). Boca Raton, FL: Chapman & Hall/CRC Press. ISBN 1-58488-360-X.
[13] Berenson, Mark L.; Levine, David M.; Krehbiel, Timothy C. Basic Business Statistics.
[14] Andrews, D. F. (1974). A robust method for multiple linear regression. Technometrics, 16: 523-551.
[15] Rencher, Alvin C.; Christensen, William F. (2012). Multivariate Regression (Chapter 10). In Methods of Multivariate Analysis (3rd ed.), Wiley Series in Probability and Statistics. John Wiley & Sons. ISBN 978-1-118-39167-9.
[16] Seal, Hilary L. (1967). The historical development of the Gauss linear model. Biometrika.
[17] Cortes, C.; Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3): 273-297.
[18] Cristianini, N.; Shawe-Taylor, J. (2000). An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press.
[19] Gunn, S. R. (1998). Support vector machines for classification and regression. ISIS Technical Report, 14.
[20] Vapnik, Vladimir N. (1995). The Nature of Statistical Learning Theory. New York: Springer.
[21] Karush, W. (1939). Minima of Functions of Several Variables with Inequalities as Side Constraints. Department of Mathematics, University of Chicago.
[22] Kuhn, H.; Tucker, A. (1951). Nonlinear programming. In Proceedings of the 2nd Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, 481-492.
[23] Fletcher, R. (1987). Practical Methods of Optimization. John Wiley and Sons.
[24] Smola, A.; Vapnik, V. (1997). Support vector regression machines. Advances in Neural Information Processing Systems, 9: 155-161.
[25] Vapnik, Vladimir; Golowich, Steven E.; Smola, Alex (1997). Support vector method for function approximation, regression estimation, and signal processing. Neural Information Processing Systems Conference.
[26] Smola, Alex J.; Schölkopf, Bernhard (2003). A tutorial on support vector regression. Statistics and Computing, Springer.
[27] Müller, K.-R.; Smola, A. J.; Rätsch, G.; Schölkopf, B.; Kohlmorgen, J.; Vapnik, V. (1999). Using support vector machines for time series prediction. In Advances in Kernel Methods: Support Vector Learning, MIT Press, Cambridge, MA, 243-254.
[28] Cherkassky, V.; Ma, Y. (2004). Practical selection of SVM parameters and noise estimation for SVM regression. Neural Networks, 17(1): 113-126.
[29] Keerthi, S. S.; Lin, C.-J. (2003). Asymptotic behaviors of support vector machines with Gaussian kernel. Neural Computation, 15(7): 1667-1689.
[30] Draper, N. R.; Smith, H. (1998). Applied Regression Analysis. Wiley-Interscience. ISBN 0-471-17082-8.
[31] Devore, Jay L. (2011). Probability and Statistics for Engineering and the Sciences (8th ed.). Boston, MA: Cengage Learning, 508-510. ISBN 0-538-73352-7.
[32] 王怡惠 (2015). 〈從工業 4.0 看我國生產力 4.0 之挑戰〉. 臺灣經濟研究月刊, 38(8): 111-119.
[33] 林顯明 (2015). 各國工業4.0策略與發展:對台灣的機會與挑戰. 中華經濟研究院 / 國立中山大學政治學研究所. http://web.wtocenter.org.tw/Mobile/page.aspx?pid=267863&nid=126
[34] 簡禎富; 林國義; 許鉅秉; 吳政鴻 (2016). 「回顧與前瞻:從工業 3.0 到工業 3.5」. 管理學報, 33(1): 87-103. 台北, 台灣.