臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)
Detailed Record
Thesis Information
Author: 楊凱崴 (Yang, Kai-Wei)
Title: 基於度量之元學習於少樣本問題之研究
Title (English): Novel Metric-based Meta Learning Algorithms for Few-shot Learning
Advisor: 劉建良 (Liu, Chien-Liang)
Oral defense committee: 陳勝一 (Chen, Sheng-I), 巫木誠 (Wu, Muh-Cherng)
Oral defense date: 2019-08-05
Degree: Master
Institution: 國立交通大學 (National Chiao Tung University)
Department: 工業工程與管理系所 (Industrial Engineering and Management)
Discipline: Engineering
Academic field: Industrial Engineering
Thesis type: Academic thesis
Publication year: 2019
Graduation academic year: 108
Language: English
Number of pages: 45
Keywords (Chinese): 少樣本學習, 元學習, 度量學習, 原型網絡
Keywords (English): few-shot learning, meta learning, metric learning, prototypical networks
Usage statistics: cited 0 times; 649 views; 0 downloads; bookmarked 0 times
Abstract
Deep learning has been shown to substantially improve prediction accuracy over traditional machine learning methods in fields such as text mining and computer vision, because it integrates feature learning and model prediction within a single network architecture: deep neural networks learn abstract feature representations directly from the data, which in turn improves the accuracy of downstream classifiers. Although deep learning is now widely applied, it requires a large amount of labeled data to train a model. Because feature representations are typically learned through deep neural network architectures with an enormous number of parameters, it is difficult to train an accurate model when labeled samples are scarce; the model overfits, fitting the training data well but failing to generalize to the testing data.
Transfer learning can alleviate this problem to some extent, but it requires a certain degree of relatedness between the source domain and the target domain, as well as sufficient training data in the source domain, so that a pre-trained model can be fine-tuned with a small amount of target-domain training data; its applicability therefore remains quite limited. Few-shot learning has attracted considerable attention in the deep learning community, because in many practical applications labeled data are extremely scarce and hard to obtain, sometimes with only a single example per class (one-shot learning), and ordinary models are not suited to such problems. Metric learning and meta learning are the approaches most widely applied to few-shot problems today.
This study proposes two new metric-based meta-learning algorithms for few-shot problems, inspired by Prototypical Networks. Prototypical Networks extract features from the data, project them into a suitable vector space, and average the vectors of each class to obtain the class mean, which serves as that class's "prototype" for subsequent training and classification. The first proposed method does not represent each class by its class mean; instead, it computes similarities between the query point and every individual data point. The second method combines the "prototype" and "data point" views when computing similarities, followed by the usual training and prediction procedures.
Abstract (English)
Deep learning has been proven to significantly improve prediction accuracy over traditional machine learning methods in areas such as text mining, computer vision, and signal processing, because it integrates feature learning and model prediction into a single network architecture, making it possible to learn abstract feature representations directly from data. However, deep learning requires a large amount of training data, since it relies on deep neural networks to learn feature representations and the number of model parameters is typically enormous. Transfer learning can partially address this problem, but it requires a certain degree of relatedness between the source domain and the target domain, as well as sufficient training data from the source domain to learn the pre-trained model. Few-shot learning is another approach to the aforementioned problem and has attracted much attention in recent years. This thesis proposes two novel metric-based meta-learning algorithms for the few-shot learning problem. The proposed methods combine meta learning and metric learning, providing a basis for learning a model that can handle problems in which only a few samples are available for each class and can make predictions for data from unseen classes. We conduct experiments comparing the proposed methods with several alternatives to evaluate their effectiveness.
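The abstract contrasts Prototypical Networks, which represent each class by its class mean ("prototype"), with the proposed point-based and combined views, which compare a query against individual support examples. The NumPy sketch below illustrates that contrast on a toy episode; it is not the thesis's implementation. The function names, the mean-distance aggregation in point_scores, and the additive combination of the two score vectors are assumptions made only for illustration, and the embeddings are random stand-ins for the output of a learned embedding network.

```python
import numpy as np

def prototype_scores(query, support, support_labels, n_classes):
    """Prototypical-Networks-style scoring: each class is summarized by the
    mean of its support embeddings, and the query is scored by the negative
    squared Euclidean distance to that prototype."""
    scores = np.empty(n_classes)
    for c in range(n_classes):
        prototype = support[support_labels == c].mean(axis=0)  # class mean
        scores[c] = -np.sum((query - prototype) ** 2)
    return scores

def point_scores(query, support, support_labels, n_classes):
    """Point-based scoring (illustrative assumption): instead of collapsing a
    class to its mean, compare the query with every support embedding of the
    class and aggregate the distances."""
    scores = np.empty(n_classes)
    for c in range(n_classes):
        dists = np.sum((support[support_labels == c] - query) ** 2, axis=1)
        scores[c] = -dists.mean()  # aggregation choice is an assumption
    return scores

# Toy 2-way 5-shot episode with 4-dimensional embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(10, 4))        # 10 support embeddings
support_labels = np.repeat([0, 1], 5)     # 5 per class
query = rng.normal(size=4)                # one query embedding

proto = prototype_scores(query, support, support_labels, 2)
point = point_scores(query, support, support_labels, 2)
combined = proto + point                  # combining both views (assumption)
print(int(np.argmax(combined)))           # predicted class index
```

In the thesis, the embeddings would come from an embedding network trained episodically, with the scores feeding a softmax over the classes of each episode; the random vectors here are used only to keep the sketch self-contained.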
Table of Contents
1 Introduction 1
1.1 Background and Motivation 1
1.2 Research Aims 2
2 Related Work 4
2.1 Introduction of Few-shot Classification 4
2.2 Deep Metric Learning 4
2.2.1 Siamese Network 5
2.2.2 Triplet Network 6
2.3 Meta Learning 8
2.3.1 General Architecture of Meta Learning 9
2.3.2 Metric-based Meta Learning Algorithms 12
3 Methodology 17
3.1 Specification of Meta Learning 17
3.2 Specification of Metric-based Meta Learning 18
3.3 The Proposed Methods 19
3.3.1 Algorithm 19
3.3.2 Connection to Prototypical Networks 21
3.4 Model Design 22
3.4.1 Episode Composition 22
3.4.2 Model Architecture 23
4 Experiments 27
4.1 Dataset 27
4.2 Evaluation Metric 27
4.3 Experimental Settings 28
4.4 Experimental Procedure 31
4.5 Experimental Results 32
5 Discussion 36
5.1 Comparison of ProtoNet, PointNet and PointProtoNet 36
5.2 Comparison of Different Prediction Methods 38
5.3 Effectiveness of Increasing the Number of Shot 41
6 Conclusions and Future Work 42
References 44
References
[1] Wei-Yu Chen et al. “A Closer Look at Few-shot Classification”. In: International Conference on Learning Representations. 2019.
[2] Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. “Siamese neural networks for one-shot image recognition”. In: ICML Deep Learning Workshop. Vol. 2. 2015.
[3] Sumit Chopra, Raia Hadsell, and Yann LeCun. “Learning a similarity metric discriminatively, with application to face verification”. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. 2005, pp. 539–546.
[4] Elad Hoffer and Nir Ailon. “Deep metric learning using triplet network”. In: International Workshop on Similarity-Based Pattern Recognition. Springer. 2015, pp. 84–92.
[5] Oriol Vinyals et al. “Matching networks for one shot learning”. In: Advances in Neural Information Processing Systems. 2016, pp. 3630–3638.
[6] Jake Snell, Kevin Swersky, and Richard Zemel. “Prototypical networks for few-shot learning”. In: Advances in Neural Information Processing Systems. 2017, pp. 4077–4087.
[7] Flood Sung et al. “Learning to compare: Relation network for few-shot learning”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, pp. 1199–1208.
[8] Adam Santoro et al. “Meta-learning with memory-augmented neural networks”. In: International Conference on Machine Learning. 2016, pp. 1842–1850.
[9] Tsendsuren Munkhdalai and Hong Yu. “Meta networks”. In: Proceedings of the 34th International Conference on Machine Learning - Volume 70. JMLR.org. 2017, pp. 2554–2563.
[10] Sachin Ravi and Hugo Larochelle. “Optimization as a model for few-shot learning”. In: International Conference on Learning Representations. 2017.
[11] Chelsea Finn, Pieter Abbeel, and Sergey Levine. “Model-agnostic meta-learning for fast adaptation of deep networks”. In: Proceedings of the 34th International Conference on Machine Learning - Volume 70. JMLR.org. 2017, pp. 1126–1135.
[12] Lilian Weng. “Meta-Learning: Learning to Learn Fast”. 2018. URL: https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. “Imagenet classification with deep convolutional neural networks”. In: Advances in Neural Information Processing Systems. 2012, pp. 1097–1105.
[14] Sergey Ioffe and Christian Szegedy. “Batch normalization: Accelerating deep network training by reducing internal covariate shift”. In: arXiv preprint arXiv:1502.03167 (2015).
[15] Ronald A. Fisher. “The use of multiple measurements in taxonomic problems”. In: Annals of Eugenics 7.2 (1936), pp. 179–188.
[16] Olga Russakovsky et al. “Imagenet large scale visual recognition challenge”. In: International Journal of Computer Vision 115.3 (2015), pp. 211–252.
Print copy: held by the National Central Library.
Full text: a link to the thesis page at the graduating school is provided; an electronic full text is not necessarily available for download there.
Related theses
1. A Dropout-like Model Augmentation Method for Cross-domain Few-shot Learning Given Ivory-tower Experience
2. Input-adaptive Metric Learning and Discriminative Feature Selection for Few-shot Image Recognition
3. Efficient Relational Few-shot Learning Based on Active Learning
Related journals: none
Most viewed theses
1. Semi-supervised Regression Based on Metric Learning
2. Solving Job-shop Scheduling with Generative Adversarial Imitation Learning
3. A Semi-supervised Learning Model Based on Triplet Networks
4. Multi-objective Classification Based on Multi-task Learning: A Case Study on Natural Language Processing
5. Automated and Accurate Discrimination of Atrial Flutter and Atrial Fibrillation from ECG Signals
6. Efficient Relational Few-shot Learning Based on Active Learning
7. Joint Compensation of Pulse Shaping, I/Q Imbalance, and DC Offset for a 10 Gbps Single-carrier Baseband in the 60 GHz Band
8. Indoor Drone Positioning Using WiGig Beam Combinations as Fingerprints
9. Evaluating the Groin Effect on the Beach North of Wushi Harbor Using Image-derived Shorelines and Shoreline Theory
10. A Low-latency Edge Computing Platform in Cellular Networks Based on Software-defined Networking and Priority Scheduling
11. Throughput Verification and Reputation Maintenance in a Bandwidth Trading System
12. Palladium-catalyzed α-Arylation of Indoxyls
13. Fault-tolerant Control of Hall Position Sensors in Sinusoidal-current Drive Systems for Brushless DC Motors
14. Classification and Estimation of Network Traffic Distributions and Their Applications
15. High-efficiency Saturated-red Solid-state Light-emitting Electrochemical Cells Based on Ionic Iridium Complexes