臺灣博碩士論文加值系統 (National Digital Library of Theses and Dissertations in Taiwan)

詳目顯示 (Detailed Record)

我願授權國圖
: 
twitterline
Researcher: 謝明恩
Researcher (English): Hsieh, Ming-En
Title: 運用任務目標重組強化多目標深度學習及其於心電圖疾病辨識之應用
Title (English): Boosting Multi-task Learning Through Combination of Task Labels - with Applications in ECG Phenotyping
Advisor: 曾新穆
Advisor (English): Tseng, Shin-Mu
Committee Members: 楊智傑、吳毅成、洪瑞鴻、曾新穆
Committee Members (English): Yang, Chih-Chieh; Wu, I-Chen; Hung, Jui-Hung; Tseng, Shin-Mu
Oral Defense Date: 2020-07-28
Degree: Master's
Institution: 國立交通大學 (National Chiao Tung University)
Department: 數據科學與工程研究所
Discipline: 電算機學門 (Computing)
Field: 軟體發展學類 (Software Development)
Thesis Type: Academic thesis
Publication Year: 2020
Graduation Academic Year: 108
Language: English
Number of Pages: 62
Keywords (Chinese): 多目標學習、輔助任務、不平衡資料、深度學習、心電圖疾病預測
Keywords (English): Multi-task Learning; Auxiliary Task; Imbalanced Data; Deep Learning; Electrocardiogram Phenotyping
Usage statistics:
  • Cited by: 0
  • Views: 309
  • Downloads: 0
  • Bookmarked: 0

Abstract (translated from the Chinese):

With the widespread adoption of deep learning, multi-task learning has attracted considerable attention in recent years. By training multiple objectives simultaneously, a multi-task model can improve accuracy on its target tasks through transfer between the training objectives. Multi-task learning also lets a single model perform multiple tasks, which is advantageous in embedded systems and other environments with constraints on execution speed or energy consumption. In the biomedical domain, however, acquiring large amounts of clean annotated data is very expensive, and the annotations often contain some label noise, which makes it difficult to apply multi-task learning by constructing multiple distinct training objectives. In this work, we propose CO-TASK (COmbination of TASK labels), a multi-task deep learning method that generates auxiliary tasks by recombining the original task labels. The method requires no additional annotated data, tolerates a certain degree of label noise, and can be combined with many other multi-task learning methods and models.

With this method, the multi-task learning model's average per-task relative accuracy gain on the CIFAR-MTL dataset doubles, from 4.38% to 9.78%. Combined with the proposed task-aware imbalanced-data sampler, the method addresses a problem common in disease-phenotyping datasets: different tasks have different imbalance ratios. On ECG-P18, a multi-label electrocardiogram phenotyping dataset covering 18 diseases, the proposed method improves the patient-averaged Jaccard index by 2.25%. On ECG-EchoLVH, a dataset for inferring echocardiographic diagnoses from electrocardiograms, pairing CO-TASK with secondary tasks whose labels contain some noise raises sensitivity by 7.1% compared with the original prediction model, while keeping specificity equal to that of the original physicians.

Abstract (English):

Multi-task learning has grown in importance because training multiple tasks simultaneously can yield superior performance and allows a single model to perform multiple tasks. In medical phenotyping, task labels are costly to acquire and may contain a certain degree of label noise, which makes it harder to construct auxiliary tasks for applying multi-task learning in this domain. In this research, we present CO-TASK, an effective multi-task learning framework that boosts performance through the COmbination of TASK labels and can be applied alongside a variety of multi-task learning techniques.

The proposed CO-TASK framework generates combinations of task labels to improve performance on the targeted tasks without additional labeling effort and is robust to a certain degree of label noise. On the CIFAR-MTL dataset, CO-TASK doubled the average per-task performance gain of the multi-task learning model, from 4.38% to 9.78%. When combined with the proposed task-aware imbalanced-data sampler, the CO-TASK framework effectively handles the differing imbalance ratios across tasks that are common in electrocardiogram phenotyping datasets. On the 18-disease multi-label ECG-P18 dataset, it increased the average Jaccard metric by 2.25%. On ECG-EchoLVH, a dataset for predicting echocardiographic diagnoses from electrocardiograms, the proposed framework combined with noisy annotations as minor tasks increased sensitivity by 7.1% compared to the single-task model while maintaining the same specificity as the physicians' annotations.
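
The framework's central idea, deriving auxiliary tasks from combinations of labels the original tasks already have, can be pictured with a minimal sketch. The two-task setting, the function name, and the rule of mapping each label pair to one joint class are illustrative assumptions, not the exact construction defined in the thesis (Section 3.2.2).

```python
# Hypothetical sketch of the label-combination idea: derive an auxiliary task
# whose label is the joint combination of two existing task labels, so no extra
# annotation is required. Names and the pairwise combination rule are
# illustrative assumptions, not the exact construction used in the thesis.
from itertools import product
from typing import List, Sequence


def combine_task_labels(labels_a: Sequence[int], labels_b: Sequence[int],
                        num_classes_a: int, num_classes_b: int) -> List[int]:
    """Map each (label_a, label_b) pair to a single auxiliary class index."""
    combo_to_index = {pair: idx for idx, pair in enumerate(
        product(range(num_classes_a), range(num_classes_b)))}
    return [combo_to_index[(a, b)] for a, b in zip(labels_a, labels_b)]


# Two binary tasks yield a 4-class auxiliary task trained alongside the originals.
aux_labels = combine_task_labels([0, 1, 1, 0], [1, 1, 0, 0], 2, 2)
print(aux_labels)  # [1, 3, 2, 0]
```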
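The abstract also names a task-aware imbalanced-data sampler for datasets whose tasks have different imbalance ratios. Below is one hypothetical way such a sampler could be realized with PyTorch's WeightedRandomSampler; the weighting rule shown here is an assumption, not the design given in the thesis (Section 3.2.4).

```python
# Hypothetical sketch of a task-aware sampler for multi-label data in which each
# task has a different imbalance ratio: weight every sample by the inverse
# positive-label frequency of its rarest positive task, then feed the weights to
# PyTorch's WeightedRandomSampler. Only one plausible realization, not the
# thesis's actual sampler design.
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler


def task_aware_weights(labels: np.ndarray) -> torch.Tensor:
    """labels: binary array of shape (num_samples, num_tasks)."""
    pos_freq = labels.mean(axis=0).clip(min=1e-6)   # per-task positive rate
    inv_freq = 1.0 / pos_freq                       # rarer tasks -> larger weight
    per_task = labels * inv_freq                    # broadcast weights over tasks
    # Samples with no positive label keep a neutral weight of 1.0.
    weights = np.where(labels.any(axis=1), per_task.max(axis=1), 1.0)
    return torch.as_tensor(weights, dtype=torch.double)


labels = np.array([[1, 0], [0, 0], [0, 1], [0, 1], [0, 1]])  # task 0 is rarer
sampler = WeightedRandomSampler(task_aware_weights(labels),
                                num_samples=len(labels), replacement=True)
# Pass `sampler=sampler` to a torch.utils.data.DataLoader to rebalance batches.
```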
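The ECG-P18 result is reported as an average Jaccard metric. The sketch below computes the patient-averaged Jaccard index in its common multi-label form; the thesis's precise evaluation protocol (Section 4.1.4) is not reproduced here, and the toy arrays are purely illustrative.

```python
# Sketch of a patient-averaged Jaccard index for multi-label predictions:
# |pred AND true| / |pred OR true| per patient, then averaged over patients.
import numpy as np


def mean_jaccard(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """y_true, y_pred: binary arrays of shape (num_patients, num_diseases)."""
    intersection = np.logical_and(y_true, y_pred).sum(axis=1)
    union = np.logical_or(y_true, y_pred).sum(axis=1)
    # Convention: a patient with no true and no predicted labels scores 1.0.
    per_patient = np.where(union == 0, 1.0, intersection / np.maximum(union, 1))
    return float(per_patient.mean())


y_true = np.array([[1, 0, 1, 0], [0, 0, 0, 0], [1, 1, 0, 0]])
y_pred = np.array([[1, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 0]])
print(round(mean_jaccard(y_true, y_pred), 3))  # 0.722
```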

目次 (Table of Contents):

1 Introduction....................................................................1
1.1 Background and Motivation.......................................................1
1.2 Research Aims and Challenges....................................................4
1.3 Contribution....................................................................5
1.4 Thesis Organization.............................................................6
2 Related Work....................................................................7
2.1 Multitask Learning..............................................................7
2.1.1 Task Weight Balancing...........................................................9
2.1.2 Model Architecture Innovations.................................................11
2.1.3 Auxiliary Tasks in Multi-task Learning.........................................12
2.2 Multi-task Learning in Medical Domain..........................................13
2.3 Electrocardiogram Phenotyping..................................................15
2.3.1 Deep Learning in Electrocardiogram Phenotyping.................................15
2.3.2 Echocardiogram Diagnostic from Electrocardiogram...............................15
2.4 Imbalanced data................................................................16
3 Proposed Method................................................................19
3.1 Problem Definition.............................................................19
3.1.1 Multitask Learning.............................................................19
3.1.2 Electrocardiogram Phenotyping..................................................21
3.2 Proposed Framework.............................................................22
3.2.1 Intuition for CO-TASK Framework................................................22
3.2.2 Auxiliary Task Generation......................................................23
3.2.3 Multi-task Learning Model Training.............................................27
3.2.4 Task-Aware Imbalance Data Sampler..............................................29
4 Experiments and Evaluations....................................................31
4.1 Experiments Settings...........................................................31
4.1.1 Environment....................................................................31
4.1.2 Dataset Description............................................................32
4.1.3 Data Pre-processing............................................................36
4.1.4 Evaluation Metrics.............................................................36
4.1.5 Baseline Models and Details of Training Procedure..............................39
4.2 Experiments Results............................................................41
4.2.1 Performance on CIFAR-MTL.......................................................42
4.2.2 CIFAR-MTL with Label Noise.....................................................44
4.2.3 CIFAR-MTL in Lower Data Scenarios..............................................45
4.2.4 Performance of Electrocardiogram Phenotyping...................................46
4.2.5 Performance of Predicting Echocardiogram Diagnostic from Electrocardiogram.....47
4.2.6 Discussions....................................................................49
5 Conclusion and Future Works....................................................51
5.1 Conclusion.....................................................................51
5.2 Future Works...................................................................52
References...........................................................................54
電子全文 (Electronic full text; internet availability date: 2025-08-24)