[1] Zachi I Attia, Suraj Kapa, Francisco Lopez-Jimenez, Paul M McKie, Dorothy J Ladewig, Gaurav Satam, Patricia A Pellikka, Maurice Enriquez-Sarano, Peter A Noseworthy, Thomas M Munger, et al. “Screening for cardiac contractile dysfunction using an artificial intelligence–enabled electrocardiogram”. In: Nature Medicine 25.1 (2019), pp. 70–74.
[2] Zachi I Attia, Peter A Noseworthy, Francisco Lopez-Jimenez, Samuel J Asirvatham, Abhishek J Deshmukh, Bernard J Gersh, Rickey E Carter, Xiaoxi Yao, Alejandro A Rabinstein, Brad J Erickson, et al. “An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction”. In: The Lancet 394.10201 (2019), pp. 861–867.
[3] Ghalib A Bello, Timothy JW Dawes, Jinming Duan, Carlo Biffi, Antonio De Marvao, Luke SGE Howard, J Simon R Gibbs, Martin R Wilkins, Stuart A Cook, Daniel Rueckert, et al. “Deep-learning cardiac motion analysis for human survival prediction”. In: Nature Machine Intelligence 1.2 (2019), pp. 95–104.
[4] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. “Deep clustering for unsupervised learning of visual features”. In: Proceedings of the European Conference on Computer Vision (ECCV). 2018, pp. 132–149.
[5] Mathilde Caron, Piotr Bojanowski, Julien Mairal, and Armand Joulin. “Unsupervised pre-training of image features on non-curated data”. In: Proceedings of the IEEE International Conference on Computer Vision. 2019, pp. 2959–2968.
[6] Carlos Carreiras, Ana Priscila Alves, André Lourenço, Filipe Canento, Hugo Silva, Ana Fred, et al. BioSPPy: Biosignal Processing in Python. 2015–. url: https://github.com/PIA-Group/BioSPPy/.
[7] Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. “GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks”. In: arXiv preprint arXiv:1711.02257 (2017).
[8] Sumanth Chennupati, Ganesh Sistu, Senthil Yogamani, and Samir A Rawashdeh. “MultiNet++: Multi-stream feature aggregation and geometric loss strategy for multi-task learning”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2019.
[9] Razvan-Gabriel Cirstea, Darius-Valer Micu, Gabriel-Marcel Muresan, Chenjuan Guo, and Bin Yang. “Correlated time series forecasting using multi-task deep neural networks”. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management. 2018, pp. 1527–1530.
[10] Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. “Class-balanced loss based on effective number of samples”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, pp. 9268–9277.
[11] Daisy Yi Ding, Chloé Simpson, Stephen Pfohl, Dave C Kale, Kenneth Jung, and Nigam H Shah. “The Effectiveness of Multitask Learning for Phenotyping with Electronic Health Records Data”. In: PSB. World Scientific. 2019, pp. 18–29.
[12] Conner D Galloway, Alexander V Valys, Jacqueline B Shreibati, Daniel L Treiman, Frank L Petterson, Vivek P Gundotra, David E Albert, Zachi I Attia, Rickey E Carter, Samuel J Asirvatham, et al. “Development and validation of a deep-learning model to screen for hyperkalemia from the electrocardiogram”. In: JAMA Cardiology 4.5 (2019), pp. 428–436.
[13] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. “Unsupervised representation learning by predicting image rotations”. In: arXiv preprint arXiv:1803.07728 (2018).
[14] Ting Gong, Tyler Lee, Cory Stephenson, Venkata Renduchintala, Suchismita Padhy, Anthony Ndirango, Gokce Keskin, and Oguz H Elibol. “A Comparison of Loss Weighting Strategies for Multi-task Learning in Deep Neural Networks”. In: IEEE Access 7 (2019), pp. 141627–141632.
[15] Sebastian Guendel, Florin C Ghesu, Sasa Grbic, Eli Gibson, Bogdan Georgescu, Andreas Maier, and Dorin Comaniciu. “Multi-task Learning for Chest X-ray Abnormality Classification on Noisy Labels”. In: arXiv preprint arXiv:1905.06362 (2019).
[16] Michelle Guo, Albert Haque, De-An Huang, Serena Yeung, and Li Fei-Fei. “Dynamic task prioritization for multitask learning”. In: Proceedings of the European Conference on Computer Vision (ECCV). 2018, pp. 270–287.
[17] Awni Y Hannun, Pranav Rajpurkar, Masoumeh Haghpanahi, Geoffrey H Tison, Codie Bourn, Mintu P Turakhia, and Andrew Y Ng. “Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network”. In: Nature Medicine 25.1 (2019), p. 65.
[18] Hrayr Harutyunyan, Hrant Khachatrian, David C Kale, Greg Ver Steeg, and Aram Galstyan. “Multitask learning and benchmarking with clinical time series data”. In: Scientific Data 6.1 (2019), pp. 1–18.
[19] Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. “A joint many-task model: Growing a neural network for multiple NLP tasks”. In: arXiv preprint arXiv:1611.01587 (2016).
[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep residual learning for image recognition”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, pp. 770–778.
[21] Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. “MIMIC-III, a freely accessible critical care database”. In: Scientific Data 3.1 (2016), pp. 1–9.
[22] J Kameenoff. “Signal Processing Techniques for Removing Noise from ECG Signals”. In: Biomedical Engineering and Research 1.1 (2017), p. 1.
[23] Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. “Decoupling representation and classifier for long-tailed recognition”. In: arXiv preprint arXiv:1910.09217 (2019).
[24] Alex Kendall, Yarin Gal, and Roberto Cipolla. “Multi-task learning using uncertainty to weigh losses for scene geometry and semantics”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, pp. 7482–7491.
[25] Diederik P Kingma and Jimmy Ba. “Adam: A method for stochastic optimization”. In: arXiv preprint arXiv:1412.6980 (2014).
[26] Alex Krizhevsky, Geoffrey Hinton, et al. “Learning multiple layers of features from tiny images”. In: (2009). url: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.
[27] Joon-Myoung Kwon, Ki-Hyun Jeon, Hyue Mee Kim, Min Jeong Kim, Sung Min Lim, Kyung-Hee Kim, Pil Sang Song, Jinsik Park, Rak Kyeong Choi, and Byung-Hee Oh. “Comparing the performance of artificial intelligence and conventional diagnosis criteria for detecting left ventricular hypertrophy using electrocardiography”. In: EP Europace 22.3 (2020), pp. 412–419.
[28] Hankook Lee, Sung Ju Hwang, and Jinwoo Shin. “Rethinking Data Augmentation: Self-Supervision and Self-Distillation”. In: arXiv preprint arXiv:1910.05872 (2019).
[29] Ming Liang, Bin Yang, Yun Chen, Rui Hu, and Raquel Urtasun. “Multi-task multi-sensor fusion for 3D object detection”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, pp. 7345–7353.
[30] Lukas Liebel and Marco Körner. “Auxiliary tasks in multi-task learning”. In: arXiv preprint arXiv:1805.06334 (2018).
[31] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. “Focal loss for dense object detection”. In: Proceedings of the IEEE International Conference on Computer Vision. 2017, pp. 2980–2988.
[32] Shikun Liu, Andrew Davison, and Edward Johns. “Self-supervised generalisation with meta auxiliary learning”. In: Advances in Neural Information Processing Systems. 2019, pp. 1677–1687.
[33] Shikun Liu, Edward Johns, and Andrew J Davison. “End-to-end multi-task learning with attention”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, pp. 1871–1880.
[34] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. “Multi-task deep neural networks for natural language understanding”. In: arXiv preprint arXiv:1901.11504 (2019).
[35] Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. “Large-scale long-tailed recognition in an open world”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, pp. 2537–2546.
[36] Jiaqi Ma, Zhe Zhao, Jilin Chen, Ang Li, Lichan Hong, and Ed H Chi. “SNR: Sub-Network Routing for Flexible Parameter Sharing in Multi-task Learning”. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. 2019, pp. 216–223.
[37] Jiaqi Ma, Zhe Zhao, Xinyang Yi, Jilin Chen, Lichan Hong, and Ed H Chi. “Modeling task relationships in multi-task learning with multi-gate mixture-of-experts”. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018, pp. 1930–1939.
[38] Kevis-Kokitsi Maninis, Ilija Radosavovic, and Iasonas Kokkinos. “Attentive single-tasking of multiple tasks”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, pp. 1851–1860.
[39] MartinThoma. Receiver Operating Characteristic (ROC) curve with False Positive Rate and True Positive Rate. 2018. url: https://commons.wikimedia.org/wiki/File:Roc-draft-xkcd-style.svg.
[40] Elliot Meyerson and Risto Miikkulainen. “Beyond shared hierarchies: Deep multitask learning through soft layer ordering”. In: arXiv preprint arXiv:1711.00108 (2017).
[41] Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. “Cross-stitch networks for multi-task learning”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016, pp. 3994–4003.
[42] George B Moody and Roger G Mark. “The impact of the MIT-BIH arrhythmia database”. In: IEEE Engineering in Medicine and Biology Magazine 20.3 (2001), pp. 45–50.
[43] Sajad Mousavi and Fatemeh Afghah. “Inter- and intra-patient ECG heartbeat classification for arrhythmia detection: a sequence to sequence deep learning approach”. In: ICASSP 2019 – 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. 2019, pp. 1308–1312.
[44] Mehdi Noroozi, Ananth Vinjimoor, Paolo Favaro, and Hamed Pirsiavash. “Boosting self-supervised learning via knowledge transfer”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, pp. 9359–9367.
[45] Maxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic. “Learning and transferring mid-level image representations using convolutional neural networks”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014, pp. 1717–1724.
[46] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. “PyTorch: An Imperative Style, High-Performance Deep Learning Library”. In: Advances in Neural Information Processing Systems 32. Ed. by H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett. Curran Associates, Inc., 2019, pp. 8024–8035. url: http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
[47] Tilman Piesk. The 52 partitions of a 5-element set. url: https://en.wikipedia.org/wiki/File:Set_partitions_5;_circles.svg.
[48] Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, and Vijay Pande. “Massively multitask networks for drug discovery”. In: arXiv preprint arXiv:1502.02072 (2015).
[49] Alex Ratner, Braden Hancock, Jared Dunnmon, Roger Goldman, and Christopher Ré. “Snorkel MeTaL: Weak supervision for multi-task learning”. In: Proceedings of the Second Workshop on Data Management for End-To-End Machine Learning. 2018, pp. 1–4.
[50] Alexander Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash Pandey, and Christopher Ré. “Training complex models with multi-task weak supervision”. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. 2019, pp. 4763–4771.
[51] Ralf Raumanns, Elif K Contar, Gerard Schouten, and Veronika Cheplygina. “Multi-task Learning with Crowdsourced Features Improves Skin Lesion Diagnosis”. In: arXiv preprint arXiv:2004.14745 (2020).
[52] Sachin Ravi and Hugo Larochelle. “Optimization as a model for few-shot learning”. In: International Conference on Learning Representations. 2017. url: https://openreview.net/pdf?id=rJY0-Kcll.
[53] Narges Razavian, Jake Marcus, and David Sontag. “Multi-task prediction of disease onsets from longitudinal laboratory tests”. In: Machine Learning for Healthcare Conference. 2016, pp. 73–100.
[54] Clemens Rosenbaum, Tim Klinger, and Matthew Riemer. “Routing Networks: Adaptive Selection of Non-Linear Functions for Multi-Task Learning”. In: International Conference on Learning Representations. 2018. url: https://openreview.net/forum?id=ry8dvM-R-.
[55] Sebastian Ruder. “An overview of multi-task learning in deep neural networks”. In: arXiv preprint arXiv:1706.05098 (2017).
[56] Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. “Latent multi-task architecture learning”. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. 2019, pp. 4822–4829.
[57] Huan Song, Deepta Rajan, Jayaraman J Thiagarajan, and Andreas Spanias. “Attend and diagnose: Clinical time series analysis using attention models”. In: Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
[58] Trevor Standley, Amir R Zamir, Dawn Chen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese. “Which Tasks Should Be Learned Together in Multi-task Learning?” In: arXiv preprint arXiv:1905.07553 (2019).
[59] Gjorgji Strezoski, Nanne van Noord, and Marcel Worring. “Many task learning with task routing”. In: Proceedings of the IEEE International Conference on Computer Vision. 2019, pp. 1375–1384.
[60] Harini Suresh, Jen J Gong, and John V Guttag. “Learning tasks for multitask learning: Heterogenous patient populations in the ICU”. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018, pp. 802–810.
[61] Jingru Tan, Changbao Wang, Buyu Li, Quanquan Li, Wanli Ouyang, Changqing Yin, and Junjie Yan. “Equalization Loss for Long-Tailed Object Recognition”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020, pp. 11662–11671.
[62] David Tellez, Diederik Hoppener, Cornelis Verhoef, Dirk Grunhagen, Pieter Nierop, Michal Drozdzal, Jeroen van der Laak, and Francesco Ciompi. “Extending Unsupervised Neural Image Compression With Supervised Multitask Learning”. In: arXiv preprint arXiv:2004.07041 (2020).
[63] Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, Dengxin Dai, and Luc Van Gool. “Revisiting Multi-Task Learning in the Deep Learning Era”. In: arXiv preprint arXiv:2004.13379 (2020).
[64] Aswathy Velayudhan and Soniya Peter. “Noise Analysis and Different Denoising Techniques of ECG Signal – A Survey”. In: IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) (2016), eISSN–2278.
[65] Zirui Wang, Zihang Dai, Barnabás Póczos, and Jaime Carbonell. “Characterizing and avoiding negative transfer”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, pp. 11293–11302.
[66] Eric W Weisstein. “Bell Number”. In: MathWorld – A Wolfram Web Resource. (2002). url: https://mathworld.wolfram.com/BellNumber.html.
[67] Dan Xu, Wanli Ouyang, Xiaogang Wang, and Nicu Sebe. “PAD-Net: Multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, pp. 675–684.
[68] Xueting Yan, Ishan Misra, Abhinav Gupta, Deepti Ghadiyaram, and Dhruv Mahajan. “ClusterFit: Improving Generalization of Visual Representations”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020, pp. 6509–6518.
[69] Cheng-Han Yeh, Yao-Chung Fan, and Wen-Chih Peng. “Interpretable Multi-task Learning for Product Quality Prediction with Attention Mechanism”. In: 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE. 2019, pp. 1910–1921.
[70] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. “How transferable are features in deep neural networks?” In: Advances in Neural Information Processing Systems. 2014, pp. 3320–3328.
[71] Ruoxi Yu, Yali Zheng, Ruikai Zhang, Yuqi Jiang, and Carmen CY Poon. “Using a multi-task recurrent neural network with attention mechanisms to predict hospital mortality of patients”. In: IEEE Journal of Biomedical and Health Informatics (2019).
[72] Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. “Taskonomy: Disentangling task transfer learning”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, pp. 3712–3722.
[73] Junjie Zhang, Lingqiao Liu, Peng Wang, and Chunhua Shen. “To Balance or Not to Balance: An Embarrassingly Simple Approach for Learning with Long-Tailed Distributions”. In: arXiv preprint arXiv:1912.04486 (2019).
[74] Zhenyu Zhang, Zhen Cui, Chunyan Xu, Yan Yan, Nicu Sebe, and Jian Yang. “Pattern-affinitive propagation across depth, surface normal and semantic segmentation”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, pp. 4106–4115.
[75] Zhe Zhao, Lichan Hong, Li Wei, Jilin Chen, Aniruddh Nath, Shawn Andrews, Aditee Kumthekar, Maheswaran Sathiamoorthy, Xinyang Yi, and Ed Chi. “Recommending what video to watch next: a multitask ranking system”. In: Proceedings of the 13th ACM Conference on Recommender Systems. 2019, pp. 43–51.
[76] Boyan Zhou, Quan Cui, Xiu-Shen Wei, and Zhao-Min Chen. “BBN: Bilateral-Branch Network with Cumulative Learning for Long-Tailed Visual Recognition”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020, pp. 9719–9728.