Bishop, C. M. 2006. Pattern Recognition and Machine Learning. Springer.
bravotty. 2020. Information-entropy-loss-pytorch, March 31 2020 [cited December 15 2020]. Available from https://github.com/bravotty/Information-entropy-loss-pytorch/blob/master/entropy_loss_pytorch.py.
De Fuentes, C., and R. Porcuna. 2019. Predicting audit failure: Evidence from auditing enforcement releases. Spanish Journal of Finance and Accounting/Revista Española de Financiación y Contabilidad 48 (3): 274-305.
DeAngelo, L. E. 1981. Auditor size and audit quality. Journal of Accounting and Economics 3 (3): 183-199.
Dechow, P. M., R. G. Sloan, and A. P. Sweeney. 1995. Detecting earnings management. The Accounting Review: 193-225.
Glorot, X., and Y. Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. Paper read at the Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics.
Hinton, G. E., S. Osindero, and Y.-W. Teh. 2006. A fast learning algorithm for deep belief nets. Neural Computation 18 (7): 1527-1554.
Hoskiss. 2020. [Machine Learning] Backpropagation with Softmax / Cross Entropy, 2019 [cited December 2 2020]. Available from https://medium.com/hoskiss-stand/backpropagation-with-softmax-cross-entropy-d60983b7b245.
Jones, J. J. 1991. Earnings management during import relief investigations. Journal of Accounting Research 29 (2): 193-228.
Kothari, S. P., A. J. Leone, and C. E. Wasley. 2005. Performance matched discretionary accrual measures. Journal of Accounting and Economics 39 (1): 163-197.
Krishnan, G. V. 2003. Audit quality and the pricing of discretionary accruals. Auditing: A Journal of Practice & Theory 22 (1): 109-126.
Li, L., B. Qi, G. Tian, and G. Zhang. 2015. The contagion effect of low-quality audits along individual auditors. Available at SSRN 2478348.
ML-Glossary. 2017. Loss Functions. Available from https://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html.
Nielsen, M. 2019. A visual proof that neural nets can compute any function. Chapter 4 of Neural Networks and Deep Learning. Available from http://neuralnetworksanddeeplearning.com/chap4.
Rumelhart, D. E., G. E. Hinton, and R. J. Williams. 1986. Learning representations by back-propagating errors. Nature 323 (6088): 533-536.
Saito, T., and M. Rehmsmeier. 2015. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS ONE 10 (3): e0118432.
ufoym. 2020. Imbalanced Dataset Sampler, October 9 2020 [cited December 15 2020]. Available from https://github.com/ufoym/imbalanced-dataset-sampler.
Zmijewski, M. E. 1984. Methodological issues related to the estimation of financial distress prediction models. Journal of Accounting Research: 59-82.
吳琮璠. 2001. Auditing: New Concepts and Localization (審計學--新觀念與本土化). Taipei: 吳琮璠.
李宏毅. 2016a. ML Lecture 1: Regression - Case Study. YouTube. https://www.youtube.com/watch?v=fegAeph9UaA.
———. 2016b. ML Lecture 2: Where does the error come from? YouTube. https://www.youtube.com/watch?v=D_S6y0Jm6dQ.
———. 2016c. ML Lecture 3-1: Gradient Descent. YouTube. https://www.youtube.com/watch?v=yKKNr-QKz2Q.
———. 2016d. ML Lecture 6: Brief Introduction of Deep Learning. YouTube. https://www.youtube.com/watch?v=Dr-WRlEFefw.
———. 2016e. ML Lecture 7: Backpropagation. YouTube. https://www.youtube.com/watch?v=ibJpTrp5mcE.
———. 2016f. ML Lecture 9-1: Tips for Training DNN. YouTube.
———. 2016g. ML Lecture 11: Why Deep? YouTube. https://www.youtube.com/watch?v=XsC9byQkUH8.
———. 2016h. ML Lecture 12: Semi-supervised. YouTube. https://www.youtube.com/watch?v=fX_guE7JNnY.
———. 2017a. ML Lecture 5: Logistic Regression. YouTube. https://www.youtube.com/watch?v=hSXFuypLukA.
———. 2017b. ML Lecture 22: Ensemble. YouTube. https://www.youtube.com/watch?v=tH9FH1DH5n0.
陳怡均. 2008. An introduction to the U.S. PCAOB's oversight of public company auditors (簡介美國PCAOB對於公開公司會計師之監理). Financial Supervisory Commission [cited October 16 2008]. Available from https://www.fsc.gov.tw/fckdowndoc?file=/%E5%AF%A6%E5%8B%99%E6%96%B0%E7%9F%A5%20(1).pdf&flag=doc.
蔡孟瑾. 2015. The contagion effect of audit failure: Evidence from Taiwan (審計失敗之傳染效果-以台灣為例). Thesis, Graduate Institute of Accounting, National Taiwan University: 1-48.