1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019;25(1):44-56.
2. Benjamens S, Dhunnoo P, Mesko B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med 2020;3:118.
3. Calvert J, Mao Q, Hoffman JL, et al. Using electronic health record collected clinical variables to predict medical intensive care unit mortality. Ann Med Surg 2016;11:52-57.
4. Nanayakkara S, Fogarty S, Tremeer M, et al. Characterising risk of in-hospital mortality following cardiac arrest using machine learning: a retrospective international registry study. PLoS Med 2018;15(11):e1002709.
5. Pirracchio R, Petersen ML, Carone M, et al. Mortality prediction in intensive care units with the Super ICU Learner Algorithm (SICULA): a population-based study. Lancet Respir Med 2015;3(1):42-52.
6. He H, Garcia EA. Learning from imbalanced data. IEEE Trans Knowl Data Eng 2009;21(9):1263-1284.
7. Roumani YF, May JH, Strum DP, et al. Classifying highly imbalanced ICU data. Health Care Manag Sci 2013;16(2):119-128.
8. Sun Y, Wong AKC, Kamel MS. Classification of imbalanced data: a review. Int J Pattern Recognit Artif Intell 2009;23(4):687-719.
9. Kim NJ, Bang JH, Choi JY, et al. The 2018 clinical guidelines for the diagnosis and treatment of HIV/AIDS in HIV-infected Koreans. Infect Chemother 2019;51(1):77-88.
10. Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: machine learning in Python. J Mach Learn Res 2011;12:2825-2830.
11. Brown LD, Cai TT, DasGupta A. Interval estimation for a binomial proportion. Stat Sci 2001;16(2):101-133.
12. Kohn MA, Senyak J. Sample size calculators: confidence interval for a proportion. UCSF CTSI. Available at: https://www.sample-size.net. Accessed December 5, 2020.
13. Stow PJ, Hart GK, Higlett T, et al. Development and implementation of a high-quality clinical database: the Australian and New Zealand Intensive Care Society adult patient database. J Crit Care 2006;21(2):133-141.
14. Knaus WA, Draper EA, Wagner DP, et al. APACHE II: a severity of disease classification system. Crit Care Med 1985;13(10):818-829.
15. Zimmerman JE, Kramer AA, McNair DS, et al. Acute Physiology and Chronic Health Evaluation (APACHE) IV: hospital mortality assessment for today's critically ill patients. Crit Care Med 2006;34(5):1297-1310.
16. Gunn PP, Fremont AM, Bottrell M, et al. The Health Insurance Portability and Accountability Act Privacy Rule: a practical guide for researchers. Med Care 2004;42(4):321-327.
17. Collins GS, Reitsma JB, Altman DG, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): the TRIPOD statement. Ann Intern Med 2015;162(1):55-63.
18. Leisman DE, Harhay MO, Lederer DJ, et al. Development and reporting of prediction models: guidance for authors from editors of respiratory, sleep, and critical care journals. Crit Care Med 2020;48(5):623-633.
19. Chawla NV, Bowyer KW, Hall LO, et al. SMOTE: synthetic minority over-sampling technique. J Artif Intell Res 2002;16:321-357.
20. He H, Bai Y, Garcia EA, et al. ADASYN: adaptive synthetic sampling approach for imbalanced learning. In: 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). IEEE; 2008:1322-1328.
21. Troyanskaya O, Cantor M, Sherlock G, et al. Missing value estimation methods for DNA microarrays. Bioinformatics 2001;17(6):520-525.
22. Altman NS. An introduction to kernel and nearest-neighbor nonparametric regression. Am Stat 1992;46(3):175-185.
23. Sharma H, Kumar S. A survey on decision tree algorithms of classification in data mining. Int J Sci Res 2016;5(4):2094-2097.
24. Breiman L. Random forests. Mach Learn 2001;45(1):5-32.
25. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. J Comput Syst Sci 1997;55(1):119-139.
26. Chen T, Guestrin C. XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM; 2016:785-794.