[1] A. H. Ashouri, W. Killian, J. Cavazos, G. Palermo, and C. Silvano, “A survey on compiler autotuning using machine learning,” ACM Comput. Surv., vol. 51, no. 5, Sep. 2018. [Online]. Available: https://doi.org/10.1145/3197978

[2] G. Fursin, Y. Kashnikov, A. W. Memon, Z. Chamski, O. Temam, M. Namolaru, E. Yom-Tov, B. Mendelson, A. Zaks, E. Courtois, F. Bodin, P. Barnard, E. Ashton, E. Bonilla, J. Thomson, C. K. I. Williams, and M. O’Boyle, “Milepost GCC: Machine learning enabled self-tuning compiler,” International Journal of Parallel Programming, vol. 39, no. 3, pp. 296–327, Jun. 2011. [Online]. Available: https://doi.org/10.1007/s10766-010-0161-2

[3] T. Ben-Nun, A. S. Jakobovits, and T. Hoefler, “Neural code comprehension: A learnable representation of code semantics,” in Proceedings of the 32nd International Conference on Neural Information Processing Systems, ser. NIPS’18. Red Hook, NY, USA: Curran Associates Inc., 2018, pp. 3589–3601.

[4] Y.-P. You and Y.-C. Su, “Reduced O3 subsequence labelling: a stepping stone towards optimisation sequence prediction,” Connection Science, vol. 0, no. 0, pp. 1–18, 2022. [Online]. Available: https://doi.org/10.1080/09540091.2022.2044761

[5] D. DeFreez, A. V. Thakur, and C. Rubio-González, “Path-based function embedding and its application to error-handling specification mining,” in Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ser. ESEC/FSE 2018. New York, NY, USA: Association for Computing Machinery, 2018, pp. 423–433. [Online]. Available: https://doi.org/10.1145/3236024.3236059

[6] U. Alon, M. Zilberstein, O. Levy, and E. Yahav, “A general path-based representation for predicting program properties,” in Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation, ser. PLDI 2018. New York, NY, USA: Association for Computing Machinery, 2018, pp. 404–419. [Online]. Available: https://doi.org/10.1145/3192366.3192412

[7] U. Alon, M. Zilberstein, O. Levy, and E. Yahav, “code2vec: Learning distributed representations of code,” Proc. ACM Program. Lang., vol. 3, no. POPL, Jan. 2019. [Online]. Available: https://doi.org/10.1145/3290353

[8] A. Brauckmann, A. Goens, S. Ertel, and J. Castrillon, “Compiler-based graph representations for deep learning models of code,” in Proceedings of the 29th International Conference on Compiler Construction, ser. CC 2020. New York, NY, USA: Association for Computing Machinery, 2020, pp. 201–211. [Online]. Available: https://doi.org/10.1145/3377555.3377894

[9] S. Dey, A. K. Singh, D. K. Prasad, and K. D. McDonald-Maier, “SoCodeCNN: Program source code for visual CNN classification using computer vision methodology,” IEEE Access, vol. 7, pp. 157158–157172, 2019.

[10] C. Cummins, P. Petoumenos, Z. Wang, and H. Leather, “End-to-end deep learning of optimization heuristics,” in 2017 26th International Conference on Parallel Architectures and Compilation Techniques (PACT), 2017, pp. 219–232.

[11] A. Haj-Ali, N. K. Ahmed, T. Willke, Y. S. Shao, K. Asanovic, and I. Stoica, “NeuroVectorizer: End-to-end vectorization with deep reinforcement learning,” in Proceedings of the 18th ACM/IEEE International Symposium on Code Generation and Optimization, ser. CGO 2020. New York, NY, USA: Association for Computing Machinery, 2020, pp. 242–255. [Online]. Available: https://doi.org/10.1145/3368826.3377928

[12] M. Kim, T. Hiroyasu, M. Miki, and S. Watanabe, “SPEA2+: Improving the performance of the strength Pareto evolutionary algorithm 2,” in PPSN, 2004.

[13] S. VenkataKeerthy, R. Aggarwal, S. Jain, M. S. Desarkar, R. Upadrasta, and Y. N. Srikant, “IR2Vec: LLVM IR based scalable program embeddings,” ACM Trans. Archit. Code Optim., vol. 17, no. 4, Dec. 2020. [Online]. Available: https://doi.org/10.1145/3418463

[14] R. Mammadli, A. Jannesari, and F. A. Wolf, “Static neural compiler optimization via deep reinforcement learning,” in 2020 IEEE/ACM 6th Workshop on the LLVM Compiler Infrastructure in HPC (LLVM-HPC) and Workshop on Hierarchical Parallelism for Exascale Computing (HiPar), 2020, pp. 1–11.

[15] E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the strength Pareto evolutionary algorithm,” TIK-Report, vol. 103, 2001.

[16] Y. Liu, H. Ishibuchi, G. G. Yen, Y. Nojima, and N. Masuyama, “Handling imbalance between convergence and diversity in the decision space in evolutionary multimodal multiobjective optimization,” IEEE Transactions on Evolutionary Computation, vol. 24, no. 3, pp. 551–565, 2020.

[17] A. Hussain, Y. Muhammad, and M. Sajid, “An efficient genetic algorithm for numerical function optimization with two new crossover operators,” International Journal of Mathematical Sciences and Computing, vol. 4, pp. 41–55, 2018.