National Digital Library of Theses and Dissertations in Taiwan

Detailed Record

Author: 黃嘉慧
Author (English): Chia-Hui Huang
Title: 透過生成式AI提供個人化干預以提高學生學習成效
Title (English): Improving Student Learning Effectiveness Through Personalized Interventions Using Generative AI
Advisor: 楊鎮華
Advisor (English): Stephen J.H. Yang
Degree: Master's
Institution: National Central University
Department: Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2024
Academic year of graduation: 112
Language: Chinese
Number of pages: 46
Keywords (Chinese): 生成式AI; 程式練習題; 干預輔導活動; Fleiss Kappa; 圖靈測試
Keywords (English): Generative AI; Programming Exercises; Intervention; Fleiss Kappa; Turing Test
Metrics:
  • Cited: 0
  • Views: 34
  • Downloads: 0
  • Bookmarked: 0
This study explores how generative AI can produce personalized feedback on programming behavior in programming education, so that students in class improve their learning effectiveness more than they would through ordinary review activities; it also examines whether programming exercises generated by generative AI can reach the quality of human-authored questions.
To provide feedback tailored to each student, the system identifies coding patterns from students' coding habits and generates corresponding suggestions; students receive this personalized feedback after every class, along with a tutoring intervention before exams. The results show that, with the help of the personalized intervention feedback, students' learning effectiveness did improve.
To assess the credibility of the questions, programming exercises generated by generative AI were mixed with human-authored ones and evaluated by professionals; Fleiss' Kappa was applied to the evaluation results, and the Kappa coefficient was used to verify the reliability and consistency of the questions. In class, AI-generated and human-authored questions were likewise mixed into the students' practice exercises, and after each weekly class students were asked to guess, from how the questions felt while solving them, which were AI-generated and which were human-authored. The Turing-test results show that, as perceived by students, AI-generated questions are indistinguishable from human-authored ones.
All of the above indicates that the emergence of generative AI brings great convenience and many possibilities: besides easing teachers' workload, it can genuinely help students improve their learning effectiveness, and many future applications are bound to make heavy use of generative AI.
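The expert evaluation described above uses Fleiss' Kappa to measure agreement among multiple raters. As a minimal illustration of the statistic (a generic sketch, not code from the thesis), kappa can be computed from an N × k matrix of rating counts:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for an N x k matrix, where counts[i][j] is the
    number of raters who assigned subject i to category j and every
    subject is rated by the same number of raters n."""
    N = len(counts)              # subjects (e.g. exercises being rated)
    n = sum(counts[0])           # raters per subject
    k = len(counts[0])           # rating categories
    # overall proportion of all assignments falling in each category
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # observed agreement per subject, averaged over subjects
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in counts) / N
    # agreement expected by chance alone
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement gives kappa = 1 (e.g. `fleiss_kappa([[3, 0], [0, 3], [3, 0]])` returns `1.0`), while chance-level agreement gives values near 0.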
This study aims to explore how Generative AI can generate personalized feedback on programming behavior in programming education, enabling students to achieve better learning outcomes in class than with traditional review activities. Additionally, the study investigates whether programming exercises generated by Generative AI can match the quality of those created by humans.
To provide personalized programming feedback, the system identifies coding patterns in each student's coding habits and generates corresponding suggestions, which are delivered to students at the end of each class, together with a tutoring intervention before exams. The results show that this feedback improves students' learning effectiveness.
To evaluate the reliability of the generated exercises, programming exercises produced by Generative AI were mixed with manually created questions and assessed by professionals; the reliability and consistency of the questions were validated with Fleiss' Kappa. In class, AI-generated and human-created questions were likewise mixed for students to solve, and at the end of each weekly class students were asked to guess which questions were generated by AI and which were created by humans. The Turing-test results indicate that students could not reliably distinguish AI-generated from human-created questions.
These findings demonstrate that the advent of Generative AI brings many conveniences and possibilities. In addition to reducing teachers' workload, it effectively helps students improve their learning outcomes, and future applications will undoubtedly make extensive use of Generative AI.
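The student Turing test reduces to asking whether guess accuracy differs from chance. One standard way to check this (an assumed sketch, not the thesis's stated analysis) is an exact two-sided binomial test against a 50% chance rate:

```python
from math import comb

def exact_binomial_p(successes, trials, chance=0.5):
    """Exact two-sided binomial test: total probability, under the
    chance rate, of outcomes at most as likely as the one observed."""
    probs = [comb(trials, k) * chance**k * (1 - chance)**(trials - k)
             for k in range(trials + 1)]
    observed = probs[successes]
    return sum(p for p in probs if p <= observed + 1e-12)
```

A p-value near 1, corresponding to guess accuracy near 50%, is consistent with students being unable to tell AI-generated questions from human-authored ones.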
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
1. Introduction
2. Literature Review
2.1. Generative AI
2.2. Prompt Engineering
2.3. Fleiss Kappa
2.4. Turing Test
3. System Development
3.1. Generating Personalized Programming-Behavior Feedback Suggestions
3.2. Generating Programming Exercises
4. Research Methods
4.1. Course Design
4.2. Learning System
4.3. Personalized Intervention Based on Coding Patterns
4.3.1. Feedback Suggestions
4.3.2. Programming-Behavior Trend Analysis
4.3.3. Answer-Time Analysis
4.3.4. Error-Type Analysis
4.4. Evaluation Criteria & Question-Evaluation Procedure
4.5. Students' Post-Class Question Evaluation (Turing Test)
5. Results
5.1. Whether Students Receiving the Generative-AI Personalized Programming-Behavior Feedback Intervention Achieve Higher Learning Performance Than Students Receiving Traditional Review Activities
5.2. Reliability Comparison of Programming Exercises
5.3. Analysis of Differences Between AI-Generated and Human-Authored Questions
5.4. Discussion
6. Conclusion
7. Future Research and Limitations
Appendix
References
Electronic full text (Internet public release date: 2029-08-01)