National Digital Library of Theses and Dissertations in Taiwan (臺灣博碩士論文加值系統)

Author: 簡立仁
Author (English): Li-Ren Chien
Title: 一個植基於解析樹具測試驅動的自動程式評分系統
Title (English): DICE, a Parse-Tree Based Automatic Programming Assessment System with Test Driven Development
Advisor: 貝若爾
Advisor (English): Daniel J. Buehrer
Degree: Doctoral
University: National Chung Cheng University
Department: Computer Science and Information Engineering
Discipline: Engineering
Field: Electrical and Computer Engineering
Thesis type: Academic thesis
Year of publication: 2009
Graduation academic year: 97 (ROC calendar)
Language: English
Number of pages: 166
Keywords (Chinese): 解析樹、學習型態、型態化心智圖、測試驅動、電子學習、自動化程式評分系統
Keywords (English): Parse Tree; Learning Style; Typed Mind Maps; TDD; E-Learning; APAS
JUSTICE
In 2006 we formally named our Automatic Programming Assessment System (APAS) DICE, after the goddess of Roman mythology who personifies justice. Our goal is to build an online automatic grading system for judging programming-language assignments. Justice is one of DICE's most important characteristics: using DICE under an instructor's supervision in a blended learning environment, we obtained a 56% improvement in learning performance. We also emphasize that DICE's anti-plagiarism mechanism realizes the true meaning of justice by rewarding virtue and punishing wrongdoing.

BOUNDARY
Confucius advocated that "education knows no boundaries." As a specialized learning management mechanism, DICE plays the role of centrally managing and delivering the material used to train students in programming languages. DICE can check the conceptual knowledge of specific topics in programming assignments. DICE is designed as a stand-alone service on the web, offering education across space and time to anyone.

PRACTICE
Confucius advocated "learning and timely practice of what is learned." DICE is treated as a training method that extends classical lecture-based teaching; as a result, students put more effort into class and improve their programming skills through hands-on practice.

SEQUENCE
Confucius advocated "advancing step by step in proper sequence." We apply the software-engineering concept of Test-Driven Development (TDD) in DICE. Borrowing formal mathematical definitions, the DICE TDD module divides the training material, by skill and concept, into 16 modes ranging from an exploratory learning mode to a guided learning mode. DICE TDD helps the system examine code in terms of both skill and concept. We show that this module is of great help to students with low achievement in programming.

ADAPTATION
Confucius advocated "teaching students according to their aptitude." Our results show that students' Kolb learning styles lead to significantly different learning outcomes with TDD than without it. We accordingly designed an adaptive learning module in DICE (DALM), which assigns students to different TDD modes according to their individual differences.

We also developed a Typed Mind Map system as a common data model and knowledge-representation method.
In our evaluation, DICE improves learning performance by 11% over traditional training without DICE, and DICE with the TDD module improves learning performance by a further 51% over DICE alone.
JUSTICE
We have used DICE, the personification of justice, a goddess of Roman mythology, as the name of our Automatic Programming Assessment System (APAS) since 2006. Our aim is to build an online auto-grading system for judging programming assignments. Justice is one of the most important characteristics of DICE, which has been used for blended learning in the classroom under an instructor's supervision. We emphasize that the automatic assessment and plagiarism-detection mechanisms guard against injustice by punishing wrongdoing and rewarding virtue.
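The abstract does not describe the grading or plagiarism-detection algorithm itself, but since DICE is parse-tree based, one common way to flag copied code is to compare the structure of two submissions' parse trees while ignoring identifier names. The following Java sketch is only a hypothetical illustration of that idea; the Node class, the fingerprint method, and the similarity measure are assumptions, not DICE's actual implementation.

import java.util.*;

/** A minimal, hypothetical parse-tree node used to illustrate structural comparison. */
class Node {
    final String kind;                       // grammar production, e.g. "if_stmt", "call"
    final List<Node> children = new ArrayList<>();
    Node(String kind, Node... kids) { this.kind = kind; children.addAll(Arrays.asList(kids)); }

    /** Collect one structural fingerprint string per node (node kind plus child kinds, identifiers ignored). */
    void fingerprint(Map<String, Integer> bag) {
        StringBuilder sb = new StringBuilder(kind).append('(');
        for (Node c : children) { c.fingerprint(bag); sb.append(c.kind).append(','); }
        bag.merge(sb.append(')').toString(), 1, Integer::sum);
    }
}

public class PlagiarismSketch {
    /** Jaccard-style similarity between the structural fingerprints of two programs. */
    static double similarity(Node a, Node b) {
        Map<String, Integer> fa = new HashMap<>(), fb = new HashMap<>();
        a.fingerprint(fa); b.fingerprint(fb);
        int shared = 0, total = 0;
        Set<String> keys = new HashSet<>(fa.keySet()); keys.addAll(fb.keySet());
        for (String k : keys) {
            int x = fa.getOrDefault(k, 0), y = fb.getOrDefault(k, 0);
            shared += Math.min(x, y); total += Math.max(x, y);
        }
        return total == 0 ? 1.0 : (double) shared / total;
    }

    public static void main(String[] args) {
        Node p1 = new Node("func", new Node("if_stmt", new Node("call"), new Node("return")));
        Node p2 = new Node("func", new Node("if_stmt", new Node("call"), new Node("return")));
        System.out.println("similarity = " + similarity(p1, p2)); // 1.0: identical structure
    }
}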

BOUNDARY
Confucius argued that "Education knows no boundaries." As a specialized learning management system, DICE plays the role of delivering general-purpose assignments and exercises. DICE checks programming exercises and conceptual knowledge in a specific topic area. DICE is designed as a stand-alone web service that supports education across space and time.

PRACTICE
Confucius claimed that "to learn and then have occasion to practice what you have learned, is this not satisfying?" DICE is treated as an augmented training approach that supports classical lecture-based teaching of computing. As a result, students put more effort into improving their programming skills through in-class practice.

SEQUENCE
Confucius argued for "improvement in proper sequence." We make use of Test-Driven Development (TDD), a method introduced in software engineering. The DICE TDD module classifies the training material into 16 levels, ranging from exploration to instruction. It helps DICE check a student's code in terms of both programming skill and conceptual knowledge of a specific topic.
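The abstract does not show how DICE checks code against the 16 TDD levels; in a test-driven grader the usual flow is to run the submission against one test suite per level and report how far the student gets. Since DICE's worked examples use C assignments (Chapter 4), the Java sketch below treats the submission as an already-compiled executable. The names TestCase and gradeTDD and the ./a.out path are hypothetical, not DICE's actual interface.

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.*;

/** One hypothetical black-box test: feed stdin, expect an exact stdout value. */
record TestCase(String input, String expected) {}

public class TddGraderSketch {
    /** Run the student's executable on one test and compare the trimmed output. */
    static boolean passes(String exePath, TestCase t) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(exePath).redirectErrorStream(true).start();
        try (OutputStream in = p.getOutputStream()) {
            in.write(t.input().getBytes(StandardCharsets.UTF_8));
        }
        String out = new String(p.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
        p.waitFor();
        return out.trim().equals(t.expected().trim());
    }

    /** Report how many consecutive TDD levels the submission passes completely. */
    static int gradeTDD(String exePath, List<List<TestCase>> levels) throws Exception {
        int reached = 0;
        for (List<TestCase> suite : levels) {
            for (TestCase t : suite) {
                if (!passes(exePath, t)) return reached;   // a failing test ends grading at the last fully-passed level
            }
            reached++;
        }
        return reached;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical usage: ./a.out is the compiled student submission.
        List<List<TestCase>> levels = List.of(
                List.of(new TestCase("1 2\n", "3")),        // level 1: basic I/O and arithmetic
                List.of(new TestCase("-5 5\n", "0")));      // level 2: an edge case
        System.out.println("Passed TDD levels: " + gradeTDD("./a.out", levels));
    }
}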

ADAPTATION
Confucius said, "Teach students in accordance with their aptitude." Our results indicate that Kolb learning style has a significantly different influence on learning outcomes under TDD versus non-TDD training. We therefore propose an adaptive learning model that assigns students different TDD training materials based on their individual differences.
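The abstract states only that the adaptive model assigns different TDD material based on individual differences; the concrete mapping from Kolb styles to material is not given here. As a purely hypothetical sketch, a selector might look like the following, where the enum values are the standard Kolb style names but the selection rule itself is an illustrative assumption, not the thesis's DALM rules.

/** The four styles of Kolb's learning style model. */
enum KolbStyle { DIVERGING, ASSIMILATING, CONVERGING, ACCOMMODATING }

/** Hypothetical adaptive selector mapping a learner's style to a TDD training mode. */
public class AdaptiveSketch {
    // Illustrative rule only; the actual DALM mapping is defined in the thesis, not here.
    static String chooseMaterial(KolbStyle style, boolean lowAchiever) {
        if (lowAchiever) {
            return "guided TDD mode";              // extra scaffolding for low-achieving students
        }
        if (style == KolbStyle.CONVERGING || style == KolbStyle.ACCOMMODATING) {
            return "exploratory TDD mode";         // hands-on, experimentation-oriented learners
        }
        return "instruction-first TDD mode";       // reflective, abstraction-oriented learners
    }

    public static void main(String[] args) {
        System.out.println(chooseMaterial(KolbStyle.CONVERGING, false));
    }
}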

We also developed the Typed Mind Map as a common data model and knowledge representation in DICE. DICE improves learning performance by up to 11% over a non-DICE approach, while DICE with the TDD model improves learning performance by a further 51% over the pure DICE approach.
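A mind map is essentially a rooted, labeled tree; a typed mind map additionally tags every node with a type, so one tree structure can hold concepts, test suites, and function parse trees side by side (Chapter 5 gives the formal definition). The class below is only a minimal sketch of that idea; the particular type names and the find helper are illustrative assumptions, not the thesis's definition.

import java.util.*;

/** Minimal sketch of a typed mind map node: a labeled tree node carrying a type tag. */
public class TmmNode {
    /** Illustrative node types; the thesis defines its own type system in Chapter 5. */
    enum Type { CONCEPT, TEST_SUITE, FUNCTION, PARSE_TREE }

    final Type type;
    final String label;
    final List<TmmNode> children = new ArrayList<>();

    TmmNode(Type type, String label) { this.type = type; this.label = label; }

    TmmNode add(TmmNode child) { children.add(child); return this; }

    /** Depth-first search for the first descendant of a given type (e.g. a test suite). */
    Optional<TmmNode> find(Type wanted) {
        if (type == wanted) return Optional.of(this);
        for (TmmNode c : children) {
            Optional<TmmNode> hit = c.find(wanted);
            if (hit.isPresent()) return hit;
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        TmmNode unit = new TmmNode(Type.CONCEPT, "ACM Q476")
                .add(new TmmNode(Type.TEST_SUITE, "level-1 tests"))
                .add(new TmmNode(Type.FUNCTION, "area()"));
        System.out.println(unit.find(Type.TEST_SUITE).map(n -> n.label).orElse("none"));
    }
}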
ABSTRACT ii
List of Tables v
List of Figures vi
1 Introduction 1
1.1 Motivation, Objectives and Significance 1
1.2 DICE, a parse-tree based online automatic programming assessment system (APAS) 3
1.3 Test Driven Development in DICE 4
1.4 Using Typed Mind Maps as a data model in DICE 5
1.5 Evaluation 5
1.6 Contribution 6
2 Literature Review and Background 8
2.1 Automatic Graders 8
2.1.1 Computer Augmented Learning Management Systems 8
2.1.2 Teaching Software Testing 10
2.1.3 Automatic Grading 11
2.1.4 First Generation - Early Assessment Systems: 13
2.1.5 Second Generation - Tool-Oriented System: 14
2.1.6 Third Generation - Web-Oriented Systems: 15
2.1.7 Automated Assessment Approaches for Programming Assignments 16
2.1.7.1 Dynamic analysis approach 16
2.1.7.2 Static analysis approach 18
2.2 Test Driven Development 24
2.2.1 History of TDD 25
2.2.2 Recently Research about TDD in Education 25
2.3 Learning Style 29
2.3.1 Definition of Learning Style theories 29
2.3.2 Kolb’s Learning Style Theory 30
2.4 Training Method 32
2.5 Mind Maps 33
3 Overview of DICE 35
3.1 Architecture 35
3.2 Making Parse Trees 40
3.3 Grading 42
3.4 Plagiarism detection 42
3.5 Implementation 43
3.6 Screenshots 45
3.6.1 Server Events 45
3.6.2 Server Side Current Events 45
3.6.3 Monitoring and Interacting on/with a Client 46
3.6.4 Client Side 47
3.6.5 Grading Result 48
3.6.6 Instructor on Client Side 48
3.6.7 DICE Server with Typed Mind Maps Representation 48
3.6.8 Judging Results with TDD Hints 49
3.6.9 Judging the Result Time of a Student’s Judge Thread 51
3.6.10 Judging Result with Plagiarism Detection 51
4 TDD model in DICE 53
4.1 TDD in an automatic grading system 54
4.1.1 A Thorough Example 54
4.1.2 Test-based data sets versus TDD data sets 55
4.1.3 TDD of programming skill vs. TDD of concept 56
4.2 An Example with C 62
4.2.1 The problem description 63
4.2.2 The test units 63
4.2.3 The sample test-driven code 64
4.2.4 An Implementation of DICE TDD 66
4.2.4.1 The TDD test suite node 66
4.2.4.2 The user interface of DICE TDD model 67
5 Using a Typed Mind Map as a Data Model in a TDD DICE System 70
5.1 Mind Map 70
5.1.1 Formal Definition of the Mind Map 71
5.1.2 Informal Definition of the Typed Mind Map 73
5.1.3 An Implementation of the Typed Mind Map 75
5.2 Using Typed Mind Map in DICE 76
5.2.1 Training Material Organization for DICE TDD model 77
5.2.1.1 Storage Policies 77
5.2.1.2 The DICE TDD data model using Typed Mind Maps. 79
5.2.2 A Widget for Evaluating Learner's Kolb Learning Style 81
5.2.3 Answer Parsing - Handling C code 83
5.2.3.1 Parsing C code to its Parse Tree into a TMM 85
5.2.3.2 Retrieving Functions from a C Code to a TMM 85
5.2.3.3 Converting C code to a TDD Test Suite 86
5.2.3.4 System Configuration 86
6 Evaluations 88
6.1 Evaluations of DICE TDD Learning Performance 88
6.1.1.1 DICE versus Non-DICE 88
6.1.1.2 Evaluation 88
6.1.1.3 Research Model 89
6.1.1.4 Experimental Design 90
6.1.1.5 Data Analysis 90
6.1.2 TDD versus Non-TDD 93
6.1.2.1 Evaluation 93
6.1.2.2 Research Model 94
6.1.2.3 Experimental Design 94
6.1.2.4 Data Analysis 95
6.1.2.5 Results 97
6.1.3 TDD versus Non-TDD with Kolb Learning Style 98
6.1.3.1 Evaluation 98
6.1.3.2 Research Model 98
6.1.3.3 Experimental Design 101
6.1.3.4 Data Analysis 101
6.1.4 Comparison with other TDD evaluations 104
6.1.5 Removing the TDD Roadblocks 106
6.2 Comparison with other APASs 106
7 Related work 110
7.1 DICE Adaptive Learning Model (DALM) 110
7.2 A Visual Lambda Calculator using Typed Mind Map 113
7.2.1 Motivating Example 114
7.2.1.1 VLM representation of the Y Combinator 116
7.2.1.2 Lambda Expression Terms. 116
7.2.1.3 Lambda Calculus Operations 118
7.2.2 The Lambda Calculus Flow in VLM 118
7.2.3 The Automatic Combinator Generator in VLM 119
7.2.4 The α-conversion and η-reduction 121
8 Conclusions and Future work 123
8.1 Future Work 123
8.1.1 About DICE 123
8.1.2 About the DICE TDD Model 123
8.1.3 About Typed Mind Maps 123
8.1.4 About DALM 124
8.2 Conclusion 125
REFERENCES 128
A-1: ACM Q476 138
A-2: An answer of Q476 140
A-3: Test suite TMM of Q476 which is generated by DICE 143
A-4: Function TMMs of Q476 which are generated by DICE 144
B-1: ACM Q477 145
B-2: An Answer of Question Q477 147
B-3: Test suite TMM of Q477 which is generated by DICE 150
B-4: Function TMMs of Q477 which are generated by DICE 151
C-1: Mind Maps of teaching unit ACM 152
C-2: Test Suite C1=Q476 152
C-3: Test Suite C2=Q477 153
H.1 Visual Lambda Expressions in VLM 159
H.2 Operations 163
H.3 The Well-Defined VLM Visual Lambda Expression 166