1. Aiken, A. (2007). A System for Detecting Software Plagiarism. [cited March 2009]. http://theory.stanford.edu/~aiken/moss/
2. Ala-Mutka, K., Uimonen, T., & Järvinen, H. M. (2004). Supporting Students in C++ Programming Courses with Automatic Program Style Assessment. Journal of Information Technology Education, 3.
3. Ala-Mutka, K. M. (2005, June). A Survey of Automated Assessment Approaches for Programming Assignments. Computer Science Education, 15, 83-102.
4. Allowatt, A., & Edwards, S. H. (2004). IDE Support for Test-Driven Development and Automated Grading in Both Java and C++. In Proc. Eclipse Technology Exchange (eTX) Workshop at OOPSLA, ACM, 100-104.
5. Ausubel, D. P. (1963). The Psychology of Meaningful Verbal Learning. New York: Grune and Stratton.
6. Badros, G. (2000). JavaML: An XML-based Source Code Representation for Java Programs. [cited March 2009]. http://badros.com/greg/JavaML/
7. Bandura, A. (1986). Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice-Hall.
8. Barriocanal, E., Urban, M., Cuevas, I., & Perez, P. (2003). An Experience in Integrating Automated Unit Testing Practices in an Introductory Programming Course. ACM SIGCSE Bulletin, 34(4), 125-128.
9. Bergin, S., & Reilly, R. (2005). Programming: Factors that Influence Success. In Proceedings of the 36th SIGCSE Technical Symposium on Computer Science Education, 411-415.
10. Bostrom, R. P. (1990). The Importance of Learning Style in End-User Training. MIS Quarterly, 14(1), 101-119.
11. Brenda, C., Andy, K., Lim, A., & Oon, W.-C. (2004). On Automated Grading of Programming Assignments in an Academic Institution. Computers and Education, 121-131.
12. Bruner, J. (1996). Toward a Theory of Instruction. New York: W. W. Norton.
13. Brynda, J. (1992). End User Training: Lend Me Your Ear, I Will Teach You a PC Package. Computing Canada, 18(1), 48.
14. Buzan, T. (1991). The Mind Map Book: Mind Mapping Guidelines. New York: Penguin.
15. Byrne, P., & Lyons, G. (2001). The Effect of Student Attributes on Success in Programming. In ITiCSE: Proceedings of the 6th Annual Conference on Innovation and Technology in Computer Science Education, ACM Press, 49-52.
16. Wadsworth, C. P. (1971). Semantics and Pragmatics of the Lambda Calculus. D.Phil. Thesis, Oxford University.
17. Carter, J., English, J., Ala-Mutka, K., Dick, M., Fone, W., Fuller, U., & Sheard, J. (2003). How Shall We Assess This? ACM SIGCSE Bulletin, 35(4), 107-123.
18. Champollion, L., Tauberer, J., & Romero, M. (2007, July). The Penn Lambda Calculator: Pedagogical Software for Natural Language Semantics. In T. Holloway King & E. M. Bender (Eds.), Proceedings of the Grammar Engineering across Frameworks (GEAF) Workshop.
19. Chen, P. (2004). An Automated Feedback System for Computer Organization Projects. IEEE Transactions on Education, 47, 232-240.
20. Desai, C., Janzen, D., & Savage, K. (2008). A Survey of Evidence for Test-Driven Development in Academia. SIGCSE Bulletin, 40(2).
21. Chou, H. W., & Wang, W. B. (2000). The Influence of Learning Style and Training Method on Self-Efficacy and Learning Performance in WWW Homepage Design Training. International Journal of Information Management, 20(6), 455-472.
22. Christopher, D., David, L., & James, O. (2005, September). Automatic Test-Based Assessment of Programming: A Review. ACM Journal of Educational Resources in Computing, 5(3), Article 4.
23. Jones, C. G. (2004). Test-Driven Development Goes to School. Journal of Computing Sciences in Colleges, 20(1), 220-231.
24. Cross, K. P. (1981). Adults as Learners: Increasing Participation and Facilitating Learning. San Francisco, CA: Jossey-Bass.
25. Janzen, D. S., & Saiedian, H. (2006, March). Test-Driven Learning: Intrinsic Integration of Testing into the CS/SE Curriculum. SIGCSE '06, Houston, Texas, USA.
26. Davis, F. D., & Yi, M. Y. (2004). Improving Computer Skill Training: Behavior Modeling, Symbolic Mental Rehearsal, and the Role of Knowledge Structures. Journal of Applied Psychology, 89(3), 509-523.
27. Davis, S. D. (1989). Training Novice Users of Computer Systems: The Roles of the Computer Interface, Training Methods and Learner Characteristics. Doctoral dissertation, Indiana University, Bloomington, IN.
28. Deitel, H. M., & Deitel, P. J. (2004). C How to Program (4th ed.). Prentice Hall.
29. Polivaev, D. (2008). FreeMind - Free Mind Mapping Software. FreeMind Official Homepage & Wiki. [cited March 2009]. http://freemind.sourceforge.net/wiki/index.php/Main_Page
30. Colton, D., Fife, L., & Thompson, A. (2006). A Web-based Automatic Program Grader. In Proc. ISECON 2006, v23.
31. Dunn, R. (2000). Capitalizing on College Students' Learning Styles: Theory, Practice, and Research. In Practical Approaches to Using Learning Style in Higher Education.
32. Jones, E. L. (2001). Grading Student Programs - A Software Testing Approach. Journal of Computing in Small Colleges, 16(2), 185-192.
33. Edwards, S. H. (2003). Improving Student Performance by Evaluating How Well Students Test Their Own Programs. Journal of Educational Resources in Computing, 3(2), 1-24.
34. Edwards, S. H. (2003). Teaching Software Testing: Automatic Grading Meets Test-First Coding. In Proceedings of the OOPSLA '03 Conference, poster presentation, 318-319.
35. Edwards, S. H. (2004). Using Software Testing to Move Students from Trial-and-Error to Reflection-in-Action. ACM SIGCSE Bulletin, 36(1), 26-30.
36. Ellsworth, C., Fenwick, J., & Kurtz, B. (2004). The Quiver System. In Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education, USA, 205-209.
37. English, J. (2004). Automatic Assessment of GUI Programs Using JEWL. In Proceedings of the 9th Annual Conference on Innovation and Technology in Computer Science Education, UK, 137-141.
38. Erdogmus, H., Morisio, M., & Torchiano, M. (2005, March). On the Effectiveness of the Test-First Approach to Programming. IEEE Transactions on Software Engineering, 31(3), 226-237.
39. Pitt, E., & McNiff, K. (2001). java.rmi: The Guide to Remote Method Invocation. Boston, MA: Addison-Wesley.
40. Foxley, E., Higgins, C., & Gibbon, C. (1996). The Ceilidh System: A General Overview. [cited March 2009]. http://www.cs.nott.ac.uk/~cmp/more_info/html/Overview96.htm
41. Foxley, E. (1999). Ceilidh Documentation on the World Wide Web. [cited March 2009]. http://www.cs.nott.ac.uk/~ceilidh/papers.html
42. Gagnon, E., & Hendren, L. J. (1998). SableCC - An Object-Oriented Compiler Framework. In Proceedings of TOOLS 26: Technology of Object-Oriented Languages.
43. Glaser, R. (1966). Variables in Discovery Learning. In L. S. Shulman & E. R. Keislar (Eds.), Learning by Discovery: A Critical Appraisal. Chicago: Rand McNally.
44. Goldwasser, M. H. (2002). A Gimmick to Integrate Software Testing Throughout the Curriculum. In Proc. 33rd SIGCSE Technical Symposium on Computer Science Education, ACM Press, 271-275.
45. Goold, A., & Rimmer, R. (2000). Factors Affecting Performance in First-Year Computing. SIGCSE Bulletin, 32(2), 39-43.
46. Granville, A. (2002). Detecting Plagiarism in Java Code. University of Sheffield. [cited March 2009]. http://www.dcs.shef.ac.uk/intranet/teaching/projects/archive/ug2002/pdf/u9arg.pdf
47. Gruber, T. R. (1993). A Translation Approach to Portable Ontology Specification. Knowledge Acquisition, 5, 199-220.
48. Barendregt, H. (1992). Lambda Calculi with Types. In Handbook of Logic in Computer Science. Oxford University Press.
49. Hansen, H., & Ruuska, M. (2003). Assessing Time-Efficiency in a Course on Data Structures and Algorithms. In Proceedings of the 3rd Annual Finnish/Baltic Sea Conference on Computer Science Education, Finland.
50. Higgins, C., Hergazy, T., Symeonidis, P., & Tsintsifas, A. (2003). The CourseMarker CBA System: Improvements over Ceilidh. Education and Information Technologies, 8, 287-304.
51. Hilburn, T. B., & Towhidnejad, M. (2000). Software Quality: A Curriculum Postscript? In Proc. 31st SIGCSE Technical Symposium on Computer Science Education, ACM Press, 167-171.
52. Hine, N., Rentoul, R., & Specht, M. (2004). Collaboration and Roles in Remote Field Trips. In J. Attewell & C. Savill-Smith (Eds.), Learning with Mobile Devices: Research and Development 2004. London, UK: Learning and Skills Development Agency, 69-72.
53. Isong, J. (2001). Developing an Automated Program Checker. Journal of Computing in Small Colleges, 16(3), 218-224.
54. Jackson, D., & Usher, M. (1997). Grading Student Programs Using ASSYST. In Proceedings of the 28th SIGCSE Technical Symposium on Computer Science Education, USA, 335-339.
55. Jackson, D., & Usher, M. (1997). Grading Student Programs Using ASSYST. In Proc. 28th SIGCSE Technical Symposium on Computer Science Education, ACM Press, 335-339.
56. Janzen, D., & Saiedian, H. (2007). A Leveled Examination of Test-Driven Development Acceptance. In Proc. 29th International Conference on Software Engineering (ICSE), 719-722.
57. Janzen, D., & Saiedian, H. (2005). Test-Driven Development: Concepts, Taxonomy, and Future Direction. IEEE Computer, 38(9), 43-50.
58. Janzen, D., & Saiedian, H. (2006). Test-Driven Learning: Intrinsic Integration of Testing into the CS/SE Curriculum. In Proc. 37th SIGCSE Technical Symposium on Computer Science Education, 254-258.
59. Janzen, D., & Saiedian, H. (2008). Test-Driven Learning in Early Programming Courses. In Proc. 39th SIGCSE Technical Symposium on Computer Science Education. ACM.
60. Jeffries, R., & Melnik, G. (2007). TDD: The Art of Fearless Programming. IEEE Software, 24(3), 24-30.
61. Budd, J. W. (2004). Mind Maps as Classroom Exercise. Journal of Economic Education.
62. Jones, E. L. (2000a). Software Testing in the Computer Science Curriculum - A Holistic Approach. In Proc. Australasian Computing Education Conference, ACM Press, 153-157.
63. Jones, E. L. (2000b). SPRAE: A Framework for Teaching Software Testing in the Undergraduate Curriculum. In Proc. ADMI 2000, Hampton, VA, 1-4 June 2000.
64. Jones, E. L. (2001a). Integrating Testing into the Curriculum - Arsenic in Small Doses. In Proc. 32nd SIGCSE Technical Symposium on Computer Science Education, ACM Press, 337-341.
65. Jones, E. L. (2001b). An Experiential Approach to Incorporating Software Testing into the Computer Science Curriculum. In Proc. 2001 Frontiers in Education Conference (FIE 2001), F3D7-F3D11.
66. Jones, E. L. (2000). Grading Student Programs - A Software Testing Approach. Journal of Computing in Small Colleges, 16(2), 185-192.
67. Jones, C. (2004). Test-Driven Development Goes to School. Journal of Computing Sciences in Colleges, 20(1), 220-231.
68. Joy, M., & Luck, M. (1999). Plagiarism in Programming Assignments. IEEE Transactions on Education, 42(1), 129-133.
69. Joy, M., & Luck, M. (1998). The BOSS System for On-line Submission and Assessment. [cited 2007]. http://www.ulster.ac.uk/cticomp/joy.html
70. Kaufmann, R., & Janzen, D. (2003). Implications of Test-Driven Development: A Pilot Study. In Companion of the 18th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, 298-299.
71. Keefe, K., Sheard, J., & Dick, M. (2006). Adopting XP Practices for Teaching Object Oriented Programming. In Proc. 8th Australasian Conference on Computing Education, vol. 52, 91-100.
72. Rahman, K. A., & Nordin, M. J. (2007). A Review on the Static Analysis Approach in the Automated Programming Assessment Systems. In National Conference on Programming 07, 5 December 2007, Kuala Lumpur, Malaysia.
73. Kolb, D. A., & Fry, R. (1975). Toward an Applied Theory of Experiential Learning. In G. L. Cooper (Ed.), Theories of Group Processes. New York, NY: John Wiley and Sons, 33-54.
74. Kolb, D. A. (1976). The Learning Style Inventory Technical Manual. Boston, MA: McBer and Company.
75. Kolb, D. A., & Fry, R. (1995). Toward an Applied Theory of Experiential Learning. In G. L. Cooper (Ed.), Theories of Group Processes. New York, NY: John Wiley and Sons, 33-54.
76. Kolb, A. Y., & Kolb, D. A. (2005). The Kolb Learning Style Inventory - Version 3.1: 2005 Technical Specifications. Boston, MA: Hay Resource Direct.
77. Thomas, L., Ratcliffe, M., Woodbury, J., & Jarman, E. (2002). Learning Styles and Performance in the Introductory Programming Sequence. In Proceedings of the 33rd SIGCSE Technical Symposium on Computer Science Education, ACM.
78. Luck, M., & Joy, M. (1999). A Secure On-line Submission System. Software: Practice and Experience, 29(8), 721-740.
79. Erwig, M. (1998). Abstract Syntax and Semantics of Visual Languages. Journal of Visual Languages and Computing, 9(5), 461-483.
80. Sleep, M. R., Plasmeijer, M. J., & van Eekelen, M. C. J. D. (1993). Term Graph Rewriting: Theory and Practice. Chichester: Wiley.
81. Malmi, L., Karavirta, V., Korhonen, A., Nikander, J., Seppälä, O., & Silvasti, P. (2004). Visual Algorithm Simulation Exercise System with Automatic Assessment: TRAKLA2. Informatics in Education, 3(2), 267-288.
82. Marini Abu Bakar, Norleyza Jailani, & Sufian Idris. (2002). Pengaturcaraan C [C Programming]. Kuala Lumpur: Prentice Hall.
83. McCabe, T. J. (1976). A Complexity Measure. IEEE Transactions on Software Engineering, SE-2(4), 308-320.
84. McCauley, R., Archer, C., Dale, N., Mili, R., Robergé, J., & Taylor, H. (1995). The Effective Integration of the Software Engineering Principles Throughout the Undergraduate Computer Science Curriculum. In Proc. 26th SIGCSE Technical Symposium on Computer Science Education, ACM Press, 364-365.
85. McCauley, R., Dale, N., Hilburn, T., Mengel, S., & Murrill, B. W. (2000). The Assimilation of Software Engineering into the Undergraduate Computer Science Curriculum. In Proc. 31st SIGCSE Technical Symposium on Computer Science Education, ACM Press, 423-424.
86. McGuinness, D. L., & van Harmelen, F. (Eds.). OWL Web Ontology Language Overview. World Wide Web Consortium (W3C) Recommendation. [cited March 2009]. http://www.w3.org/TR/owl-features
87. McQuain, W. (2004). Curator: An Electronic Submission Management Environment. Web page last updated July 15. http://courses.cs.vt.edu/curator/
88. Melnik, G., & Maurer, F. (2005). A Cross-Program Investigation of Students' Perceptions of Agile Methods. In Proc. 27th International Conference on Software Engineering (ICSE), 481-488.
89. Mengel, S. A., & Yerramilli, V. (1999). A Case Study of the Static Analysis of the Quality of Novice Student Programs. In Proc. 30th SIGCSE Technical Symposium on Computer Science Education, ACM, 78-82.
90. Jeronimo, M., & Weast, J. UPnP Design by Example: A Software Developer's Guide to Universal Plug and Play. Intel Press. ISBN 0-9717861-1-9.
91. Mitrovic, A. (1998). Learning SQL with a Computerized Tutor. SIGCSE Bulletin, 30(1), 307-311.
92. Morris, D. (2003). Automatic Grading of Student's Programming Assignments: An Interactive Process and Suite of Programs. In Proceedings of the 33rd ASEE/IEEE Frontiers in Education Conference, S3F-1-S3F-5.
93. Muller, M., & Hagner, O. (2002). Experiment About Test-First Programming. IEE Proceedings - Software, 149(5), 131-136.
94. Muller, M., & Tichy, W. (2001). Case Study: Extreme Programming in a University Environment. In Proc. 23rd International Conference on Software Engineering (ICSE), 537-544.
95. Naps, T. L. (2005). JHAVÉ: Addressing the Need to Support Algorithm Visualization with Tools for Active Engagement. IEEE Computer Graphics and Applications, 25(5), 49-55.
96. Norman, D. A. (1983). Some Observations on Mental Models. In D. Gentner & A. L. Stevens (Eds.), Mental Models. Hillsdale, NJ: Lawrence Erlbaum Associates.
97. Norshuhani Zamin, Emy Elyanee Mustapha, Savita K. Sugathan, Mazlina Mehat, Ellia, & Anuar. (2006). Development of a Web-Based Automated Grading System for Programming Assignments Using Static Analysis Approach. Paper presented at the International Conference on Electrical and Informatics, Bandung, Indonesia.
98. Pancur, M., Ciglaric, M., Trampus, M., & Vidmar, T. (2003). Towards Empirical Evaluation of Test-Driven Development in a University Environment. In IEEE Region 8 Proc. EUROCON, vol. 2, 83-86.
99. Parlante, N. (2009). JavaBat Java Practice Problems. [cited March 2009]. http://javabat.com
100. Pawlak, Z. (1991). Rough Sets: Theoretical Aspects of Reasoning About Data. Dordrecht: Kluwer Academic Publishing. ISBN 0-7923-1472-7.
101. PCMag.com. (2009). Software Metrics. [cited March 2009]. http://www.pcmag.com/encyclopedia_term/0,2542,t=software+metrics&i=51690,00.asp
102. Prechelt, L., Malpohl, G., & Philippsen, M. (2000). Finding Plagiarisms Among a Set of Programs with JPlag. Journal of Universal Computer Science.
103. Pressman, R. S. (2000). Software Engineering: A Practitioner's Approach. McGraw Hill.
104. Reek, K. A. (1989). The TRY System - or - How to Avoid Testing Student Programs. In Proceedings of the 20th SIGCSE Technical Symposium on Computer Science Education, USA, 112-116.
105. Reek, K. A. (1996). A Software Infrastructure to Support Introductory Computer Science Courses. In Proc. 27th SIGCSE Technical Symposium on Computer Science Education, ACM Press, 125-129.
106. Rintala, M. (2002). Tutnew Memory Management Library. Retrieved November 15, 2004, from http://www.cs.tut.fi/~bitti/tutnew/english/
107. Robins, A., Rountree, J., & Rountree, N. (2003). Learning and Teaching Programming: A Review and Discussion. Computer Science Education, 13(2), 137-172.
108. Rößling, G., Malmi, L., Clancy, M., Joy, M., Kerren, A., Korhonen, A., Moreno, A., Naps, T., Oechsle, R., Radenski, A., Ross, R., & Velázquez-Iturbide, A. (2008). Enhancing Learning Management Systems to Better Support Computer Science Education. SIGCSE Bulletin, 40(4), 142-166.
109. Rohaida Romli, & Mawarny Rejab. (2006). Penyukatan Automatik Kekompleksan Tugasan Aturcara Java [Automatic Measurement of Java Program Task Complexity]. Paper presented at the National ICT Conference, Universiti Teknologi MARA, Arau, Perlis, Malaysia.
110. Ross, R. (2008). Hypertextbooks and a Hypertextbook Authoring System. In Proceedings of the 13th Conference on Innovation and Technology in Computer Science Education, Madrid, Spain. ACM Press, New York, NY, 133-137.
111. Rößling, G., & Hartte, S. (2008). WebTasks: Online Programming Exercises Made Easy. In Proceedings of the 13th Conference on Innovation and Technology in Computer Science Education, Madrid, Spain. ACM Press, New York, NY, 363.
112. Saikkonen, R., Malmi, L., & Korhonen, A. (2001). Fully Automatic Assessment of Programming Exercises. Paper presented at ITiCSE 2001.
113. Sampson. (2002, June). Accommodating Learning Styles in Adaptation Logics for Personalised Learning Systems. In 14th World Conference on Educational Multimedia, Hypermedia and Telecommunications (ED-MEDIA 02), Denver, Colorado, USA, 24-29.
114. Sandra, F., Greg, M., & Nils, T. (1997). Automatic Assessment of Elementary Standard ML Programs Using Ceilidh. Journal of Computer Assisted Learning, 13, 99-108.
115. Schorsch, T. (1995). CAP: An Automated Self-Assessment Tool to Check Pascal Programs for Syntax, Logic and Style Errors. Paper presented at SIGCSE '95, Nashville.
116. Shepard, T., Lamb, M., & Kelly, D. (2001). More Testing Should Be Taught. Communications of the ACM, 44(6), 103-108.
117. Simon, S. J. (2000). The Relationship of Learning Style and Training Method to End-User Computer Satisfaction and Computer Use: A Structural Equation Model. Information Technology, Learning, and Performance Journal, 18(1), 41-59.
118. Slowinski, R. (Ed.). (1992). Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory. Dordrecht: Kluwer Academic Publishers.
119. Snow, R. E. (1986, October). Individual Differences in the Design of Educational Programs. American Psychologist, 41(10), 1029-1039.
120. Snow, R. E. (1991). Aptitude-Treatment Interaction as a Framework for Research on Individual Differences in Psychotherapy. Journal of Consulting and Clinical Psychology, 59(2), 205-216.
121. Spacco, J., & Pugh, W. (2006). Helping Students Appreciate Test-Driven Development (TDD). In Companion to the 21st ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), 907-913.
122. Edwards, S. H. (2003). Improving Student Performance by Evaluating How Well Students Test Their Own Programs. ACM Journal of Educational Resources in Computing, 3(3), Article 1.
123. Edwards, S. H., & Pérez-Quiñones, M. A. (2007). Experiences Using Test-Driven Development with an Automated Grader. Journal of Computing Sciences in Colleges, 22(3).
124. Susan, B., & Ronan, R. (2005). Programming: Factors that Influence Success. In Proceedings of the 36th SIGCSE Technical Symposium on Computer Science Education.
125. Symeonidis, P. (1998). An In-depth Review of CourseMaster's Marking Subsystem.
126. Buzan, T. (2001). Use Your Head. BBC Books.
127. Taba, H. (1963). Learning by Discovery: Psychological and Educational Rationale. The Elementary School Journal, 63(6), 308-316.
128. Tony, J. (2001). The Motivation of Students of Programming. ACM SIGCSE Bulletin, 33(3), 53-56.
129. Trætteberg, H., & Aalberg, T. (2006). JExercise: A Specification-Based and Test-Driven Exercise Support Plugin for Eclipse. In Proceedings of the 2006 OOPSLA Workshop on Eclipse Technology eXchange, Portland, Oregon, USA. ACM Press, New York, NY, 70-74.
130. Truong, N., Roe, P., & Bancroft, P. (2004). Static Analysis of Students' Java Programs. Paper presented at the 6th Australasian Computing Education Conference (ACE2004), Dunedin, New Zealand.
131. Truong, N., Roe, P., & Bancroft, P. (2005). Automated Feedback for "Fill in the Gap" Programming Exercises. Paper presented at the Australasian Computing Education Conference, Newcastle, Australia.
132. Citrin, W., Hall, R., & Zorn, B. (1995). Programming with Visual Expressions. In IEEE Symposium on Visual Languages, Darmstadt, Germany, 294-301.
133. Wells, J. B., Layne, B. H., & Allen, D. (1991). Management Development Training and Learning Styles. Public Productivity and Management, 14(4), 415-428.
134. Whale, G. (1986). Detection of Plagiarism in Student Programs. Paper presented at the 9th Australian Computer Science Conference, Canberra.
135. Wise, M. J. (1993). String Similarity via Greedy String Tiling and Running Karp-Rabin Matching. [cited March 2009]. http://www.pam1.bcs.uwa.edu.au/~michaelw/ftp/doc/RKR_GST.ps
136. Yenduri, S., & Perkins, L. (2006). Impact of Using Test-Driven Development: A Case Study. Software Engineering Research and Practice, 126-129.
137. Mengel, S. A., & Yerramilli, V. (1999). A Case Study of the Static Analysis of the Quality of Novice Student Programs. Paper presented at SIGCSE '99.
138. Ziarko, W. (1991). The Discovery, Analysis and Representation of Data Dependencies in Databases. In G. Piatetsky-Shapiro & W. J. Frawley (Eds.), Knowledge Discovery in Databases. AAAI Press/MIT Press, 177-195.