An evaluation of semi-automated, collaborative marking and feedback systems: Academic staff perspectives

  • Steven Burrows RMIT University
  • Mark Shortis RMIT University

Abstract

Online marking and feedback systems are critical for providing timely and accurate feedback to students and for maintaining the integrity of results in large class teaching. Previous investigations have involved much in-house development, and more consideration is needed for deploying or customising off-the-shelf solutions. Furthermore, keeping up to date with the state of the art in both academia and industry is essential. This paper is motivated by a project aiming to identify a marking and feedback system for deployment at the authors' university. A detailed investigation is described which is open-minded towards adopting or modifying an existing product, or implementing a new solution, with key features and shortcomings described in detail. Moodle Workshops, Turnitin GradeMark, Waypoint and WebMark were shortlisted and carried forward for user analysis testing. The outcomes have not only provided key conclusions concerning the suitability of existing solutions, but also resulted in a comprehensive collection of functional requirements that leaders of new projects should consider. This paper should be of interest to anyone considering the adoption or upgrade of a marking and feedback system at their home institution.

Author Biographies

Steven Burrows, RMIT University
School of Computer Science and Information Technology, RMIT University

Mark Shortis, RMIT University
School of Mathematical and Geospatial Sciences, RMIT University

Published
2011-11-27
How to Cite
Burrows, S., & Shortis, M. (2011). An evaluation of semi-automated, collaborative marking and feedback systems: Academic staff perspectives. Australasian Journal of Educational Technology, 27(7). https://doi.org/10.14742/ajet.909