E2.2.2

E-learning design and (re)development procedures include explicit evaluation phases assessing the quality and effectiveness of e-learning.

Evidence
Attwell (2006) provides a comprehensive guide, with examples of tools that can be used in the evaluation of e-learning programs. Attwell emphasizes the need for iteration between theory and practice: courses must be redesigned in an ongoing manner in response to evaluation findings. The high initial costs of implementing e-learning programs also make ongoing evaluation important. Attwell further notes that a large portion of the evaluation literature on e-learning is descriptive rather than analytic or predictive; there are surprisingly few robust studies comparing e-learning with traditional learning, and equally few return-on-investment studies. There is concern that e-learning is sometimes not succeeding as expected, hence the need for evaluation and refinement. Attwell also points to the ongoing need for dialogue between technical developers and evaluators.

Resources
In a case study on managing improvement in e-learning, Ellis et al. (2007) describe a model for reviewing a university's course development and teaching processes. The model incorporates reflection, in particular cyclical periods of reflection in which initial outcomes are compared with subsequent outcomes, which encourages understanding of the processes and allows improvements to emerge.

Ravitz and Hoadley (2005) propose a systematic approach to reviewing e-learning as professional development: ‘this model of systematic review…holds the potential to change feedback systems among stakeholder groups in online resource development’ (p. 968).

Ellis et al. (2007) note that any model used for the quality assurance of learning in higher education must have a theoretical base. The research underpinning such a model could align with evidence-based, student-centred approaches to learning, including constructivism.

Evidence of capability in this practice is seen in the inclusion of a formal student evaluation plan in the design and development of projects and courses. Such a plan should provide for multiple formal evaluations, both formative and summative, conducted in a standard way that allows results to be compared between projects and over time.

Ellis et al. (2007) discuss when to undertake evaluations, offering a five-stage guide, and describe a student-focused strategy for evaluating e-learning. They provide a number of tables and diagrams describing the stages at which the evaluation process can act and the sorts of data that might be collected, including data on integration, course websites, users, and support. ‘Dissemination of the implications of the evaluations is necessary if improvement is to occur’ (p. 7).

http://opq.monash.edu.au/index.html

http://www.qaa.ac.uk/reviews/institutionalAudit/outcomes/OutcomesStudentRep.asp

http://www.rmit.edu.au/browse;ID=9pp3ic9obks7