E2.4.1

Evaluation results are reported regularly in a manner that allows for comparison of the educational effectiveness of e-learning initiatives.

Evidence
Quality dimensions need to be validated and refined by primary research.

To improve e-learning outcomes it is important to learn from past mistakes. Ehrmann (2002) argues that tracking progress is necessary not only to stay on course but also to identify solvable problems that can attract fresh resources (p. 55).

Although meta-analysis research suggests that online learning is more effective than face-to-face learning, and that blended learning is more effective still (Means et al. 2009), there is an ongoing need to monitor the use of e-learning and ICTs for course delivery because there is as yet no consensus about what constitutes quality e-learning (Usoro & Abid 2008). These authors state that ‘effective quality strategies, initiatives and tools are very important for convincing lecturers and other stakeholders to adopt e-learning’ (p. 80). Kidney et al. (2007) believe that ‘a quality online course would be the direct result of a course creation process that included quality assurance strategies’ (p. 18).

The results of these evaluations should be used to inform ongoing and new development, and to support decisions about resourcing and strategy.

The importance of sharing feedback information is emphasised by Ravitz and Hoadley (2005), who propose a systematic approach to reviewing e-learning as professional development: ‘this model of systematic review…holds the potential to change feedback systems among stakeholder groups in online resource development’ (p. 968).

Attwell (2006) further notes that a large portion of the evaluation literature on e-learning focuses on descriptive rather than analytic or predictive studies; there are surprisingly few robust studies comparing e-learning with traditional learning.

Resources
Ellis et al. (2007) discuss when to undertake evaluations (a five-stage guide) and describe a student-focused strategy for evaluating e-learning. They provide a number of tables and diagrams outlining the stages at which evaluation can take place and the sorts of data that might be collected, for example on integration, course websites, users, and support. ‘Dissemination of the implications of the evaluations is necessary if improvement is to occur’ (p. 7).