E3.2.2

Regular reviews are conducted formally as part of the normal procedures for delivering courses using e-learning technologies and pedagogies.

Evidence
The dependence of e-learning on appropriate pedagogy and well-designed technology means that, when assessing the success of courses and projects, the effectiveness of the technology must also be formally measured. Evidence of success or of limitations in the local context is an important factor in ensuring the efficient design and development of existing and new courses and projects.

Validation of e-learning processes and resources is a significant stage in the full cycle of organisational learning that describes success in terms of ‘student performance, student satisfaction, staff experience, and cost effectiveness, as judged in relation to the original intentions’ (Salmon, 2000, p. 236). Salmon discusses validating as one of six activities in the iterative process of creating an effective learning organisation infrastructure that enables ‘the system to learn about itself’ (p. 237).

To improve e-learning outcomes it is important to learn from past mistakes, according to Ehrmann (2002), who argues that tracking progress is not only necessary to stay on course but also to identify solvable problems that can attract fresh resources (p. 55).

Ravitz and Hoadley (2005) discuss links between systematic review and professional development, identifying needs for stakeholders that include quality resources for teachers; reliable programmes for policymakers and evaluators; and refined, appropriately distributed tools for developers. ‘These issues map onto three ongoing and related challenges: (1) professional development or training for using online resources, (2) evaluation of resources for purposes of research and development, and (3) dissemination and reuse of knowledge and practices related to knowledge management and metadata’ (p. 958).

Ravitz and Hoadley’s proposal for a systematic review approach aims for a more collaborative and cumulative understanding of e-learning facilities and resources. They argue that the complex e-learning environment calls for stakeholders to continually learn about and share experiences and understandings: ‘analysis of resources must include not just consideration of basic qualities of web design, but also awareness of the structures and processes that provide opportunities for teacher and student learning, and consideration of artifacts of resource use such as examples of student work, project ideas, lesson plans or rubrics’ (p. 959).

An integrated approach to evaluating e-learning is important for improving quality and effectiveness and for verifying design assumptions (Bastiaens et al., 2004). Bastiaens et al. discuss the need for a simultaneous multi-level evaluation approach that incorporates reactions to learning experiences, learning process results, learning performance changes, and organisational results. They comment that a four-level evaluation is unnecessary for every event, but recommend that reactions are considered when implementing new learning events (p. 197).

Attwell (2006) notes that much of the evaluation literature on e-learning is descriptive rather than analytic or predictive. There are surprisingly few robust studies comparing e-learning with traditional learning, and equally few return-on-investment studies. There is concern that e-learning has sometimes not succeeded in the way that was expected, hence the need for evaluation and refinement. Attwell points out the ongoing need for technical developers and evaluators to engage in dialogue, and identifies five areas of evaluation: individual learner variables, learning environment variables, contextual variables, technology variables, and pedagogic variables. When evaluating e-learning there is a need for interpretative and analytical studies rather than merely descriptive ethnographic ones.

Attwell describes the context in which managers of e-learning operate: ‘Managers… are having to make decisions about the introduction and use of e-learning when e-learning itself is still in a stage of rapid evolution and instability. Major paradigm shifts are taking place in the pedagogical thinking underpinning e-learning, new ideas and policies are emerging on how e-learning should be developed and financed and there are continuing advances in information and communication technologies. It is in this context that managers are having to make decisions about investing in e-learning and one in which the consequences of making the wrong decisions are increasingly costly’ (p. 40).

Resources
Attwell (2006) describes a tool for the evaluation of e-learning based on four types of evaluation (context, input, process, and product), derived from Stufflebeam’s CIPP model (http://www.wmich.edu/evalctr/checklists/cippchecklist.htm).

Ellis et al. (2007) discuss when to undertake evaluations (a five-stage guide) and describe a student-focused strategy for the evaluation of e-learning. They provide a number of tables and diagrams describing the stages the evaluation process can act upon and the sorts of data that might be collected; data can be collected on integration, course websites, users, and support. ‘Dissemination of the implications of the evaluations is necessary if improvement is to occur’ (p. 7).

In a case study on managing improvement in e-learning, Ellis et al. (2007) describe a model for reviewing a university’s course development and teaching processes that incorporates reflection, especially cyclical periods of reflection in which initial outcomes are compared with subsequent outcomes. This encourages understanding and development of the processes and allows improvements to emerge.

http://net.educause.edu/ir/library/pdf/EQM0635.pdf

Ehlers (2009) on web 2.0: E-learning 1.0 follows a broadcasting logic, whereas e-learning 2.0 emphasizes participation. An LMS is no longer used as an island of materials; rather, it should be seen as a gateway to the web. The risk of simply placing all material on a common LMS is that it becomes a ‘data grave’ (Kerres, 2006). Because most learning takes place in informal settings (talking to colleagues, or browsing subject-related literature), informal learning needs to be embraced by learning institutions. Given this situation, developing quality needs to focus on learning processes and demonstrated achievements rather than on specific learning materials. Ehlers (2009) identifies the conditions and subjects of quality assessment in web 2.0 e-learning settings: self-evaluation, reflection and peer-evaluation become more important, and quality review needs to take account of active participation, learner reflection and support for this process, individually developed learner products, validation of social communication processes, and individual e-portfolios rather than just LMS materials.

Ehlers (2009) notes that another way to promote the quality of learning materials is through social recommendation mechanisms, in which users tag material they find especially useful. Ranking learning material on the basis of learners’ evaluations could help track quality.
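The ranking idea above can be illustrated with a minimal sketch: learners submit ratings for materials, and materials are ordered by their mean rating. All names and data here are illustrative assumptions, not part of any system described by Ehlers.

```python
# Hypothetical sketch of ranking learning materials by learner ratings.
# Function and data names are illustrative assumptions only.
from collections import defaultdict

def rank_materials(ratings):
    """Rank material IDs by mean learner rating, highest first.

    ratings: list of (material_id, score) pairs, e.g. score on a 1-5 scale.
    """
    totals = defaultdict(lambda: [0, 0])  # material_id -> [sum, count]
    for material_id, score in ratings:
        totals[material_id][0] += score
        totals[material_id][1] += 1
    averages = {m: s / c for m, (s, c) in totals.items()}
    return sorted(averages, key=averages.get, reverse=True)

# Example: three learner ratings across two materials.
ratings = [("intro-video", 5), ("intro-video", 4), ("quiz-1", 3)]
print(rank_materials(ratings))  # ["intro-video", "quiz-1"]
```

A production mechanism would also need to handle sparse ratings (few reviews per item), but the averaging-and-sorting core shown here is the essence of the approach.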