E3.2.3

=E-learning design and (re)development procedures include a formal plan for assessing the success of new technologies or pedagogies. =

Evidence
The dependence of e-learning on appropriate pedagogy and well-designed technology means that, when assessing the success of courses and projects, the effectiveness of the technology must also be formally measured. Evidence of success or limitations in the local context is an important factor in ensuring the efficient design and development of existing and new courses and projects.

Validation of e-learning processes and resources is a significant stage in the full cycle of organisational learning that describes success in terms of ‘student performance, student satisfaction, staff experience, and cost effectiveness, as judged in relation to the original intentions’ (Salmon, 2000, p. 236). Salmon discusses validation as one of six activities in the iterative process of creating an effective learning organisation infrastructure that enables ‘the system to learn about itself’ (p. 237).

Improving e-learning outcomes requires learning from past mistakes, according to Ehrmann (2002), who argues that tracking progress is necessary not only to stay on course but also to identify solvable problems that can attract fresh resources (p. 55).

An integrated approach to evaluating e-learning is important for improving quality and effectiveness and for verifying design assumptions (Bastiaens et al., 2004). Bastiaens et al. discuss the need for a multi-level, simultaneous evaluation approach that incorporates reactions to learning experiences, learning process results, learning performance changes, and organisational results. They comment that a four-level evaluation is unnecessary for every event, but recommend that reactions be considered when implementing new learning events (p. 197).
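The four levels above can be sketched as a simple record structure. This is a hypothetical illustration of how an institution might capture them consistently, not an implementation from Bastiaens et al.; the class and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EventEvaluation:
    """One learning event, evaluated at up to four levels."""
    # Level 1: learner reactions to the learning experience
    # (collected even for new learning events)
    reactions: float
    # Level 2: learning process results (e.g. assessment scores)
    learning_results: Optional[float] = None
    # Level 3: changes in learning performance
    performance_change: Optional[float] = None
    # Level 4: organisational results
    organisational_results: Optional[float] = None

    def levels_evaluated(self) -> int:
        """Count how many of the four levels were actually measured."""
        return sum(
            value is not None
            for value in (self.reactions, self.learning_results,
                          self.performance_change, self.organisational_results)
        )

# A new learning event: only reactions are gathered, reflecting the
# recommendation that full four-level evaluation is not needed every time.
new_event = EventEvaluation(reactions=4.2)
print(new_event.levels_evaluated())  # 1

# A mature course evaluated at all four levels.
mature_course = EventEvaluation(4.5, 0.78, 0.12, 0.05)
print(mature_course.levels_evaluated())  # 4
```

Recording which levels were measured makes it explicit, over time, where an institution's evaluation coverage is thin.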

Attwell (2006) notes that a large portion of the evaluation literature on e-learning focuses on descriptive rather than analytic or predictive studies. There are surprisingly few robust studies comparing e-learning with traditional learning, and few return-on-investment studies. There is concern that e-learning is sometimes not succeeding in the way that had been expected, hence the need for evaluation and refinement. Attwell points out the ongoing need for technical developers and evaluators to engage in dialogue, and identifies five areas of evaluation: individual learner variables, learning environment variables, contextual variables, technology variables, and pedagogic variables. When evaluating e-learning there is a need to undertake interpretative and analytical studies rather than merely descriptive ethnographic studies.

Attwell describes the context that managers of e-learning are operating in: ‘Managers… are having to make decisions about the introduction and use of e-learning when e-learning itself is still in a stage of rapid evolution and instability. Major paradigm shifts are taking place in the pedagogical thinking underpinning e-learning, new ideas and policies are emerging on how e-learning should be developed and financed and there are continuing advances in information and communication technologies. It is in this context that managers are having to make decisions about investing in e-learning and one in which the consequences of making the wrong decisions are increasingly costly’ (p. 40).

Resources
Attwell (2006) describes a tool for the evaluation of e-learning based on four types of evaluation (context, input, process, and product), derived from Stufflebeam’s CIPP model (http://www.wmich.edu/evalctr/checklists/cippchecklist.htm).

In a case study on managing improvement in e-learning, Ellis et al. (2007) describe a model for reviewing a university’s course development and teaching processes, including reflection (especially cyclical periods of reflection in which initial outcomes are compared with subsequent outcomes). This encourages understanding of the processes and allows improvements to emerge.

http://net.educause.edu/ir/library/pdf/EQM0635.pdf

Ehlers (2009) on Web 2.0: e-learning 1.0 follows a broadcasting logic, while e-learning 2.0 emphasizes participation. An LMS is no longer used as an isolated island of materials; rather, it should be seen as a gateway to the web. The risk of simply placing all the material on a common LMS is that it will become a ‘data grave’ (Kerres, 2006). Because most learning takes place in informal settings (talking to colleagues, or browsing subject-related literature), informal learning needs to be embraced by learning institutions. Given this situation, developing quality needs to focus on learning processes and demonstrated achievements rather than specific learning materials. Ehlers identifies the conditions and subjects of quality assessment in Web 2.0 e-learning settings: self-evaluation, reflection and peer-evaluation become more important. Quality review needs to take account of active participation, learner reflection and support of this process, individually developed learner products, validation of social communication processes, and individual e-portfolios rather than just LMS materials.

Ehlers (2009) notes that another method of promoting the quality of learning materials is through social recommendation mechanisms, in which users tag the material they find especially useful. Ranking learning materials based on learners’ evaluations could then be used to track quality.
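A minimal sketch of such a ranking mechanism, assuming learners rate materials on a 1–5 scale; the function and item names are illustrative assumptions, not from Ehlers. A Bayesian average is used so that one item with a single high rating does not outrank an item with many consistently good ratings.

```python
from collections import defaultdict

def rank_materials(ratings, prior_mean=3.0, prior_weight=5):
    """Rank learning materials by learner ratings.

    ratings: list of (material_id, score) pairs.
    Each item's score is a Bayesian average: its ratings are blended
    with `prior_weight` virtual ratings at `prior_mean`, so sparsely
    rated items stay close to the neutral midpoint.
    """
    by_item = defaultdict(list)
    for item, score in ratings:
        by_item[item].append(score)
    averaged = {
        item: (sum(scores) + prior_mean * prior_weight)
              / (len(scores) + prior_weight)
        for item, scores in by_item.items()
    }
    # Highest blended score first.
    return sorted(averaged, key=averaged.get, reverse=True)

ratings = [
    ("intro-video", 5), ("intro-video", 4), ("intro-video", 5),
    ("quiz-module", 5),            # a single perfect rating
    ("lecture-notes", 2), ("lecture-notes", 3),
]
print(rank_materials(ratings))
# ['intro-video', 'quiz-module', 'lecture-notes']
```

The design choice to damp sparsely rated items matters in a learning context, where a new resource rated once by an enthusiastic learner should not immediately displace well-established materials.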