E3.1.2

=Reviews of course e-learning teaching activities are conducted regularly.=

Evidence
In addition to the evaluations of projects and courses (processes E1 and E2), there is a range of other data available through the standard technologies in use, such as LMSs, that can be effectively used to assess the impact a given use of technology is having on students. This data, while limited in some respects, has the advantage of being comparatively easy to collect, empirical in nature and independent of many aspects of opinion and bias that can complicate other evaluations (Bates and Poole, 2003). Similarly, while it can be challenging to do so accurately, costings and comparisons with alternative delivery approaches are essential for effective management of e-learning (Inglis, 2003; Jung, 2003).

Attwell (2006) describes the context in which managers of e-learning operate: ‘Managers… are having to make decisions about the introduction and use of e-learning when e-learning itself is still in a stage of rapid evolution and instability. Major paradigm shifts are taking place in the pedagogical thinking underpinning e-learning, new ideas and policies are emerging on how e-learning should be developed and financed and there are continuing advances in information and communication technologies. It is in this context that managers are having to make decisions about investing in e-learning and one in which the consequences of making the wrong decisions are increasingly costly’ (p. 40).

Ellis et al. (2007) note that any model used for the quality assurance of learning in higher education must have a theoretical base. The research underpinning such a model could align with evidence-based, student-centred approaches to learning, including constructivism.

Resources
In a case study on managing improvement in e-learning, Ellis et al. (2007) describe a model for reviewing a university’s course development and teaching processes that includes reflection (especially cyclical periods of reflection in which initial outcomes are compared with subsequent outcomes). This encourages understanding of the processes and allows improvements to emerge.

An integrated approach to evaluating e-learning is important for improving quality and effectiveness and verifying design assumptions (Bastiaens et al., 2004). Bastiaens et al. discuss the need for a multi-level, simultaneous evaluation approach that incorporates reactions to learning experiences, learning process results, learning performance changes, and organisational results. They comment that a four-level evaluation is unnecessary for every event, but recommend that reactions be considered when implementing new learning events (p. 197).

Attwell (2006) identifies different approaches to evaluation. We can take an objectivist ‘finding the facts’ approach or a subjectivist ‘appeal to experience’. There are also utilitarian approaches, which attempt to maximise the average good for all, and pluralist/intuitionist approaches, which have no common index of ‘good’ but rather a plurality of criteria and judges. Whichever approach is taken, consistency is needed so that findings can be compared over time.

Attwell provides a comprehensive guide, with examples of tools, for the evaluation of e-learning programs, and emphasises the need for iteration between theory and practice: courses must be redesigned in an ongoing manner according to evaluation findings.

Attwell further notes that much of the evaluation literature on e-learning is descriptive rather than analytic or predictive. There are surprisingly few robust studies comparing e-learning with traditional learning, and similarly few return-on-investment studies. There is concern that e-learning is sometimes not succeeding in the way that had been expected, hence the need for evaluation and refinement. Attwell points out the ongoing need for dialogue between technical developers and evaluators, and identifies five areas of evaluation: individual learner variables, learning environment variables, contextual variables, technology variables and pedagogic variables. When evaluating e-learning there is a need to undertake interpretative and analytical studies rather than merely descriptive ethnographic ones.

Attwell (2006) describes a tool for the evaluation of e-learning based on four types of evaluation (context, input, process and product), derived from Stufflebeam’s CIPP model (http://www.wmich.edu/evalctr/checklists/cippchecklist.htm).