E3.1.1

Reviews of course e-learning materials are conducted regularly.

Evidence
Reviewing and evaluating the effectiveness of courses and projects is important to ensure that they meet the needs of the institution and its programmes. Regular review of the materials ensures that they continue to meet the objectives of the students, the course and the wider programme context, and that the online materials referenced remain appropriate and available.

Because e-learning depends on appropriate pedagogy and well-designed technology, assessments of the success of courses and projects must also formally measure the effectiveness of the technology. Evidence of success or limitations in the local context is an important factor in the efficient design and development of both existing and new courses and projects.

In addition to the evaluations of projects and courses (processes E1 and E2), a range of other data is available through the standard technologies in use, such as LMSs, that can be used to assess the impact a given use of technology is having on students. This data, while limited in some respects, has the advantage of being comparatively easy to collect, empirical in nature, and independent of many of the opinions and biases that can complicate other evaluations (Bates and Poole, 2003). Similarly, while it can be challenging to do so accurately, costings and comparisons with alternative delivery approaches are essential for effective management of e-learning (Inglis, 2003; Jung, 2003).
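
As an illustration only, the sketch below shows the kind of simple empirical analysis such LMS data supports. It assumes a hypothetical CSV export of LMS activity; the file name and column names (logins, forum_posts, final_grade) are invented for the example, and real exports vary by system. It computes Pearson correlations between activity measures and achievement.

    import csv
    import math

    def pearson(xs, ys):
        """Pearson correlation between two equal-length numeric series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical LMS activity export; real column names vary by system.
    logins, posts, grades = [], [], []
    with open("lms_activity_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            logins.append(float(row["logins"]))
            posts.append(float(row["forum_posts"]))
            grades.append(float(row["final_grade"]))

    print(f"logins vs final grade:      r = {pearson(logins, grades):+.2f}")
    print(f"forum posts vs final grade: r = {pearson(posts, grades):+.2f}")

Figures of this kind are easy to produce and to compare across course offerings, but, as noted above, they describe association rather than cause and should supplement, not replace, the evaluations of processes E1 and E2.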

Attwell (2006) describes the context that managers of e-learning are operating in: ‘Managers… are having to make decisions about the introduction and use of e-learning when e-learning itself is still in a stage of rapid evolution and instability. Major paradigm shifts are taking place in the pedagogical thinking underpinning e-learning, new ideas and policies are emerging on how e-learning should be developed and financed and there are continuing advances in information and communication technologies. It is in this context that managers are having to make decisions about investing in e-learning and one in which the consequences of making the wrong decisions are increasingly costly’ (p. 40).

Ellis et al. (2007) note that any model used for the quality assurance of learning in higher education must have a theoretical base. The research underpinning such a model could align with evidence-based, student-centred approaches to learning, including constructivism.

Delivery of e-learning courses is a multi-faceted activity and quality cannot be measured on any one dimension. Quality frameworks are conceptual structures that guide decisions in relation to quality. Judgements of quality are made against a set of criteria, but the criteria must be valid, and ongoing validation is necessary to ensure that the criteria remain appropriate. Inglis (2008) describes a range of quality frameworks, provides a tabulated review of the literature on them, and discusses the pros and cons of various schemas for validating such frameworks.

Quality frameworks are needed to monitor and evaluate e-learning, but the frameworks themselves must be valid. Inglis describes seven examples of quality frameworks and identifies six methods of validation: reviews of the research literature, use of expert panels, empirical research, survey research, pilot projects, and case studies. Inglis concludes that a recognized set of procedures for validating quality frameworks has not yet emerged; ongoing empirical work, management and optimization are therefore necessary when delivering e-learning. ‘In the end, the validity of a particular approach to measuring quality needs to be subject to continual review’ (p. 361).

Resources
Ravitz and Hoadley’s (2005) proposal for a systematic review approach aims for a more collaborative and cumulative understanding of e-learning facilities and resources. They argue that the complex e-learning environment calls for stakeholders to continually learn about and share experiences and understandings: ‘analysis of resources must include not just consideration of basic qualities of web design, but also awareness of the structures and processes that provide opportunities for teacher and student learning, and consideration of artifacts of resource use such as examples of student work, project ideas, lesson plans or rubrics’ (p. 959).

An integrated approach to evaluating e-learning is important for improving quality and effectiveness and for verifying design assumptions (Bastiaens et al., 2004). Bastiaens et al. discuss the need for a multi-level, simultaneous evaluation approach that incorporates reactions to learning experiences, learning process results, learning performance changes, and organisational results. They comment that a four-level evaluation is unnecessary for every event, but recommend that reactions are considered whenever a new learning event is implemented (p. 197).
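
A minimal sketch of how these four levels might be recorded side by side; the class and field names are invented for illustration, with only the level names taken from the paragraph above:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LearningEventEvaluation:
        """One record covering the four evaluation levels.

        Only reactions are gathered for every event, in line with the
        recommendation above; the other levels are filled in when measured.
        """
        event_id: str
        reactions: float                              # e.g. mean satisfaction, 1-5 scale
        learning_results: Optional[float] = None      # e.g. mean post-test score
        performance_change: Optional[float] = None    # e.g. observed skill gain
        organisational_results: Optional[str] = None  # e.g. narrative outcome

    # A newly introduced learning event where only reactions were collected.
    new_event = LearningEventEvaluation(event_id="intro-wiki-activity", reactions=4.1)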

Attwell (2006) shows that there are different approaches to evaluation. We can take an objectivist ‘finding the facts’ approach or a subjectivist ‘appeal to experience’. There are also utilitarian approaches that attempt to maximize the average good for all, and pluralist/intuitionist approaches with no common index of ‘good’ but rather a plurality of criteria and judges. Whichever approach is chosen, it must be applied consistently so that findings can be compared over time.

Attwell provides a comprehensive guide, with examples of tools that can be used in the evaluation of e-learning programmes, and emphasizes the need for iteration between theory and practice: courses must be redesigned in an ongoing manner according to evaluation findings.

Attwell further notes that a large portion of the evaluation literature on e-learning focuses on descriptive rather than analytic or predictive studies. There are surprisingly few robust studies comparing e-learning with traditional learning, and surprisingly few return-on-investment studies. There is concern that e-learning is sometimes not succeeding in the way that had been expected, hence the need for evaluation and refinement. Attwell points out the ongoing need for technical developers and evaluators to engage in dialogue, and identifies five areas of evaluation: individual learner variables, learning environment variables, contextual variables, technology variables, and pedagogic variables. There is a need to undertake interpretative and analytic studies, rather than merely descriptive ethnographic studies, when evaluating e-learning.

Attwell (2006) also describes a tool for the evaluation of e-learning based on four types of evaluation: context, input, process and product (derived from Stufflebeam’s CIPP model, http://www.wmich.edu/evalctr/checklists/cippchecklist.htm).
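
As an illustration, the four CIPP evaluation types could be organised as a simple checklist; the prompt questions below are a hypothetical summary written for this example, not Stufflebeam’s own wording (see the linked checklist for that):

    # Hypothetical prompts summarising the four CIPP evaluation types.
    CIPP_CHECKLIST = {
        "context": "What needs, problems and opportunities does the course address?",
        "input":   "Are the chosen strategy, budget and technology adequate to the plan?",
        "process": "Is delivery proceeding as designed, and how is it being adapted?",
        "product": "What outcomes were achieved, and were they worth the cost?",
    }

    for stage, question in CIPP_CHECKLIST.items():
        print(f"{stage.upper():>8}: {question}")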

http://www.educause.edu/EDUCAUSE+Quarterly/EDUCAUSEQuarterlyMagazineVolum/EstablishingaQualityReviewforO/157414

http://net.educause.edu/ir/library/pdf/EQM0635.pdf