E3.1.3

= Reviews of student outcomes from courses are conducted regularly. =

Evidence
Ravitz and Hoadley’s (2005) proposal for a systematic review approach aims for a more collaborative and cumulative understanding of e-learning facilities and resources. They argue that the complex e-learning environment calls for stakeholders to continually learn about and share experiences and understandings: ‘analysis of resources must include not just consideration of basic qualities of web design, but also awareness of the structures and processes that provide opportunities for teacher and student learning, and consideration of artifacts of resource use such as examples of student work, project ideas, lesson plans or rubrics’ (p. 959).

Attwell (2006) shows that there are different approaches to evaluation. We can take an objectivist ‘finding the facts’ approach or a subjectivist ‘appeal to experience’. There are also utilitarian approaches that attempt to maximize the average good for all, and pluralist/intuitionist approaches with no common index of ‘good’ but rather a plurality of criteria and judges. Whichever approach is chosen, it must be applied consistently so that findings can be compared over time.

Attwell (2006) provides a comprehensive guide, with examples of tools that can be used in the evaluation of e-learning programs. Attwell emphasizes the need for iteration between theory and practice: courses must be redesigned in an ongoing manner according to evaluation findings.

Attwell further notes that a large portion of the evaluation literature on e-learning focuses on descriptive rather than analytic or predictive studies. There are surprisingly few robust studies comparing e-learning with traditional learning, and surprisingly few return-on-investment studies. There is concern that e-learning is sometimes not succeeding in the way that had been expected, hence the need for evaluation and refinement. Attwell points out the ongoing need for technical developers and evaluators to engage in dialogue, and identifies five areas of evaluation: individual learner variables, learning environment variables, contextual variables, technology variables and pedagogic variables. When evaluating e-learning there is a need to undertake interpretative and analytical studies rather than merely descriptive ethnographic studies.
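
As a purely illustrative sketch of how Attwell’s five areas could be turned into a working review template, the snippet below attaches a notes field to each area; the indicator examples, identifiers and function names are assumptions made for illustration, not items taken from Attwell (2006).

 # Illustrative sketch only: Attwell's five evaluation areas as a simple
 # review template. The indicator examples are hypothetical, not drawn
 # from Attwell (2006).
 EVALUATION_AREAS = {
     "individual_learner": "e.g. prior e-learning experience, motivation",
     "learning_environment": "e.g. access to equipment and study time",
     "contextual": "e.g. institutional and workplace support",
     "technology": "e.g. platform reliability and usability",
     "pedagogic": "e.g. alignment of tasks, feedback and assessment",
 }
 
 def new_review(course_id):
     """Create an empty evaluation record with a notes field per area."""
     return {"course": course_id,
             "findings": {area: "" for area in EVALUATION_AREAS}}
 
 review = new_review("BIO101-2024")
 review["findings"]["pedagogic"] = "Feedback turnaround too slow for weekly tasks."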

Attwell describes the context that managers of e-learning are operating in: ‘Managers… are having to make decisions about the introduction and use of e-learning when e-learning itself is still in a stage of rapid evolution and instability. Major paradigm shifts are taking place in the pedagogical thinking underpinning e-learning, new ideas and policies are emerging on how e-learning should be developed and financed and there are continuing advances in information and communication technologies. It is in this context that managers are having to make decisions about investing in e-learning and one in which the consequences of making the wrong decisions are increasingly costly’ (2006, p. 40).

Resources
Ehlers (2009) examines the role of Web 2.0 in e-learning. E-learning 1.0 follows a broadcasting logic; e-learning 2.0 emphasizes participation. An LMS is no longer used as an island of material, but should instead be seen as a gateway to the wider web. The risk of simply placing all the material on a common LMS is that it will become a ‘data grave’ (Kerres 2006). Because most learning takes place in informal settings (talking to colleagues, or browsing subject-related literature), informal learning needs to be embraced by learning institutions. Given this situation, quality development needs to focus on learning processes and demonstrated student achievements as well as on specific learning materials (as in E3.1.1). Ehlers identifies the conditions and subjects of quality assessment in Web 2.0 e-learning settings. Self-evaluation, reflection and peer evaluation become more important. Quality review needs to take account of active student participation, learner reflection and the support of this process, individually developed learner products, validation of social communication processes, and individual e-portfolios rather than just LMS materials.
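
As a rough sketch of what this wider review scope might involve in practice, the snippet below tallies per-learner evidence of participation, reflection, learner-produced artefacts and peer review rather than counting LMS materials alone; the event kinds and data shapes are assumptions for illustration, not drawn from Ehlers (2009).

 # Hypothetical sketch: summarise per-learner evidence of participation,
 # reflection, learner-produced artefacts and peer review, rather than
 # reviewing LMS materials alone. Data shapes are invented for illustration.
 from collections import Counter
 
 def evidence_summary(events):
     """events: list of (learner_id, kind) tuples, where kind is e.g.
     'forum_post', 'reflection', 'portfolio_entry' or 'peer_review'."""
     summary = {}
     for learner, kind in events:
         summary.setdefault(learner, Counter())[kind] += 1
     return summary
 
 events = [("s1", "forum_post"), ("s1", "reflection"),
           ("s2", "portfolio_entry"), ("s1", "peer_review")]
 print(evidence_summary(events))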

Ehlers (2009) notes that another method of promoting the quality of learning materials is through social recommendation mechanisms, in which users tag the material they find especially useful. Ranking learning materials on the basis of learners’ evaluations could then be used to track quality.
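
One minimal way such a ranking could be computed is sketched below: materials are ordered by an average rating damped towards a neutral prior, so that items with only one or two votes do not dominate the list. The formula, constants and material names are illustrative assumptions; Ehlers (2009) does not prescribe a particular ranking method.

 # Hypothetical sketch of ranking learning materials by learner ratings.
 # The damping constant and prior are illustrative choices, not from the sources.
 def damped_average(ratings, prior=3.0, weight=5):
     """Average rating pulled towards a neutral prior when there are few votes."""
     return (sum(ratings) + prior * weight) / (len(ratings) + weight)
 
 def rank_materials(ratings_by_material):
     """ratings_by_material: dict of material id -> list of ratings (1-5)."""
     return sorted(ratings_by_material,
                   key=lambda m: damped_average(ratings_by_material[m]),
                   reverse=True)
 
 ratings = {"lecture_3_video": [5, 4, 5, 4], "week_2_quiz": [5], "reading_list": [2, 3, 2]}
 print(rank_materials(ratings))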

Attwell (2006) describes a tool for the evaluation of e-learning based on four types of evaluation: context, input, process and product (derived from Stufflebeam’s CIPP model, http://www.wmich.edu/evalctr/checklists/cippchecklist.htm ).
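
The four CIPP headings lend themselves to a very simple record format. The sketch below is a hypothetical illustration only; it is neither Attwell’s tool nor the checklist published at the URL above, and the field names and example entries are assumptions.

 # Hypothetical sketch: recording e-learning evaluation findings under the
 # four CIPP headings. This is not the checklist published at the URL above.
 CIPP_STAGES = ("context", "input", "process", "product")
 
 def cipp_record(course_id, **findings):
     """Build a findings record, rejecting keys outside the four CIPP stages."""
     unknown = set(findings) - set(CIPP_STAGES)
     if unknown:
         raise ValueError(f"Not a CIPP stage: {unknown}")
     return {"course": course_id, **{s: findings.get(s, "") for s in CIPP_STAGES}}
 
 record = cipp_record("BIO101-2024",
                      context="Mixed cohort of campus and distance students.",
                      product="Assessment outcomes compared with the previous offering.")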