EMM v2.3 E3

'''E3. Regular reviews of the e-learning aspects of courses are conducted'''

Background
There is an ongoing need to monitor the use of e-learning and ICTs for course delivery because there is as yet no consensus about what constitutes quality e-learning (Usoro & Abid 2008). These authors state that, ‘effective quality strategies, initiatives and tools are very important for convincing lecturers and other stakeholders to adopt e-learning’ (p. 80).

The dependence of e-learning on the use of an appropriate pedagogy and well-designed technology means that when assessing the success of courses and projects it is very important to ensure that the effectiveness of the technology is also formally measured. Evidence of success or limitations in the local context is an important factor in ensuring the efficient design and development of existing and new courses and projects.

Attwell (2006) describes the context that managers of e-learning are operating in: ‘Managers… are having to make decisions about the introduction and use of e-learning when e-learning itself is still in a stage of rapid evolution and instability. Major paradigm shifts are taking place in the pedagogical thinking underpinning e-learning, new ideas and policies are emerging on how e-learning should be developed and financed and there are continuing advances in information and communication technologies. It is in this context that managers are having to make decisions about investing in e-learning and one in which the consequences of making the wrong decisions are increasingly costly’ (p. 40).

Ravitz and Hoadley’s (2005) proposal for a systematic review approach aims for a collaborative and cumulative understanding of e-learning facilities and resources. They argue that the complex e-learning environment calls for stakeholders to continually learn about and share experiences and understandings: ‘analysis of resources must include not just consideration of basic qualities of web design, but also awareness of the structures and processes that provide opportunities for teacher and student learning, and consideration of artifacts of resource use such as examples of student work, project ideas, lesson plans or rubrics’ (p. 959).

Evidence of capability in this process is seen in formal data collection processes that are incorporated into design and development and that allow for regular reporting and analysis of the effectiveness of the technologies used. These processes should be standards based and designed to support comparisons over time and between courses and projects. Policy should require the collection and reporting of this information, and the results should be used to inform ongoing and new development and to support resources and strategy.

Formal content and materials review plans should be used during the design and development of projects and courses. Policy and guidelines should require that these reviews be conducted formally and should provide guidance on what aspects require checking.

An important factor to be conscious of in this area is that the impacts of technology on student satisfaction and on student learning need to be evaluated separately, as they are linked but distinct. Similarly, staff satisfaction may not be related to the effectiveness of the technologies or innovations deployed.
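The kind of standards-based aggregation described above could be sketched as follows. This is an illustrative example only, not part of the EMM itself: the record fields, scales, and function names are all hypothetical, chosen to show how keeping technology effectiveness, student satisfaction, and student learning as distinct measures supports comparison over time and between courses.

```python
from dataclasses import dataclass
from statistics import mean
from collections import defaultdict

@dataclass
class ReviewRecord:
    """One formal course review data point (all fields illustrative)."""
    course: str
    term: str                    # e.g. "2023-S1", supports comparison over time
    tech_effectiveness: float    # measured effectiveness of the technology (1-5)
    student_satisfaction: float  # kept separate: linked to, but distinct from, learning
    student_learning: float      # e.g. normalised assessment outcome (0-1)

def summarise(records):
    """Aggregate each measure per (course, term) so results are comparable
    across courses and over time, while the three measures stay distinct."""
    groups = defaultdict(list)
    for r in records:
        groups[(r.course, r.term)].append(r)
    return {
        key: {
            "tech_effectiveness": mean(r.tech_effectiveness for r in rs),
            "student_satisfaction": mean(r.student_satisfaction for r in rs),
            "student_learning": mean(r.student_learning for r in rs),
        }
        for key, rs in groups.items()
    }
```

Reporting the three measures side by side, rather than as a single combined score, reflects the point above that technology may affect satisfaction and learning differently.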

Related Guidelines and Standards
This process is informed by: the course support benchmark set of Quality On the Line: Benchmarks for success in internet-based distance education (Merisotis & Phipps, 2000); the Canadian Recommended E-learning Guidelines (Barker, 2002); and Balancing quality and access: Principles of good practice for electronically offered academic degree and certificate programs (Western Cooperative for Educational Telecommunications, 2003).