D2.4.1

= Staff use of e-learning procedures and standards during e-learning design and (re)development is regularly monitored. =

Evidence
To improve e-learning outcomes it is important to learn from past mistakes. Ehrmann (2002) argues that tracking progress is necessary not only to stay on course but also to identify solvable problems that can attract fresh resources (p. 55). The results of monitoring should be used to inform ongoing and new development, and to support decisions about resources and strategy. Information on performance can be a tool for improving quality, but only if that information is disseminated.

Such validation of e-learning practices and resources is a significant stage in the full cycle of organisational learning that describes success in terms of ‘student performance, student satisfaction, staff experience, and cost effectiveness, as judged in relation to the original intentions’ (Salmon, 2000, p. 236). Salmon discusses validating as one of six activities in the iterative process of creating an effective learning organisation infrastructure that enables ‘the system to learn about itself’ (p. 237).

Kidney, Cummings & Boehm (2007) describe the quality assurance strategies in place for e-learning courses at the University of Houston-Clear Lake. Courses that made use of all eight of their QA strategies reported high student satisfaction, while those that did not use all the strategies rated, on average, one or more standard deviations below the mean satisfaction score.

Often course design and instruction processes are so intense that those involved lose sight of the big picture. Periodic phases of reflection are necessary to combat this and ensure that linkages to institutional objectives and strategic plans are in place.

To perform quality assurance and staff reviews, the university added quality assurance evaluator positions to its instructional technology team. Because some faculty teach online courses before they have been reviewed, and some online courses are developed through informal processes and so miss the quality review phase, it is desirable to undertake peer review in a timely fashion. These authors recommend peer review of course design, content, rigor and andragogy to ensure compliance with the eight quality assurance processes and to maintain academic rigor. Peer reviewers could come from within the university or outside it. Reviews can cover accuracy, completeness, scope, prerequisites, objectives, fit within the specific degree, and level of rigor. However, there was faculty resistance to this approach, and Kidney et al. explain how this resistance may be defused (table 3, p. 27).

Two forms of ‘Team Review’ are undertaken. First, there is an Instructional Plan Review, which examines the course objectives, materials and assessments. Second, there is a Web Design Review, which helps identify problems with the instructional design, functionality or feasibility. These two review milestones have had several additional benefits, including better project management, identifying professional development needs, ensuring diversity, and identifying strengths and weaknesses in the quality assurance processes themselves.