
E-learning design and (re)development activities are subject to formal quality assurance reviews at key milestones.

Evidence
There is an ongoing need to monitor the use of e-learning and ICTs for course delivery because there is as yet no consensus about what constitutes quality e-learning (Usoro & Abid, 2008). These authors state that 'effective quality strategies, initiatives and tools are very important for convincing lecturers and other stakeholders to adopt e-learning' (p. 80). Kidney et al. (2007) believe that 'a quality online course would be the direct result of a course creation process that included quality assurance strategies' (p. 18).

Validation of e-learning processes and resources is a significant stage in the full cycle of organisational learning that describes success in terms of ‘student performance, student satisfaction, staff experience, and cost effectiveness, as judged in relation to the original intentions’ (Salmon, 2000, p. 236). Salmon discusses validating as one of six activities in the iterative process of creating an effective learning organisation infrastructure that enables ‘the system to learn about itself’ (p. 237).

Mansvelt et al. (2009) conclude that ‘institutions that are serious about quality should assess and evaluate professional development needs and desires of staff recognising and reconciling competing agendas of staff’ (p. 245).

Resources
In a case study on managing improvement in e-learning, Ellis et al. (2007) describe a model for reviewing a university's course development and teaching processes that incorporates reflection, especially cyclical periods of reflection in which initial outcomes are compared with subsequent outcomes. This encourages understanding of the processes and allows improvements to emerge.

Ellis et al. (2007) note that any model used for the quality assurance of learning in higher education must have a theoretical base. The research underpinning such a model could align with evidence-based, student-centred approaches to learning, including constructivism.

Ellis et al. (2007) provide a five-stage guide to when evaluations should be undertaken, and describe a student-focused strategy for evaluating e-learning. Their tables and diagrams illustrate the stages at which the evaluation process can act and the sorts of data that might be collected, including data on integration, course websites, users, and support. 'Dissemination of the implications of the evaluations is necessary if improvement is to occur' (p. 7).


Seale (2006) acknowledges that 'auditing e-learning is one of the most difficult actions'; it is a task that can never be entirely faultless or seamless.

Sloan & Walker (2008) discuss the evaluation of accessibility, suggesting a methodology that combines automated and manual evaluation techniques with usability reviews involving disabled evaluators. They also provide a table of key findings from a review they conducted, which exposes issues surrounding the creation of accessible content. Having undertaken this review process, they suggest that a publicly available pool of accessibility reviews of authoring tools could support organisations in selecting the tool that best fits their needs.