E3.3.1

=Institutional standards are defined for the regular review of the e-learning aspects of courses.=

Evidence
Because e-learning depends on both appropriate pedagogy and well-designed technology, assessments of the success of courses and projects must also formally measure the effectiveness of the technology. Evidence of success or limitations in the local context is an important factor in ensuring the efficient design and development of both existing and new courses and projects.

An integrated approach to evaluating e-learning is important for improving quality and effectiveness and for verifying design assumptions (Bastiaens et al., 2004). Bastiaens et al. discuss the need for a multi-level, simultaneous evaluation approach that incorporates reactions to learning experiences, learning process results, learning performance changes, and organisational results. They note that a four-level evaluation is unnecessary for every event, but recommend that reactions be considered when implementing new learning events (p. 197).

Resources
In a case study on managing improvement in e-learning, Ellis et al. (2007) describe a model for reviewing a university's course development and teaching processes that incorporates reflection, especially cyclical periods of reflection in which initial outcomes are compared with subsequent outcomes. This encourages understanding and development of the processes and allows improvements to emerge.

Quality issues are a concern for Barbera (2004), who identifies six qualitative dimensions for evaluation: the educational scenario; participants’ teaching and learning purposes; instructional agents’ roles; patterns of interaction; educational instruments; and knowledge-building factors (p. 18). Indicators are ascribed to subdimensions of each dimension, which also enables quantitative results to be discerned from the observations.

Ellis et al. (2007) discuss when to undertake evaluations (a five-stage guide). They also describe a student-focused strategy for evaluating e-learning, providing a number of tables and diagrams that describe the stages the evaluation process can act upon and the sorts of data that might be collected. Data can be collected on integration, course websites, users, and support.

http://net.educause.edu/ir/library/pdf/EQM0635.pdf

Delivery of e-courses is a multi-faceted activity, and quality cannot be measured on any one dimension. Quality frameworks are conceptual structures that guide decisions in relation to quality. Judgements of quality are made against a set of criteria, but the criteria must be valid, and ongoing validation is necessary to ensure that the criteria remain appropriate. Inglis (2008) describes various quality frameworks, provides a tabulated review of the literature on these frameworks, and discusses the pros and cons of various validation schemes for them.

E-learning needs to be monitored and evaluated using quality frameworks, but these frameworks must themselves be valid. Inglis (2008) describes seven examples of quality frameworks and identifies six methods of validation: reviews of the research literature, use of expert panels, empirical research, survey research, pilot projects, and case studies. Inglis concludes that a recognised set of procedures for validating quality frameworks has not yet emerged. It therefore seems that ongoing empirical work, together with management and optimisation procedures, is necessary when delivering e-learning: ‘In the end, the validity of a particular approach to measuring quality needs to be subject to continual review’ (p. 361).