E3.3.3

=Institutional standards are defined for assessing new e-learning technologies and pedagogies.=

Evidence
Standards and guidelines define the quality of teaching, build consensus about the process of developing a course, and help staff learn good practice. They can help to ensure that resources are used well, and they can serve as a checklist when evaluating online learning (Milne & White 2005). However, quality standards and guidelines must not become more important than outcomes, and, as Meyer (2003) reiterates, some guidelines are not based on research.

Inglis (2005) notes that guidelines should be developed in consultation with all stakeholders. Guidelines alone may not ensure quality (Meyer 2003), but should be used in conjunction with a number of sources that give evidence of quality within and specific to the organization and its expectations.

Marshall (2004a) discusses both interoperability standards and resource discovery standards. Interoperability standards are important so that different systems can communicate and share data, such as student information; resource discovery standards are important so that items can be stored and reused.

Because e-learning depends on appropriate pedagogy and well-designed technology, assessing the success of courses and projects must include formally measuring the effectiveness of the technology. Evidence of success or limitations in the local context is an important factor in ensuring the efficient design and development of existing and new courses and projects.

Resources
Milne & White (2005) collect twenty-three sets of e-learning quality guidelines from a range of geographical regions. Such guidelines, or something like them, should be part of the support organizations offer their staff; staff need both guidelines and examples of good practice.

An integrated approach to evaluating e-learning is important for improving quality and effectiveness and for verifying design assumptions (Bastiaens et al. 2004). Bastiaens et al. discuss the need for a simultaneous, multi-level evaluation approach that incorporates reactions to learning experiences, learning process results, changes in learning performance, and organisational results. They comment that a four-level evaluation is unnecessary for every event, but recommend that reactions be considered when implementing new learning events (p. 197).

Quality issues are a concern for Barbera (2004), who identifies six qualitative dimensions for evaluation: the educational scenario; participants' teaching and learning purposes; instructional agents' roles; patterns of interaction; educational instruments; and knowledge-building factors (p. 18). Indicators are ascribed to subdimensions of each dimension, which also enables quantitative results to be discerned from the observations.

Standards can also be treated as an active, evolving process sustained by regular review. See Ellis et al. (2007) for a model of the inter-relationships between process, review, standards and improvements.

Ellis et al. (2007) provide a five-stage guide to when evaluations should be undertaken, and describe a student-focused strategy for evaluating e-learning. They include a number of tables and diagrams describing the stages the evaluation process can act upon and the sorts of data that might be collected: on integration, course websites, users, and support.