
Information on feedback type and quality, and student satisfaction with feedback, guides training and support resourcing.

Evidence
There is an ongoing need to monitor the use of e-learning and ICTs for course delivery because there is as yet no consensus about what constitutes quality e-learning (Usoro & Abid 2008). These authors state that, ‘effective quality strategies, initiatives and tools are very important for convincing lecturers and other stakeholders to adopt e-learning’ (p. 80). Kidney et al. (2007) believe that, ‘a quality online course would be the direct result of a course creation process that included quality assurance strategies’ (p. 18).

To improve e-learning outcomes it is important to learn from past mistakes, according to Ehrmann (2002), who argues that tracking progress is not only necessary to stay on course but also to identify solvable problems that can attract fresh resources (p. 55).

Validation of e-learning processes and resources is a significant stage in the full cycle of organisational learning that describes success in terms of ‘student performance, student satisfaction, staff experience, and cost effectiveness, as judged in relation to the original intentions’ (Salmon, 2000, p. 236). Salmon discusses validating as one of six activities in the iterative process of creating an effective learning organisation infrastructure that enables ‘the system to learn about itself’ (p. 237).

Quality dimensions need to be validated and refined by primary research. For example, it is not yet known whether e-learning saves lecturers time or demands more of it. This needs to be established through assessment and feedback.

This is important because expectations shift with technology: for example, ‘web 2.0 natives’ will feel limited and frustrated when confronted with web 1.0 technologies (Lankshear & Knobel 2005).

Kidney et al.’s (2007) view that a quality online course results from a course creation process incorporating quality assurance strategies is exactly what Abdous (2009) describes in a pilot programme.

The effects of technology cannot necessarily be predicted without examining context; handheld devices, for example, may have unexpected knock-on effects (Song 2007). The communication channels adopted may even diminish face-to-face interaction (Song 2007, p. 40).

Some students may feel that staff communication in certain formats comes across as unprofessional.

There is also a risk that complicated design comes at the expense of other interaction possibilities (Song 2007).

Resources
The Questionnaire on Teacher Interaction (QTI) has been translated into, and validated in, more than a dozen languages. It can be used as a feedback instrument to guide teachers in improving their classroom relationships (Fraser & Walberg 2006).

Mazza & Botturi (2007) explain how course redesign and resourcing can be facilitated by monitoring the use of communication channels. Monitoring shows which resources students actually used and which were little used, and prompts questions about why: was a resource poorly integrated, or simply unnecessary? Monitoring, for example through software integrated with learning management systems, can also identify resources that are only used at specific times; these could perhaps be hidden outside those periods to simplify the interface.
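This kind of monitoring can be sketched in a short log-analysis script. The record layout below (student, resource, access timestamp) is a hypothetical export format, not that of any particular learning management system:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical access-log records exported from a learning management
# system: (student_id, resource_name, ISO timestamp of access).
log = [
    ("s01", "lecture-notes", "2024-03-04T10:00:00"),
    ("s02", "lecture-notes", "2024-03-11T09:30:00"),
    ("s01", "past-exam", "2024-06-03T14:00:00"),
    ("s02", "past-exam", "2024-06-04T15:00:00"),
    ("s03", "discussion-board", "2024-03-05T11:00:00"),
]

def usage_summary(records):
    """Count accesses per resource and record the ISO weeks of use."""
    counts = defaultdict(int)
    weeks = defaultdict(set)
    for student, resource, ts in records:
        counts[resource] += 1
        weeks[resource].add(datetime.fromisoformat(ts).isocalendar()[1])
    return counts, weeks

counts, weeks = usage_summary(log)

# Resources touched in only one week of the course are candidates for
# hiding outside that period to simplify the interface.
seasonal = [r for r, w in weeks.items() if len(w) <= 1]
```

Low values in `counts` point to poorly used resources worth reviewing, while `seasonal` lists those used only at a specific time.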

Mazza & Botturi (2007) explain how the use of monitoring software can help to answer questions such as ‘are students participating in discussions?’ They describe the GISMO software which allows teachers to identify students with low numbers of logins and a poor participation record. Additional assistance and encouragement could then be targeted.
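A GISMO-style participation check can be approximated in a few lines. The per-student counts and the thresholds here are illustrative assumptions; in practice the figures would come from the monitoring software itself:

```python
# Illustrative per-student activity counts; in practice these would be
# drawn from LMS monitoring software such as GISMO.
activity = {
    "s01": {"logins": 25, "posts": 8},
    "s02": {"logins": 3,  "posts": 0},
    "s03": {"logins": 12, "posts": 1},
}

# Hypothetical thresholds below which a student is flagged for
# additional assistance and encouragement.
MIN_LOGINS = 5
MIN_POSTS = 2

flagged = sorted(
    sid for sid, a in activity.items()
    if a["logins"] < MIN_LOGINS or a["posts"] < MIN_POSTS
)
# s02 rarely logs in and never posts; s03 logs in but rarely posts.
```

The flagged list identifies students with few logins or a poor participation record, to whom extra support could then be targeted.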

Course delivery plans should include provision for assessing the content of communications, to ensure that the outcomes intended from particular channels are being realised. If content strays from what is intended, effective communication can be modelled and reinforced.