S1.4.2

Feedback collected regularly from students regarding the clarity and effectiveness of the technical support provided.

Evidence
According to Kirschner et al. (2004), there can be a significant difference between intentions for support and users' perceptions of them. They describe an iterative model for designing e-learning that attends to six steps, including learner competencies, interactions, and tasks, towards 'determining how computer support can be best applied' (p. 31). The model pays close attention to actual and particular learner needs, including: how best to address and support those needs, the learner's perceptions of the support provided, how the support is actually used, and how effective the support is for actual learning achievement: 'We might be tempted to say that this is "the proof of the pudding"' (p. 30).

The need for institutions and teachers to solicit and analyse student feedback that is formative, summative, and based on multiple independent and standard evaluations is well acknowledged (Kirkpatrick, 1997; Forsyth et al., 1999; Arreola, 2000; Sherry, 2003; Thompson and Irlene, 2003; Brennan and Williams, 2004). Student feedback is a reliable and important measure of teaching and learning quality that can be used to inform action for improvement; it is also informative for prospective students (Brennan et al., 2003; Richardson, 2005a, 2005b). However, for feedback to be of use in improving teaching and learning, it must be understood and acted upon (Kember et al., 2002), and the implications disseminated (Ellis et al., 2007, p. 7).

Richardson (2005a) identifies some obvious but key issues for obtaining reliable and useful information: "Feedback should be sought at the level at which one is endeavouring to monitor quality…the focus should be on students' perceptions of key aspects of teaching or on key aspects of the quality of their programmes…feedback should be collected as soon as possible after the relevant educational activity" (pp. 409–10).

Hill (2003) has examined quality in higher education (HE) from the perspective of students. Among the most influential factors in the provision of quality HE were the quality of the lecturer and of the student support systems. One concern is that e-learning will detrimentally affect the stimulating environment between lecturer and students (Gibbs, 2001). Such fears underscore the importance of obtaining regular feedback on quality from students.

The UKeU failed because there was insufficient demand for it. The focus of e-learning must be not on what technology can do, but predominantly on what students want. Therefore, we must continuously obtain student feedback on e-learning initiatives and courses.

http://opq.monash.edu.au/index.html

http://www.qaa.ac.uk/reviews/institutionalAudit/outcomes/OutcomesStudentRep.asp

Using factor analysis of student questionnaire responses, Selim (2007) identifies the most critical factors in measuring university support. The availability of computers, printing facilities, internet access, and student satisfaction with university support were crucial to e-learning success. Other important categories are the instructor's attitude towards and control of technology, student technical competency (particularly relevant was students' previous experience with computers), and the effectiveness of the IT infrastructure (including the course management system). Such critical success factors for e-learning adoption must be carefully evaluated before, during, and after the adoption of e-learning.
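The kind of clustering a factor analysis of survey items surfaces can be sketched with a minimal principal-axis approximation on simulated data. Everything here is an illustrative assumption, not Selim's actual instrument or results: the two latent constructs, the six item names, and the simulated responses are hypothetical, and the eigendecomposition of the correlation matrix is only a rough stand-in for a full factor-analysis fit.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300  # hypothetical number of student respondents

# Two hypothetical latent constructs behind the survey items
support = rng.normal(size=n)      # e.g. quality of university IT support
competency = rng.normal(size=n)   # e.g. student technical competency

# Six hypothetical survey items, each loading mainly on one construct
items = np.column_stack([
    support + 0.5 * rng.normal(size=n),     # computer availability
    support + 0.5 * rng.normal(size=n),     # printing facilities
    support + 0.5 * rng.normal(size=n),     # internet access
    competency + 0.5 * rng.normal(size=n),  # prior computer experience
    competency + 0.5 * rng.normal(size=n),  # comfort with the course management system
    competency + 0.5 * rng.normal(size=n),  # general IT skill
])

# Correlation matrix of the items: items driven by the same construct correlate
R = np.corrcoef(items, rowvar=False)

# Eigendecomposition of R, sorted by descending eigenvalue: a principal-axis
# approximation to the factor solution
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings of the six items on the first two factors (factor signs are arbitrary);
# items 1-3 should load together, as should items 4-6
loadings = eigvecs[:, :2] * np.sqrt(eigvals[:2])
```

In a real evaluation, the item responses would come from Likert-scale questionnaires rather than simulation, and the analyst would inspect the loadings to decide which items group into which critical factor before tracking those factors over time.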