
=Students’ abilities to conduct effective research are regularly monitored. =

Evidence
To improve e-learning outcomes it is important to learn from past mistakes, according to Ehrmann (2002), who argues that tracking progress is necessary not only to stay on course but also to identify solvable problems that can attract fresh resources (p. 55). The results of monitoring should be used to inform ongoing and new development, and to support decisions about resources and strategy. Information on performance can be used as a tool for improving quality, but only if that information is disseminated. Such validation of e-learning practices and resources is a significant stage in the full cycle of organisational learning that describes success in terms of ‘student performance, student satisfaction, staff experience, and cost effectiveness, as judged in relation to the original intentions’ (Salmon, 2000, p. 236). Salmon discusses validating as one of six activities in the iterative process of creating an effective learning organisation infrastructure that enables ‘the system to learn about itself’ (p. 237).

There is an ongoing need to monitor the use of e-learning and ICTs for course delivery because there is as yet no consensus about what constitutes quality e-learning (Usoro & Abid, 2008). These authors state that ‘effective quality strategies, initiatives and tools are very important for convincing lecturers and other stakeholders to adopt e-learning’ (p. 80). Kidney et al. (2007) believe that ‘a quality online course would be the direct result of a course creation process that included quality assurance strategies’ (p. 18).

One concern surrounding the ‘massification’ of higher education is the lowering of entry standards and an input–output approach that treats students merely as future graduates. Wilson (2007) argues that this has the potential to weaken quality education. This places a great responsibility on institutions to increase both the quality and the quantity of student support systems. However, these must be targeted where they are actually needed. Hence we must monitor students’ abilities, and what they are actually doing, as their learning progresses (see Usoro & Abid, 2008).

Resources
Dunn (2002), drawing on a multi-year assessment of information literacy skills at California State University, argues that information skills must be directly assessed. A process must be established for assessing students’ information and research competence, and the resulting data can be used to target information competence instruction. Assessment, however, must be set in a real-world context.

Emmett & Emde (2007) describe how they developed an assessment tool based on the ACRL’s ‘Information Literacy Competency Standards for Higher Education’. These researchers provide methods for assessing information literacy skills by focusing on the development of an assessment tool grounded in learning outcomes. They append their tool in full to the article.

Mittermeyer & Quirion (2003) use a 20-question quiz to study the information research skills of first-year undergraduates in Quebec. They conclude, ‘Despite the limited number of variables in this study, the results indicate that a significant number of students have limited knowledge, or no knowledge, of basic elements characterizing the information research process’ (p. 7). They recommend: (1) regular evaluation of the information research literacy of first-year undergraduate students upon entrance to university; (2) participation of a library representative in the various program committees; (3) successful completion of a test to measure information literacy competencies during students’ first year of studies; and (4) incorporation of information literacy instruction into academic programs at the undergraduate and graduate levels.