
=Students’ abilities to access digital content are regularly monitored=

Evidence
To improve e-learning outcomes it is important to learn from past mistakes, according to Ehrmann (2002), who argues that tracking progress is necessary not only to stay on course but also to identify solvable problems that can attract fresh resources (p. 55). The results of monitoring should be used to inform ongoing and new development and to support decisions about resources and strategy. Information on performance can be used as a tool for improving quality, but only if that information is disseminated. Such validation of e-learning practices and resources is a significant stage in the full cycle of organisational learning, which describes success in terms of ‘student performance, student satisfaction, staff experience, and cost effectiveness, as judged in relation to the original intentions’ (Salmon, 2000, p. 236). Salmon discusses validating as one of six activities in the iterative process of creating an effective learning organisation infrastructure, one that enables ‘the system to learn about itself’ (p. 237).

Dunn (2002), drawing on a multi-year assessment of information literacy skills at California State University, argues that information skills must actually be assessed: a process should be established for assessing student information and research competence, and the resulting data used to target information competence instruction. Assessment, however, must be set in a real-world context.

Resources
Emmett & Emde (2007) describe how they have used an assessment tool based on the ACRL’s ‘information literacy competency standards for higher education’ tool. These researchers provide methods for assessing information literacy skills by focusing on the development of an assessment tool based on learning outcomes. The append their tool in full to the article.

Mittermeyer & Quirion (2003) use a 20 question quiz to study information research skills of first year undergraduates in Quebec. They conclude, ‘Despite the limited number of variables in this study, the results indicate that a significant number of students have limited knowledge, or no knowledge, of basic elements characterizing the information research process’ (p. 7). And recommend, (1) regular evaluation of the information research literacy of first-year undergraduate students upon entrance to university; (2) participation of a library representative in the various program committees; (3) successful completion of a test to measure information literacy competencies during students’ first year of studies; (4) incorporation of information literacy instruction into academic programs at the undergraduate and graduate levels.

Juan et al. (2009) describe the SAMOS information system for monitoring students’ and groups’ activities in an e-learning context. The aim is to ensure that students reach a satisfactory level of involvement in the learning process and to avoid the high drop-out rates caused by a lack of adequate support and guidance in e-learning. Non-participating students and groups can be identified by these methods and timely intervention initiated.
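SAMOS itself is not reproduced here, but the core monitoring idea can be sketched as follows: given a log of timestamped student actions, flag anyone whose most recent activity is older than a chosen threshold so that tutors can intervene. The field names and the seven-day threshold are assumptions for illustration.

```python
from datetime import datetime, timedelta

def find_inactive_students(activity_log, now, threshold=timedelta(days=7)):
    """Return IDs of students with no recorded activity within `threshold` of `now`."""
    last_seen = {}
    for event in activity_log:
        sid, ts = event["student_id"], event["timestamp"]
        if sid not in last_seen or ts > last_seen[sid]:
            last_seen[sid] = ts
    return {sid for sid, ts in last_seen.items() if now - ts > threshold}

# Hypothetical activity log drawn from a virtual learning environment.
log = [
    {"student_id": "s1", "timestamp": datetime(2009, 3, 1, 10, 0)},
    {"student_id": "s2", "timestamp": datetime(2009, 3, 20, 9, 30)},
]
print(find_inactive_students(log, now=datetime(2009, 3, 21)))  # {'s1'}
```

The same last-seen index extends naturally to groups, by keying on a group ID instead of a student ID.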

Snavely (2001) suggests that in addition to assessing all students’ basic information literacy skills, faculty and librarians should work together to develop assessment instruments and strategies in the context of particular disciplines, since information literacy manifests itself in the specific understanding of the knowledge creation, scholarly activity, and publication processes found in those disciplines. However, Dunn (2002) found that breadth and depth scores for information literacy predict performance across a range of information assessment tasks: when the context or scenario changes, the same information-literate students still perform well.