L8

=Students are provided with opportunities to practice assessment tasks before attempting marked work. =

Evidence
Laurillard (2002, p. 218) notes that one of the greatest dissatisfactions with students’ performance is that students did not understand what was required of them. Assessment requirements must therefore be explicitly negotiated with students.

Part of the feedback process is enabling teachers to understand the effectiveness of their own teaching. Teachers can devise activities and questions that provide this feedback, particularly so they know what to do next. Assessments can perform all of these functions (Hattie & Timperley, 2007, p. 102).

Furthermore, if assessment requires the use of technology, then support enabling students to gain the required technical skills must be in place, for example ‘use of software as soon as possible… availability of online mentors’ (Gaytan & McEwen, 2007).

In a study of engineering students, Kemppainen & Hein (2008) demonstrate that the students who rated their initial knowledge lowest completed online self-assessments the greatest number of times and experienced corresponding grade increases on the final exam. Students rated the self-assessment tool positively. Other literature also supports the use of practice self-assessments (see Gaytan & McEwen, 2007), because they provide immediate and honest feedback on learning and achievement.

Resources
Ashton (2008, http://www.elearning.ac.uk/features/eassessment/view) discusses e-assessment from a practitioner’s perspective. She explains that it is often possible to investigate patchy performance on assessment tasks to see whether the assessment questions were invalid or poorly aligned with the expressed learning objectives. Formative assessment can be used to identify vulnerable students before they attempt work that counts towards a course grade. Sample assessments may also be provided online for students to attempt as often as they wish, in order to understand where their deficiencies lie.

Rust (2002) argues that assessment systems must be unthreatening and fair, and that the assessment process and criteria should be explicit and transparent to students (even though research suggests that merely having explicit criteria does not help students produce better work). What is needed, rather, is to get students to engage more deeply with the assessment criteria. This could be achieved by discussing the criteria in class, but it is even better to involve students in a marking exercise; this sort of practice can improve performance (Rust et al., 2003).

Technical competence can be assessed before students have to use a technology in actual course assessments (see L3). Students must have the chance to practice with technologies before those technologies are used in assessments. However, the link between exercises with novel technologies and their ultimate use may not be obvious, and students may be discouraged if they do not see the point of an exercise. ‘Live’, organic participation scenarios may be more useful in some instances.