
=Students are provided with timely feedback while engaging in assessed work=

Evidence
The role of meaningful feedback in online assessment strategies cannot be overemphasized (Gaytan & McEwen, 2007): ‘It must be meaningful, timely, and supported by a well-designed rubric when possible’. These researchers also support the use of practice self-assessments, because these allow immediate and honest feedback to be provided on learning and achievement.

Timely, constructive feedback affects students’ participation, performance, engagement, and learning outcomes in a course (Laurillard, 2002). Feedback can promote positive experiences during online interaction if instructors provide prompt feedback, participate in discussions, encourage social interaction, and employ collaborative learning strategies (McIsaac et al., 1999). Substantive and timely feedback improves online learning participation (Dennen, 2005).

Feedback that learners receive from teachers and from other students enables comparison of actual performance with expectations (Mory, 2004).

Optimal feedback seeks a balance between student needs and teaching management (Dennen, 2005), and should enhance understanding rather than merely indicate correctness (Garrison, 1989).

Feedback involves complex effects, including ‘candlepower’ (Hudson, 2002), which characterises the subtle intimacy that arises in online dialogue and the effects of critical dialogue, and ‘feedback specificity’: although more specific feedback benefits the learning responses of those who perform well, it is detrimental to the learning responses of those who perform poorly (Goodman and Wood, 2004). Feedback must therefore be tailored to the individual student.

The impact of feedback specificity on learning opportunities is discussed by Goodman and Wood (2004), who report that although specificity can benefit immediate performance, it can also undermine learning related to independent performance. Their findings indicate that the effects of feedback on learning are contextual and conditional. They conclude that ‘those who receive feedback of varying specificity learn different things, through different means. Simple notions about feedback being beneficial or detrimental to learning need to be augmented by more complex models’ that recognise different task aspects.

Resources
Policy should require prompt and useful feedback aimed at improving student capability in related tasks, rather than just at the immediate goal.

Rust (2002) suggests that this feedback should not be in the form of numerical scores or grade letters, as these merely encourage students to compete with others in the class or to focus on the accumulating grade. What we really want is to encourage students to focus on their weaknesses and on what has been learned. Moreover, grades may hide the fact that some learning outcomes have not been achieved at all. Rust argues that an assessment system should explicitly describe each learning outcome as not met / partially met / met / well met; such a system can ultimately ensure that all programme specifications are eventually met.

Rust (2002) reviews the evidence on assessment and concludes that, ‘we need to ensure, as much as possible, that the workload expected of students is realistic, and that the assessment system is non-threatening and non-anxiety provoking. Some form of continuous assessment is almost certainly more likely to achieve the latter rather than a system in which the assessment ‘that counts’ all comes at the end. Within a continuous assessment system, we need to ensure that there is plenty of formative feedback at regular intervals, and all assessments need to have clear, assessment criteria known by the students before they undertake the work’ (p. 149).

Guidelines for good feedback can be found here: http://www.jiscinfonet.ac.uk/InfoKits/effective-use-of-VLEs/e-assessment/assess-feedback