ELearning/Teaching online/Feedback: Activities, assignments, and assessments

From Encyclopedia of Science and Technology

Feedback is a primary teaching tool. It is an evaluation of one’s performance and provides information to improve performance.

Introduction

From a system's perspective (Figure 1 below), feedback is information about an initial output, which is looped back into the system to modify future outputs. An example is a car navigation system. Using the illustration below, we enter a destination into the system (initial input); it provides initial instructions (output) to drive forward. The system monitors the car’s location (output information) and approximates that location on a visual map (feedback). The system continues to provide instructions at key junctures along the route (feedforward). Each iteration of the loop takes us closer to our destination, and the cycle terminates when we have arrived.

1. System feedback loop


Within education, feedback has traditionally been defined as information about the correctness of the output – right or wrong (Dick, Carey & Carey, 2005). This feedback provides some information, but not a lot. When we apply the systems perspective to instruction and student output, we see that feedback can be so much more. Repeating the feedback loop above, we begin with instruction (initial input), and the student completes an assignment (output). A grader, human or machine, evaluates the output (monitor) and arrives at conclusions about correctness and details such as those things done well and those below par (output information). This information is provided to the student (feedback), along with guidance for correcting or improving the work on the next iteration (feedforward). There is another dimension to humans that complicates this neat picture – human actions and reactions always contain an element of emotion. Keep this in mind.

Theoretically, the cycle could continue until the work is perfect, but that would be impractical and of diminishing value. More practical would be two or three loops through the cycle, which can be accomplished using different approaches that we will discuss. In this way, feedback becomes an integrated element of teaching.
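The instructional loop just described can be sketched as a short iteration. The grading and revision functions below are hypothetical stand-ins, a minimal sketch of the cycle rather than a real grading system:

```python
# A toy model of the feedback loop: output -> evaluation -> feedback -> revision.
# grade() and revise() are hypothetical placeholders for a grader and a student.

def grade(work):
    """Evaluate the output and return a score (output information)."""
    return work["quality"]

def revise(work, feedback):
    """Student applies feedback (feedforward), improving the next output."""
    return {"quality": min(100, work["quality"] + feedback["suggested_gain"])}

def feedback_loop(work, passing=80, max_iterations=3):
    """Run at most a few cycles -- further loops have diminishing value."""
    for i in range(max_iterations):
        score = grade(work)                                 # monitor the output
        if score >= passing:                                # good enough: stop
            return work, score, i
        feedback = {"score": score, "suggested_gain": 15}   # feedback + feedforward
        work = revise(work, feedback)                       # next iteration's input
    return work, grade(work), max_iterations

final, score, loops = feedback_loop({"quality": 55})
print(loops, score)  # → 2 85 (two revisions: 55 -> 70 -> 85)
```

Each pass through the loop mirrors the cycle above: monitor the output, generate feedback and feedforward, and feed the revised work back in as the next input.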

Feedback purpose

As implied above, feedback can serve a number of purposes. Let’s briefly state them explicitly (Sargeant, 2009).

  • Record of performance. Feedback serves as documentation of student performance.
  • Justification. Feedback documentation serves as the basis for grading, thus justifying a grade should the need arise.
  • Instruction. Feedback provides additional instruction (scaffolding) to help students understand and apply what they have learned.
  • Reinforcement. Feedback informs students of what they are doing correctly, thus encouraging them to continue with these practices.
  • Improvement. Feedback serves as a regulatory, refinement, or boosting mechanism, bringing performance closer to the ideal.


In the real world, feedback CAN serve all these purposes, but rarely does. The reasons are many, including the time and effort it takes to create feedback that WILL serve them all. Some strategies to make that load more manageable and effective are presented later.

Instructor and student perspectives

Not surprisingly, instructors and students have different perspectives when it comes to feedback. When asked, instructors generally feel they do a good job of providing substantive, useful, and fair feedback while students have mixed feelings (Bailey & Garner, 2010; Carless, 2006; Weaver, 2006).

In a large sample of 460 instructors and 1,740 students, Carless (2006) found the following range of reactions to the statement, “Students were given detailed feedback that helped them improve their next assignments.”

A typical student comment: “I seldom know how to improve my next assessment because of the lack of comments or advice.”

A typical instructor comment: “Students are interested in their grades only. They are not interested in getting feedback on how to improve their learning.”

There is probably truth in both statements, which together summarize the dilemma of feedback: students say they want but don’t get useful feedback, while instructors say they give useful feedback but students ignore it. The next sections describe how we can narrow this gap in perception and action.

Student use of feedback

We stated earlier that many instructors perceive students as uninterested in feedback, and research tells us this is at least partially true. Rae & Cochrane (2008) identified a continuum of student engagement with feedback, ranging from passive to active, with the former possessing “a distinct lack of intent to learn.” Jones (2012) experienced a 30% nonparticipation and partial-participation rate among her student subjects. In a survey by Weaver (2006), 20% admitted to failure to act on feedback. There is more to this picture, however, in that we need to understand why some students fail to attend to feedback, and why a larger number fail to use feedback. Here are some of the reasons.

Emotional factors

Emotion was mentioned as an ever-present aspect of human response to feedback. Feelings can close students off from accepting feedback in at least three ways. First, student interest in the subject (Wingate, 2010) affects their willingness to engage. Students who perceive subjects negatively are simply not motivated to learn and can be very difficult to reach (Weaver, 2006). Second, self-esteem issues, in which students perceive themselves as inadequate and incapable of learning from feedback, can result in debilitating anxiety (Weaver, 2006).

Finally, many students react viscerally to the phrasing and content of the feedback, taking offense, feeling hurt, feeling as though the instructor doesn’t like them, or simply feeling perplexed by the breach between their and the instructor’s understanding of their work (Pyke & Sherlock, 2010; Sargeant, 2009). Weaver (2006) found differences in style and tone between feedback provided to higher achieving and lower achieving students, with the former receiving more hedging devices such as “tend to”, “occasional”, “might have”, and “would have.” Lower achieving students received more imperatives such as “use . . .”, “see . . .”, “you must”, and “you need to.” Additionally, higher achieving students received considerably more positive comments. Sargeant (2009) identified a typical pattern for students when they receive negative feedback:

2. Emotional responses to feedback and their results (Sargeant et al., 2009)


Cognitive factors

Cognitive factors also loom large in student failure to act on feedback, and lie within students, instructors, and designers. Similar to interest, students also evaluate the personal utility of the course. Those who perceive little utility, or take a more superficial approach to learning, as opposed to deep learning, will be less interested in the learning value of feedback (Light, 2011). Such students are most interested in acquiring a grade and moving on, without any intention to retain course content.

Instructor related issues revolve around student understanding of instructor communications. First, there is the question of whether students understand the assignment or not. Jones (2012) asserts that feedback needs to be part of a process that begins with student understanding of what constitutes a good assignment. In their sample of college students, Hartley & Skelton (2002) found that just one-third indicated they understood the criteria by which they would be graded.

3. Student and instructor perceptions


More central to instructor feedback is whether students understand and can act on the feedback messages. Problems in understanding include feedback that is too general (“inappropriate style,” “lacks coherence”), lacks guidance (“clumsy expression”), is unrelated to the assignment criteria (raising something new, not included in the original assignment), or is simply not correctly comprehended by students (message complexity, abstractness, vocabulary). Bailey & Garner (2010) termed this dissonance a “sense of estrangement” from the language of feedback.

Finally, the modular design of most college courses is “end-loaded” with assignments and assessments, thus discouraging students from perceiving feedback as useful (Rae & Cochrane, 2008). They see that the module is “over” and do not regard the process as developmental from module to module. When modularization results in more summative assessment, it can be counterproductive to the development of integrated understanding.

Elements of high quality feedback

Feedback quality cannot be judged until we have determined its purpose. Here we will assume that all purposes (record, justification, instruction, reinforcement, improvement) are desired.

Three factors

Lizzio & Wilson (2008) identified three factors students perceive as elements of quality feedback. Their sample included college students ranging from freshmen to graduate level. As may be expected, they found that experienced students are more discerning of the feedback they receive.

Factor 1: Developmental (scaffolding function, allowing students to develop beyond their current level of performance)

  • Focuses on areas I can improve
  • Informs me why my work is inadequate
  • Shows me how to critically assess my own work; encourages self-reflection
  • Provides instruction on how to improve
  • Feedback can be generalized beyond the current assignment


Factor 2: Encouraging (supportive, enhances learner motivation by focusing on learning and effort rather than performance)

  • Acknowledging quality
  • Identifying correct responses
  • Recognizing effort invested
  • Considerate criticism
  • Giving hope


Factor 3: Fairness/Justice (objective, unbiased, sufficient attention paid)

  • Clarity of feedback
  • Consistency of feedback
  • Depth of analysis is evident
  • Feedback can be acted on
  • Feedback is timely (Figure 4)
  • Opportunity to respond to feedback

4. "What do you consider a reasonable length of time to receive feedback?" (O'Connell et al., 2009)

Feedback composition

We’ve established that emotional and cognitive barriers can arise when students attend to the feedback they receive. Emotional barriers can result from message tone, while cognitive barriers arise from message complexity. Minimizing barriers means paying attention to tone and complexity.

Message tone

Message tone sends “meta-messages,” intentionally or not, about how we feel toward the other person. In other words, our message may convey sentiment that we neither perceive nor intend but which carries “feeling-tone” nonetheless. Of course, the issue isn’t that simple, because we also understand that those with low self-esteem tend to view all feedback as a judgment of character, while those with high self-esteem do not (Weaver, 2006). Consider the tone of these messages:

  • Judgmental (“Poor effort.”) versus non-judgmental (“There are some issues we need to address.”)
  • Unequivocal (“Your conclusion does not follow from the facts.”) versus equivocal (“Your conclusion doesn’t seem to logically follow your facts.”)
  • Imperative (“Use compare and contrast here.”) versus possibility (“Using compare and contrast should be useful here.”)
  • Abrupt (“NO! Wrong statistical test.”) versus respectful (“Please take a look at the statistical test you used. I don’t believe it fits the experimental design.”)


Message complexity

Message complexity arises from structural complexity and information density (Fox et al., 2007), both of which increase cognitive load and result in poor message encoding on the part of the learner. Structural complexity includes specialized vocabulary, complex sentence structure, long sentences, and high levels of abstraction. Information density is simply the amount of information presented at the same time. Consider the fine print and pressured speech of auto sales ads. These messages are intended to obscure information by packing it into an unmanageable space, the opposite of what we intend when presenting feedback. Significantly, Fox et al. present research demonstrating that when message complexity overloads our ability to comprehend, the brain automatically shifts attention to the simpler, more manageable aspects of the message and effectively screens out the complex. Common sense lessons embedded here include (with due consideration of your student audience):

  • Use short sentences and short paragraphs.
  • Communicate as concretely (and actionably) as possible.
  • Limit the issues you address.
  • Use simple vocabulary as much as possible. When you must use specialized terminology, define it unless you’re sure the student understands.
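A rough way to check your own feedback against these lessons is to flag overly long sentences, one of the structural-complexity factors named above. This is a crude heuristic of my own construction, not a validated readability measure:

```python
import re

def long_sentences(text, max_words=20):
    """Flag sentences longer than max_words -- a crude proxy for the
    structural complexity that overloads readers."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [s for s in sentences if len(s.split()) > max_words]

feedback = ("Your thesis is clear. However, the second section conflates the "
            "independent and dependent variables in a way that makes the causal "
            "claim difficult to evaluate and weakens the argument considerably.")
print(len(long_sentences(feedback)))  # → 1: the second sentence should be split up
```

A flagged sentence is a candidate for splitting or simplifying before the feedback is sent.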


Another aspect of feedback complexity is the comprehensiveness of the information provided. Some students are capable of working from sparse comments and clues, whereas others need more in-depth explanations before they can process and act on feedback. This means knowing your students and determining if an individual session would be more appropriate.

An alternative approach, called progressive disclosure, used in customer service and in software and website design, may also be appropriate. Progressive disclosure in the customer service arena (Bell & Zemke, 2007) tells us that different customers need differing levels of information in response to their inquiries. There is a risk of offending an informed customer with too much information, as well as wasting time and effort. Therefore, the most efficient approach overall is to provide the least amount of information perceived as sufficient for the customer, assuming that the customer will ask further if the answer was lacking. The motivation for this approach is to maintain the focus of the customer’s attention by reducing clutter, confusion, and cognitive overload. This basic notion is also useful for feedback, except that we cannot count on students to ask for additional information on their own.
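Progressive disclosure can be modeled as tiered feedback released one level at a time. The levels and their wording below are illustrative assumptions, since the tiers must be adapted to your course:

```python
# Tiered feedback, disclosed one level at a time (levels are invented examples).
LEVELS = [
    "Check the statistical test you chose.",                   # least detail
    "A t-test assumes two groups; you have three.",            # why it is wrong
    "With three groups, use a one-way ANOVA; see chapter 12.", # how to fix it
]

def disclose(levels, requests):
    """Return feedback up to the level the student has asked for.
    Unlike customers, students may need prompting to request more."""
    shown = min(requests + 1, len(levels))
    return levels[:shown]

print(disclose(LEVELS, 0))  # initial reply: least detail perceived as sufficient
print(disclose(LEVELS, 2))  # after two follow-up requests: full guidance
```

The design choice mirrors the customer-service rationale: start minimal to avoid overload, then expand only when the student signals (or is prompted) that the answer was insufficient.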

These desirable characteristics of feedback are consistent with the advice offered by a number of authors writing for classroom and online instructors (Dick, Carey & Carey, 2005; Duffy & Kirkley, 2004; Henderson & Nash, 2007; Ko & Rossen, 2004; Smith & Ragan, 2004).

Summary so far

Before we move into solutions, let’s recap the major issues uncovered in the previous sections.

  • Feedback can have multiple purposes including a record of performance, justification for a grade, additional instruction (scaffolding), reinforcement of current practices, and improvement of current work. Expectations often differ between instructors and students on what feedback should be.
  • Students frequently begin assignments without sufficiently understanding expectations.
  • Instructors and students often have very different views on feedback, with students saying they don’t receive enough and instructors saying students aren’t interested.
  • There are three qualities that students desire in the feedback they receive: developmental, encouraging, and fair.
  • Feedback should encourage a dialog between instructor and students.
  • Some students fail to attend to feedback.
  • A greater number fail to utilize feedback for emotional and cognitive reasons.

Making feedback more useful and used more

Six authors from the literature have proposed methods for addressing the concerns raised above. These methods include:

  • Preparatory guidance for assignments and assessments
  • Use of rubrics for students and instructors
  • Dialogue between instructor and grader to clarify and extend feedback
  • Student reflection and self-evaluation
  • Peer evaluation and feedback
  • Formative, or multi-stage, assignments and assessments


Preparatory guidance

Beaumont et al. (2011) identified four elements of preparatory guidance: an explanation of the grading criteria, discussion of the assignment, presentation of model answers and/or exemplary samples, and asking students to commit to achieving a particular grade. Carless (2011) provided a set of open-ended questions at the beginning of a course to encourage self-reflection. Utilizing peer feedback, Cartney (2010) prepared students by conducting a session explaining the process, answering student questions, discussing the value of peer feedback, and explaining how to use the provided rubric. Students were encouraged to use the rubric throughout the multi-stage assignment.

Use of rubrics

A rubric is a scoring device that lays out the specific expectations for an assignment. Rubrics divide an assignment into its component dimensions and provide a detailed description of what constitutes acceptable and unacceptable levels of performance for each dimension (Stevens & Levi, 2005). Jonsson & Svingby (2007) point out that rubrics make expectations and criteria explicit and facilitate self-assessment, feedback, and grading. Cartney (2010) and Jones (2012) provided grading rubrics for their students.
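A rubric's structure, dimensions crossed with performance levels, maps naturally onto a small data structure. The dimensions and descriptors below are invented examples, not taken from the sources cited:

```python
# A minimal rubric: each dimension lists level descriptors from lowest (0 points)
# to highest. Dimensions and wording here are hypothetical examples.
rubric = {
    "Thesis":   ["absent or unclear", "stated but unfocused", "clear and arguable"],
    "Evidence": ["unsupported claims", "some relevant sources", "well-integrated sources"],
    "Style":    ["frequent errors", "occasional errors", "clear and correct"],
}

def score(ratings, rubric):
    """Total a grader's per-dimension level indices against the rubric,
    returning (points earned, points possible)."""
    assert ratings.keys() == rubric.keys(), "rate every dimension"
    return sum(ratings.values()), sum(len(levels) - 1 for levels in rubric.values())

points, maximum = score({"Thesis": 2, "Evidence": 1, "Style": 2}, rubric)
print(f"{points}/{maximum}")  # → 5/6
```

Because the descriptors are explicit, the same structure supports the uses Jonsson & Svingby list: students can self-assess against it before submitting, and graders can attach the matching descriptor as feedback.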

See Scoring and rubrics for complete coverage of rubrics and alternative approaches to communicating assignment and assessment standards.

Digital recording as feedback

Audio and video digital recording enhances students’ awareness of their actions and the impact on others when acquiring new knowledge and skills, because it activates cognitive and emotional learning (Strand et al., 2017). "The digital recorder gives students direct and immediate feedback on their performance of the various practical procedures, and may aid in the transition from theory to practice." As such, the recordings serve as a powerful tool for self-reflection and assessment. Audio recordings can be especially useful in language learning, while video recordings are most useful for tasks involving physical movement and audio/video for interpersonal interactions.

Dialogue

Beaumont et al. (2011), Sargeant et al. (2009), and Rae & Cochrane (2008) advocate feedback that includes opportunities for interaction between instructor and student, centered on sharing interpretations and clarifying meaning and expectations. Jones (2012) agrees, but stresses the difficulty of doing this in large classes.

Student reflection and self-evaluation

Sargeant (2009) and Carless (2011) are strong believers in the power of individual reflection and self-evaluation for learning. “Reflection allows elements of the experience to be revisited, analyzed, and integrated into one’s existing base of knowledge and understanding, and offers the opportunity to test one’s assumptions and biases” (Sargeant, 2009). Carless (2011) embedded reflection into the course introduction, and again within assignments. Cartney (2010) required students to comment on the feedback they received as the first section of the final assignment.

All emphasize the goal of student self-regulation in which the onus of responsibility moves from instructor to student. Self-regulation is defined as “an active, constructive process whereby learners set goals for their learning and then attempt to monitor, regulate, and control their cognition” (Pintrich & Zusho, 2002). This sentiment is reflected in the comments of one instructor: “Feedback can become spoon-feeding, so only give feedback when it’s necessary, and otherwise use strategies for students to find the answers themselves. I think the most effective characteristic is to let students know how to find out what they want, rather than providing them the answer directly. But it’s easier said than done.”

Peer evaluation and feedback

Jones (2012) utilized peer evaluation and feedback in her experimental design, forming groups of three, with goals of providing multiple perspectives, developing critical thinking skills, and self-monitoring skills. Full participation in the three-part process resulted in significantly higher scores. Prof. Mitchell Duneier of Princeton (in Lewin, 2012) compared peer grading of midterm exams with instructor scoring in a MOOC of over 2,000 students. Each exam was scored by five others and averaged to arrive at a score. He found a 0.88 correlation between peers and instructors. He also found that peer graders gave more accurate scores on good exams than bad ones, and the lower the score, the more variance among graders. Cho & MacArthur (2010) compared the impact of peer and expert feedback on improvements in student submissions. Their findings are especially fruitful:

Comparing expert, single peer, and multiple peer feedback together with types of feedback (directive, non-directive, criticism, praise, and summary), they found significant positive differences when multiple peers provided non-directive feedback, leading students to make more micro-level (sentence and paragraph) revisions, leading to higher final grades. When students attempted macro-level changes such as adding new content or reorganizing, they tended to be unsuccessful. In agreement with advocates of dialog between instructor and student, Beach & Friedrich (2006) suggest that feedback on these larger issues is best provided in a synchronous setting (face-to-face or live chat) where more interaction is permitted.

5. The power of peer feedback


Their explanation, supported by previous research, is that peers provide feedback in language that students better understand and can act on; multiple peers together provide feedback that is quantitatively and qualitatively richer; and this richer feedback leads to significant micro-level changes that produce better grades.

Regarding the quality of peer feedback, Cho & MacArthur acknowledge that peers may be unable to provide the level of feedback that experts can, but they demonstrated that feedback from multiple peers is highly reliable and moderately valid compared with individual experts. Their study found a .70 (p<.05) correlation between expert and peer ratings. Additionally, the “curse of expertise” often results in feedback that students are unable to act on – resulting in a preponderance of simple revisions (mechanics and minor vocabulary) by students. Note how this is consistent with Fox et al. (2007), who found that overloaded receivers of complicated messages focus on the simplest parts while blocking out the complex.
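The reliability figures reported above (several peer scores averaged per paper, then correlated with expert scores) can be reproduced with a standard Pearson correlation. The scores below are made-up illustration data, not drawn from the studies:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical data: five peer scores per paper, averaged, vs. an expert's score.
peer_scores = [[72, 78, 75, 70, 74], [88, 90, 85, 92, 89], [60, 55, 65, 58, 62]]
expert_scores = [76, 90, 59]

averaged = [mean(p) for p in peer_scores]
print(pearson(averaged, expert_scores))
```

Averaging first is the key design choice: individual peer scores vary, but their mean tracks the expert score much more closely, which is why the multi-peer correlations reported above are as high as they are.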

Cartney (2010) explored the psychodynamics of students giving and receiving feedback with their peers, finding that the majority of participants expressed anxiety about the process. Worries included being exposed to peers, feeling unqualified to give feedback, potential negative reactions from those they review, and a fear of being too critical. These findings reflect previous research. When students were asked to consider the benefits of peer review, they produced comments such as, “When I write I get too close to my essays so I am not reading them anymore … I can’t see the mistakes. Someone else reading them for me is invaluable,” and “I might have initial fears about other people seeing my work but actually that is what happens in real life.” Having identified the benefits, students were able to focus more on these and less on their anxieties.

Formative multi-stage assignments and assessments

Here we present, in brief, the multi-stage processes used by several authors and their results.

Setting: Department of Nursing portfolio assessment (Carless, 2011)

  1. Initial portfolio submission.
  2. Initial instructor grading and feedback.
  3. Student revisions and resubmission for second grade (no feedback).
  4. No results reported.


Setting: Social Policy for Social Workers course (Cartney, 2010)

  1. Essay assigned.
  2. Preparation session explaining the process, value of peer feedback, use of marking sheet.
  3. Home groups of five (previously formed), with each member giving and receiving feedback via e-mail to/from all others, with copy to the instructor.
  4. Instructor reviewed student feedback and provided general feedback to the groups on the quality of comments. The instructor also contacted at-risk students, as requested by the students themselves.
  5. Students individually reworked their essays and submitted them for a final grade. Comments on how they used their formative feedback were required as the first section.
  6. No results reported, as the focus was on student dynamics.


Setting: Undergraduate Psychology course (Cho & MacArthur, 2010)

  1. Writing assignment, review of evaluation rubric and grading process (students were graded on their reviews as well as their assignment).
  2. Assignment draft submitted to a system using a pseudonym.
  3. Each student reviewed six draft papers using the rubric to make comments and assign scores (comments first, scores second). The instructor reviewed all papers.
  4. Assignment grades given using a weighted average. Reviewer grades given using consistency measures.
  5. Student authors reviewed feedback, revised their assignment, and submitted for grading.
  6. The same reviewers that graded the first draft also graded the final submission.
  7. Final grade based on 50% for the writing and 50% for the reviewing of both drafts.
  8. Multiple peer-reviewed final drafts were graded significantly higher than single peer or expert reviewed final drafts (p<.03).
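The grade weighting in step 7 above is a simple weighted average. A minimal sketch, assuming scores on a 0-100 scale:

```python
def final_grade(writing, reviewing, w_writing=0.5):
    """Combine a writing score and a reviewing score (0-100 scale assumed).
    With the default weight, each counts for half, as in step 7."""
    return w_writing * writing + (1 - w_writing) * reviewing

print(final_grade(84, 92))  # → 88.0
```

Grading the reviews themselves (here weighted equally with the writing) is what gives students an incentive to produce the careful peer feedback the study depends on.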


Setting: First year biosciences students (Jones, 2012)

  1. Short practical experiment and report assigned.
  2. Lecture on how to write the report.
  3. Instructor review and feedback.
  4. Second experiment and report, plus a form with three questions asking for most improved areas, self-assessment of score, and desired further feedback.
  5. Instructor review and feedback.
  6. Third experiment and report.
  7. Peer review and feedback in groups of three.
  8. Individual revision and submission for instructor grading.
  9. Students who participated in all three stages (70%) scored significantly higher (p < .001) than those participating in one or two stages.


Setting: Applied linguistics course for first-year students (Wingate, 2010)

  1. Exploratory essay (1500 words) assigned and submitted.
  2. Instructor provided written feedback.
  3. Assignment 1 (3000 words) submitted.
  4. Instructor graded and provided written feedback.
  5. Assignment 2 (4000 words) assigned with explicit encouragement to use feedback from the two previous papers.
  6. Instructor graded, with 48% of students improving their score by at least 10% (one grade level) from assignment 1 to assignment 2, and 27% improving by 5%-9%.

Learning management system strategies

Assignments: form groups of 3-5, use multi-stage or related (easiest to most difficult) assignments with peer review and feedback.

Self-Assessments: Include correct answers and brief explanation with textbook or article references.

Graded Assessments: Include % correct, brief explanations, and textbook/article references.

Conclusion

Especially in the online setting, feedback is a primary teaching tool for instructors. With initial teaching provided by the course (videos, video lectures, web pages, articles, etc.), the instructor’s time is better spent clarifying, supporting, and extending this base as opposed to repeating it.

