Ron Mourad, Religious Studies

I have been teaching Intro to Christian Thought for 17 years, and in that time, I’ve developed grading habits that I thought were working pretty well. I usually give fairly detailed notes on paper drafts and final papers, but pretty limited (and almost entirely critical) feedback on essay exams. I want students to persist in their studies, however, and, according to this week’s ACUE course module, giving more specific and more positive feedback helps them do that.

I therefore decided to accentuate the positive, point out how both God and the devil are in the details, and test the truth of several other clichés while grading a recent set of blue book exams.

The challenges of grading this way are clear enough. It took me almost twice as long to grade these exams as it usually does. The truth is, I’ve developed my teaching habits from a series of compromises over the years between what’s ideal and what’s possible. It will be difficult for me to find the time to grade essay exams this way every time.

The successes are also clear. I asked students to review their graded exams and write anonymous assessments of the value of the feedback they received. The responses were overwhelmingly positive. One student wrote that they usually didn't look too carefully at exam feedback, but that doing so in this case was extremely helpful and clarified how they might study more effectively for the next exam. Another student wrote that it was encouraging and helpful that I acknowledged strengths in their performance as well as weaknesses. It's not surprising that students value specific comments highlighting both what they did well and what they didn't, so that they can adjust how they approach future exams.

The bottom line is that giving more specific feedback (both positive and negative) on essay exams worked very well, and the students reported its value to me explicitly, but it takes significantly more time than making quick notations about missing or incorrect information. I think this exercise will shift the balance of compromises I make in the future between the time available for grading and my ideal professional goals. It taught me that this type of feedback on essay exams is probably more important than I thought.