Last November, I wrote about ways of influencing behavior in my statistics class, and issued a challenge to the esteemed Paul Hebert to respond. Being the mensch that Paul is, he responded with a series of posts that you can read here and here. I took Paul's suggestions to heart and not only raised the grading scale, but also asked students about their grade expectations and how they planned to achieve them. The grading scale was published in the syllabus, and students were asked to complete and submit a short survey within the first two weeks of class answering the following:
1. What specific grade is your goal for the class?
2. List 3 action steps you will take to achieve that grade.
3. At what time and location will you study and work on statistics homework? Be as specific as possible.
We are in the second week of the semester, and I repeated the above exercise with my two sections of the statistics class. As I pored over the surveys I received, I decided to look back and see how students had met their expectations from the previous semester.
In section A:
11 students stated their specific grade goal was an "A" (4.0)
5 students stated "A or AB" (which I counted as 3.75, though we do not offer an A/AB grade)
13 students stated "AB" (3.5)
2 students stated "AB or B" (3.25)
3 students stated "B" (3.0)
none listed a grade below a "B"
The average expected grade was thus a 3.63
Did students in this section meet or exceed their goal?
3 exceeded their grade expectations
4 met their grade expectations
27 fell below their grade expectations (with 17 receiving a full grade or more below their expectations)
The average grade received was a 2.60
What about section B?
18 students set their goal as an "A"
11 students stated a goal of "AB"
2 students stated a goal of "AB or B"
3 students stated a goal of "B"
Once again, no student listed a grade below a "B"
The average expected grade was 3.71
Once again, did students in this section meet or exceed their goal?
0 students exceeded expectation
12 met expectations
22 fell below expectations (with 13 receiving a full grade or more below their expectations)
The average grade received was a 2.91
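For anyone who wants to check the arithmetic, here is a quick Python sketch of the student-weighted mean behind those expected-grade averages. The grade-point coding is the one described above (A = 4.0, "A or AB" = 3.75, AB = 3.5, "AB or B" = 3.25, B = 3.0); nothing else is assumed.

```python
# Student-weighted mean of expected grade points, per the coding in the post.

def weighted_mean(goals):
    """Average grade points, weighted by the number of students at each goal."""
    total = sum(points * count for points, count in goals.items())
    return total / sum(goals.values())

section_a = {4.0: 11, 3.75: 5, 3.5: 13, 3.25: 2, 3.0: 3}  # 34 students
section_b = {4.0: 18, 3.5: 11, 3.25: 2, 3.0: 3}           # 34 students

print(round(weighted_mean(section_a), 2))  # 3.64 (exactly 3.6397...; reported above as 3.63)
print(round(weighted_mean(section_b), 2))  # 3.71
```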
What do I glean from this info?
1. Students have really high expectations. In a course such as statistics, where students aren't exactly beating down the door trying to get into a section, I would have thought grade expectations would have been a bit more tempered.
2. Are goals and expectations the same? Is a student's goal the same as what they expect to receive? Should I have also included a question about what grade they expect to receive?
3. Was an opportunity missed? Like most schools, we don't publish the average grades given (let alone the performance evaluations) of a professor. Students rely on the grapevine to tell them about the quality and difficulty of a class. While I publish and talk about the impact of missing class and completing homework on grades, I have never published the overall average grade for the class. Would having that information have led students to lower their expectations?
As mentioned, students were once again required to state their grade expectations for the semester. What is the breakdown across the two sections?
30 stated their goal was an "A"
9 stated their goal was an "A or AB"
12 stated their goal was "AB"
3 stated their goal was a "B"
1 stated his/her goal was a "C"
In sum, 75% expected to earn an AB or better, with an average expected grade of 3.75.
If this semester was anything like the last, many are in for a rude awakening.
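The same quick sketch applied to this semester's combined responses is below. One assumption here: the single "C" goal is coded as 2.0, since only the other grade-point values are spelled out above.

```python
# This semester's combined grade goals across both sections (55 responses).
# Coding: A = 4.0, "A or AB" = 3.75, AB = 3.5, B = 3.0, C = 2.0 (assumed).
goals = {4.0: 30, 3.75: 9, 3.5: 12, 3.0: 3, 2.0: 1}
mean = sum(g * n for g, n in goals.items()) / sum(goals.values())
print(round(mean, 2))  # 3.76 (exactly 3.759...; reported above as 3.75)
```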
10 comments
Great stuff.
A couple of thoughts/questions/ideas.
First of all - regardless of the data from the specific student surveys - were the grades higher on average overall than in those semesters where the survey wasn't done? In other words, did asking students to set goals increase or decrease the overall grade average in the class?
Second... is this class the students' first exposure to statistics? Do they have any clue what it is about and how tough it can be? Accurate goal setting requires some experience in the area in which you're setting goals.
Third... how was progress toward their goal communicated? Were they reminded regularly about their goal and the steps needed to achieve it? One of the things I mentioned early in the initial posts was to show that attendance was a predictive variable for their grades. How was that presented to them?
Lastly - publishing previous averages would have a huge impact on their expectations. However, I would publish them with caveats - something like:
90% of students received their expected grade when they did all the homework and attended 90% of classes. 100% of those that missed more than 2 classes were 2 full letter grades below their planned grade.
In other words - communicate that their actions have impact.
Unfortunately, without sharing historical data we all fall victim to the "Lake Wobegon effect" - where everyone is above average.
by Paul Hebert on September 10, 2010 at 9:01 AM. #
Paul,
Thanks for the well thought out response.
1) Last semester was the first time I had used the survey. I also had the lowest grades in 3+ years. Some of that may be explained by the introduction of a new textbook as well as the higher grading scale.
2) The prerequisite for the class is algebra or pre-calculus. I do not know whether they had any exposure to statistics in high school.
3/4) In the syllabus, I published the following:
Students often ask, “what does it take to be successful in BUAD 284?” Here is what I see as an excellent strategy to succeed.
1. Attend class regularly. Over the past two semesters,
* Average grade (on a 4.0 scale) with zero absences: 3.75
* Average grade with 1 absence: 3.58
* Average grade with 2 absences: 3.22
* Average grade with 3 absences: 3.17
* Average grade with 4 absences: 2.96
* Average grade with 5 absences: 2.81
* Average grade with 6+ absences: 2.07
2. Do the homework problems assigned. On one exam last semester, students who scored the equivalent of 21 or higher on their chapter homework averaged an A, while those who scored the equivalent of 16 or lower (or did not turn in their homework) averaged a CD.
So students had previous data on how attendance and homework grades impacted performance. I also used to meet with students twice in the semester - once in the first two weeks to address any initial questions they had, and once at the halfway mark to discuss midterm grades. However, I did not have the time to do so last semester. Students still receive constant feedback in terms of grades on exams and homework (which are returned in a timely fashion -- within 2 class days of when the assignment was completed), with which they can measure how they are progressing toward their goal.
I have also chosen an exam (around the 6th or 7th week) and shown how attendance and homework performance related to the exam grade (i.e., students with 0 absences averaged xxx on the exam, those with 2 absences averaged xxx; those with an 80% on the chapter homework scored xxx on the exam).
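As a rough illustration of how tightly those published absence-bucket averages track attendance, here is a quick sketch. Two caveats: it treats the "6+" bucket as 6, and it ignores bucket sizes (which aren't published), so this is a correlation across bucket means, not the true student-level correlation.

```python
# Absence buckets and average grades from the syllabus excerpt above.
# Illustrative only: "6+" is treated as 6, and bucket sizes are unknown.
absences = [0, 1, 2, 3, 4, 5, 6]
avg_grade = [3.75, 3.58, 3.22, 3.17, 2.96, 2.81, 2.07]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(absences, avg_grade), 2))  # about -0.95
```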
Thanks again for the feedback
by Matthew Stollak on September 10, 2010 at 11:05 AM. #
Curious if you could "normalize" the grading to see if, in fact, setting a goal - apart from the tougher grading scale - had any effect.
Is there any way to do a control group in the future?
It would seem that setting goals was counterproductive - which would also seem to invalidate a ton of research outside the classroom.
This is why this stuff is fun... and maddening at the same time. The grades should have improved given the structure, but they didn't. Obviously, we're not working with a constant population, as classes change each time, but overall it should have produced a bit more effort and therefore better results.
From a purely non-scientific standpoint, were there any perceivable differences in comprehension based on class participation? Were they grasping more of the concepts, even if they were getting the answers wrong on tests?
You've got me extremely curious here...
by Paul Hebert on September 10, 2010 at 11:19 AM. #
This is very interesting; I'm enjoying following the comments.
by Robin Schooling on September 10, 2010 at 9:03 PM. #
For most students and many professors, grades may be the one and only relevant measure of successful or unsuccessful learning and achievement. There may be other goals/expectations, along with grades, worth assessing at the beginning of a class: What did the student wish to learn, and why? What would he or she consider the relevance of the material to be learned in the course to their present and future life - as a daily consumer of news and information provided by various media, including by their government, or as an employee/employer in their chosen profession? At the end of the class, they could be asked whether these expectations were met. I would predict that those who initially perceived the material in a course as relevant for their present and future life would expect to receive (and would actually receive) higher grades than those who enrolled solely because it was required and perceived no utility in taking it. I would also predict that those who initially enrolled solely because it was required but did, at its conclusion, see the material's relevance and value in their present and/or future life would also receive higher grades than initially expected.
by Anonymous on September 11, 2010 at 7:46 AM. #
@Paul - Good questions. I made a number of changes from the fall semester to the spring semester that might limit my ability to draw conclusions about the results.
Changes were:
1. Introduction of the survey - students in previous semesters were not queried about their goals nor asked to commit to studying at certain times.
2. Introduction of a tougher grading scale (per your suggestion).
3. Elimination of points off for absences (per your suggestion). In prior semesters, students would lose 10 points per absence for any absence incurred after 4. I eliminated this.
4. Required office hours with students. In prior semesters, I would have met with a student during the 3rd week of class to make sure I knew his/her name and to gauge how he/she was feeling about the class. Then, in the 8th/9th week, I would hold a midterm meeting where I would give them their grade as if the semester ended on that day. They would receive a piece of paper detailing how they were doing on exams, homework assignments, number of absences, etc. This was a significant commitment of time (up to 5 minutes with each student, and I had upwards of 100 students a semester), and due to time constraints last semester, I simply couldn't fit it in.
5. I taught an overload section. Due to some staffing problems, I had to teach an additional section of a class. That might have impacted the time devoted to students in other classes.
6. Introduction of a new textbook. Perhaps the new textbook might have affected learning (i.e., different questions, different examples), but means are means. Standard deviations are standard deviations. Correlations are correlations. The material covered did not change one iota.
7. Introduction of a new method of collecting homework. In previous semesters, I would randomly collect four homework assignments (out of the 13 possible chapters covered) and grade them simply on completeness: were they making a good-faith effort to complete the problems?
With the introduction of the new textbook came a new homework assignment approach. McGraw Hill has a software program called Connect (http://connect.mcgraw-hill.com/connectweb/branding/en_US/default/html/instructor/index.html) where I could assign problems and students could submit their answers online. There were two types of problems - static (where all students get the same numbers and wording) and algorithmic (where there were upwards of 20 versions of the same word problem, but with different numbers). In addition, students could check their work along the way to see if they were getting the right answer.
My hope was that with this new approach, students would get both more, and much better, feedback than under my simple random-collection approach from previous semesters. They did. I saw a significant increase in students coming by my office during office hours seeking assistance. Performance should thus have improved.
One problem - the software had a number of glitches, particularly early in the semester. Answers to at least 5 of the problems were wrong, which, coupled with a student struggling to grasp the material, made for unhappy customers. Add to that the fact that the software went down for maintenance for 4 hours on the night before a chapter assignment was due, and I had a near riot on my hands.
In the end, there had to be differences in comprehension, which I hypothesize were due to the new textbook and homework collection approach. What I thought would turn out positive had the opposite effect.
I also saw my teacher evaluations take a serious nosedive. I have taught this course 16 times since the spring of 2005, and only 1 section out of those 16 had ratings lower than what I received for these two sections. I had been .20-.30 points higher than the college average until last semester. Frustrating.
There is certainly a way to do a control group in the future. As I always teach two sections of the class, one section could receive the survey and set goals, and the other would not.
by Matthew Stollak on September 13, 2010 at 3:09 PM. #
I went back and checked the initial goals for this experiment.
There was one: How to influence behavior to decrease the number of classes missed. Attendance was the goal.
How did that work out in the new structure? Did attendance increase?
The reason I ask is that I think I fell into the same trap many clients do... basing success/failure on something that wasn't considered part of the plan. We (you and I) started comparing the plan against outcomes (grades) it wasn't designed to drive.
The goal was to get attendance higher. Did the new structure do that?
by Paul Hebert on September 13, 2010 at 5:59 PM. #
Paul,
Results show a slight improvement, if that.
In the fall semester of 2009,
Section A averaged 3.68 absences, with 9 students missing 5 or more classes
Section B averaged 2.43 absences, with 6 students missing 5 or more classes
In the spring semester of 2010,
Section A averaged 3.26 absences, with 8 students missing 5 or more classes
Section B averaged 2.31 absences, with 5 students missing 5 or more classes
I also looked at grades in Spring 2010, since I raised the grading scale. The average grade given in Section A was 2.6. Under the old grading scale, the average grade would have been 2.81, and 14 students would have earned a higher grade if the old scale were still in place (i.e., a "B" under the current grading scale would have been an "AB" under the old scale).
In Section B, the average grade given was 3.0 (3.2 under the old scale). Again, 14 students would have earned a higher grade if the old scale were still in place.
by Matthew Stollak on September 14, 2010 at 2:31 PM. #
It might make sense to keep everything the same but revert to the old scale.
This is why I don't like studies using college students - they are different than the normal population :)
by Paul Hebert on September 15, 2010 at 10:43 AM. #