If you're in education, you know that we've been working in a variety of ways to "assess" student learning for something like two decades now. Before that, we graded students. It wasn't perfect, of course, but it was there. We assumed that a student who got an A on an exam had demonstrated that they'd learned the material being tested (and hoped that exam grades weren't primarily a matter of being good at bubble tests or whatever).
But assessment folks have told us forever that grades aren't the same as assessment; assessment has to be "outcome driven" and such. And the assessment people told us that what's important is that we all use the information to tell us how we're doing as teachers, and to help us improve.
And we needed to assess programs and not just how students do in individual courses. So we couldn't assume that a straight A math student was succeeding at learning math. We had to set goals, and then figure out outcomes that would reflect those goals, and then figure out a product that would demonstrate the outcomes that would reflect the goals.
Let me tell you about some of the history of our Underwater Basketweaving Program's assessment adventure.
In Underwater Basketweaving, they set some program goals. For example, they want students to understand the structure and context of underwater-woven baskets.
Some years ago, the idea was that they'd look at the baskets the students wove underwater and the papers they wrote about baskets woven underwater in a sort of portfolio. But then someone would have to collect the portfolios, and someone would have to read the portfolios, and there was no money to pay someone to do this, since the expensive assessment guru was too busy making forms for departments to fill out to actually do any assessment himself.
So portfolio assessment went away, and the department struggled for several years to meet the ever-moving targets and to do the paperwork the assessment guru set for them.
Finally, the assessment guru said that they had to do a grid for their department goals, and even though they started with 25 goals for their students, they quickly realized that they couldn't fill out paperwork for 25 goals, not without making paperwork the primary job rather than teaching and research and weaving underwater.
So, in line with the assessment guru's rules, they had to attach an outcome to the main goal: all students should be able to describe the structure and context of an underwater-woven basket, since understanding has to be measured in some way other than "student nods knowingly."
And then they decided that in one of the early courses in the major, students would do an assignment where they'd describe the structure and context of an underwater-woven basket.
So the guru suggested they make a grid and turn in the numbers this way: for each goal, they'd have three possibilities: the student exceeded the expectations of the outcome, the student met the expectations, or the student didn't meet the expectations. And then someone would look at the assignments and fill in a separate grid piece for each of the outcomes. And to make it easy, they'd rotate the outcomes, one a year, so that someone would only have to fill in one of the grids after reading all the papers for these sections.
But again, there was no money to pay people to look at the assignments for assessment.
So the assessment guru said that each faculty member should be responsible for filling out the grid for their section.
And the faculty met, and noticed that they were already grading these assignments, so why not translate grades into the assessment grid? They decided that As would be exceeding expectations, Bs and Cs would be meeting expectations, and Ds and Fs would be not meeting expectations. And when the assignment came in, they dutifully tabulated the grade information into the assessment grid.
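The translation really is that mechanical. Here's a minimal sketch in Python; the grade list is invented for illustration (these are not the department's actual numbers), but the mapping is the one the faculty agreed on: A exceeds, B/C meets, D/F doesn't meet.

```python
from collections import Counter

# The faculty's grade-to-grid mapping:
# A -> exceeds, B/C -> meets, D/F -> doesn't meet.
GRID = {
    "A": "exceeds expectations",
    "B": "meets expectations",
    "C": "meets expectations",
    "D": "doesn't meet expectations",
    "F": "doesn't meet expectations",
}

def tabulate(grades):
    """Translate a list of letter grades into assessment-grid percentages."""
    counts = Counter(GRID[g] for g in grades)
    total = len(grades)
    return {category: round(100 * n / total) for category, n in counts.items()}

# A hypothetical section of 25 students -- made up for illustration.
section_grades = ["A"] * 14 + ["B"] * 6 + ["C"] * 3 + ["D"] * 2
print(tabulate(section_grades))  # 56% exceeds, 36% meets, 8% doesn't meet
```

No committee, no portfolio readers, no budget line: just the gradebook, relabeled.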
Last week, the assessment gurulings in the department gave the department assessment feedback (called "closing the loop" in assessmentese). And here's what they found:
58% of students were exceeding expectations.
36% of students were meeting expectations.
6% of students weren't meeting expectations.
You would think this would be pretty good news, right? That department must be doing something really good to have so many students meeting and exceeding expectations! Let's give them a raise!
You would be wrong.
No. The department was chastised. They obviously had low expectations if so many students were exceeding those expectations (besides, you know, grade inflation?).
They needed to change the way they do assessment so that the numbers would be more in line with what the assessment guru says they should be.
Let's consider: is there a way to tell if the department really is doing a great job preparing students (who are also, perhaps, dedicated, smart, and hard-working)?
In the new competitive world here, the department will be competing with other departments based, in part, on assessment data. Isn't it reasonable to think that all departments will have, perhaps, inflated numbers? Or perhaps everyone is doing a really good job (and how could you tell?).
In the new competitive world here, we're working hard to retain students, so we put a lot of effort into helping students learn stuff and demonstrate that learning. Gone are the days of more than half of students in an intro chem course getting Cs or below based on a brutal curve. Instead, we expect chem teachers and departments to work some amazing magic, and they seem to do it.
In the new, brutally competitive world here, a world where the unions are broken and tenure is more imagined than real, are departments being asked to do a sort of confessional move to help prepare people for firings to come? (I'm listening to a book on tape about the history of modern China, and this sounds uncomfortably like some of the brutal movements in which a lot of people suffered and died because some cadre or other forced them to confess and then beat or executed them.)
It's not hard to imagine our crazy administrator saying, "Look, that department I don't respect [say, philosophy] has self-confessed and shown that its students are only meeting expectations, so let's can the major, keep adjuncts to teach some GE courses, and use canning the major based on bad assessment as a good reason to fire those pesky professors who ask hard questions about ethics and such during meetings."