I finally finished working on this assessment project. On one level, it was a good learning experience about how assessment folks think.
We used a rubric with three categories, read 40 of the same basic assignment across many sections, and rated each within the three categories. Then the powers that be shuffled, and we read and rated 40 more, each of which someone else had rated in the first round. (But the list included more than just our assigned pieces, so there were three screens of 30 each, plus a shorter fourth page.)
Everything was supposed to be done online by unique identifying number.
I printed out my list of numbers and wrote things down so I could do the rating entries in batches.
And that's when the real frustration began.
To enter the rating:
Pull up the list in one tab and the rating page in another tab.
Copy a number from the list, and click over to the rating page.
Click on "add a rating" to open a new dialog box thing.
Paste in the number.
Click on a drop down box and pick a number. Do the same for the second category.
Scroll down so you can see the final category. Click on the drop down box and pick a number.
Click on the save button.
Rinse and repeat.
*** Why use a drop down box rather than just a set of horizontal buttons? Why not make it so that the full dialog box is visible without scrolling? They could easily save four motions per entry. That may not seem like much, but four extra motions times 40 entries is 160 wasted clicks.
The final step is to enter that you've done a specific assignment.
Go to the list again.
Click on a button next to the identifying number. That opens a separate dialog box.
In that box, put a check in a "done" box. Hit Save.
The "save" takes you back to the first page of the list. So if the piece you just marked wasn't on the first page, you have to scroll down to the bottom, where they put the button to go to the next page (but not, of course, a choice to jump straight to the third or fourth page).
*** Why not just a click on that first list to mark a given identifying number's assignment as done?
I don't think I'm super picky about online document design, but when you're entering 40 pieces, you notice stuff that makes it less efficient.
And I do think the inefficiency reveals that the people designing these assessments don't do much pre-testing. If they don't think through the data-entry basics, I doubt they think through the rest of their assessment stuff any better.
I've been through three or four big assessment "pushes" here at NWU so far. It started with portfolios and exit interviews; after some years of putting in all that effort, we were told that the assessment folks didn't even look at the data, and we had to do this other thing.
The other thing got done for a couple of years, and then the assessment gurus told us they hadn't looked at the data because it was useless, and we had to do yet another thing.
And so forth.
There's a pattern of bad planning, no pre-testing of the plans to make sure they'll work, little thinking ahead, and a whole lot of wasted faculty (and in the portfolio days, student) time.
*** If I'd designed the document, here's what it would have looked like.
A list with columns for the assessor's name, the unique identifier, and the three ratings, then a "done" checkbox, a "save" button that actually saves it if you're concerned, and maybe a final "edit" button so you can make changes just in case. (If really necessary, a "comment" button that opens a place to add a comment.)
On each screen, top and bottom, page arrows and number choices so you can move from page 1 to 4 with one click.
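Just to make that concrete, here's a rough sketch of what one row of such a list might look like, written in TypeScript against the browser's standard DOM. To be clear, this is my own illustration, not anything from the actual assessment software; every name in it (Rating, renderRow, saveRating, the 1-4 score scale) is invented for the example.

```typescript
// Sketch only: one row per assignment, everything visible, one click to save.
// All names here (Rating, renderRow, saveRating) are hypothetical.

interface Rating {
  assessor: string;
  id: string;                    // the unique identifying number
  scores: Array<number | null>;  // one score per rubric category
  done: boolean;
}

function renderRow(r: Rating, tbody: HTMLTableSectionElement): void {
  const row = tbody.insertRow();
  row.insertCell().textContent = r.assessor;
  row.insertCell().textContent = r.id;

  // Each rubric category gets a set of visible radio buttons:
  // no drop downs to open, nothing to scroll.
  r.scores.forEach((_, cat) => {
    const cell = row.insertCell();
    for (let score = 1; score <= 4; score++) {  // assuming a 1-4 scale
      const btn = document.createElement("input");
      btn.type = "radio";
      btn.name = `${r.id}-cat${cat}`;
      btn.addEventListener("change", () => { r.scores[cat] = score; });
      cell.append(btn, ` ${score} `);
    }
  });

  // "Done" is a checkbox on the same row, not a separate dialog.
  const doneBox = document.createElement("input");
  doneBox.type = "checkbox";
  doneBox.addEventListener("change", () => { r.done = doneBox.checked; });
  row.insertCell().append(doneBox);

  // One save button per row, and no page reload, so you stay put in the list.
  const saveBtn = document.createElement("button");
  saveBtn.textContent = "Save";
  saveBtn.addEventListener("click", () => saveRating(r));
  row.insertCell().append(saveBtn);
}

// Stand-in for whatever the real system would do with a completed rating.
function saveRating(r: Rating): void {
  console.log("saved", r);
}
```

The whole point is that every motion happens on the list itself: no second tab, no dialog, no scrolling, and saving doesn't throw you back to page 1.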
But yeah, I'm just an English professor. It's not like I know anything about texts.
And that task done, I turn back to starting my summer anew! Yesterday, I mowed, harvested a bunch of yellow squash (cut them up and put them in freezer bags for winter crockpot use), and went out with a friend for some biking/paddling, with fresh strawberries for a snack.
Fortunately for us, our chair did all of this for our last accreditation. But he was complaining about exactly the same things. It must be some lousy company selling the same software to everyone.
So many clicks, so little time--this is one of my pet peeves! Years ago (what am I saying--decades ago!) I had a temporary job testing instruction manuals for a computer company. I sat there being the idiot end user trying to follow directions, and then I reported on my experience to the Powers That Be so they could improve the product. Judging by the outrageous arrangement of programs like the one you describe and others I've encountered, this kind of testing is sorely needed today!
We generally do our assessment exercises in a group, in the same room (bring your own device or the university will supply an iPad), with the person who set up the forms (using third-party software) present and running the show. That arrangement has its disadvantages (I'd much rather have spent several late-May days outdoors), but it also has the advantage that the designer (at least the end-point designer) of the system is present and sees how well it goes. I think we've also got an unusually good assessment person; she's genuinely interested in helping faculty design mandated assessments so that they cause the least disruption possible, and maybe even yield some useful information (or at least give faculty across departments the chance to talk to each other a bit about matters of mutual interest; that -- and the modest stipend -- is why I attended ours).
I would not have any hair left after that exercise - and the first person I saw who was vaguely responsible would be lucky to be alive. That's all.