Tuesday, May 06, 2014

Goals, Outcomes, and Madness

In the ever-constant race to update and revise and be prepared for our next departmental review, we're doing these projects to make our courses "outcomes based."   For every course, we're supposed to have goals (which, as I understand it, are supposed to have to do with what we want students to learn), outcomes (which, as I understand it, are supposed to have to do with what students produce to show that they've reached the goals), and sub-outcomes (which, as I understand it, are what students actually produce to show they've reached bits and pieces of the bigger goals and outcomes).  We're supposed to have between 1 and 4 goals, between 1 and 4 outcomes (one for each goal), and, for each outcome, 1-3 sub-outcomes.

I want to brainstorm a bit here, because I find this sort of stuff intensely frustrating and not fun, and nowhere near as helpful as the edu folks keep promising it should be.

Let's take, say, an intro Shakespeare course, a genre-based Shakespeare course.

What should the goals be?

1.  I want students to get better at reading difficult texts.

2.  I want students to get a sense of how damned good Shakespeare is.  (I think of this as building in an understanding of some plays, how genre works, how imagery and such work, how character development works, and on and on.)

3.  I want students to get a sense of the historic, cultural, and social contexts of early modern theater.  (Yes, that's huge.  HUGE!)

Do those goals make sense?  Are there better goals?  Are there more goals?

Edited to add:  There should be something about writing in there, maybe in the first one, reading and writing about difficult texts?


  1. Anonymous, 6:08 AM

    Under #1, you're then supposed to list the components of reading difficult texts. #2 it looks like you have covered. #3 you can probably just split the subs into: understand the history of early modern theater, understand the culture... etc.

    And remember that at some point they may come back and ask you how you're going to *evaluate* that the students have gotten these bullets (not yet, they say, but somewhere down the line, they say). We've been told that testing isn't enough, though every item on your tests etc. does have to map back to one of your learning outcomes (otherwise why test on it!), but that portfolios are wonderful (blech), and we're allowed to use surveys about how the students *feel* too.

    We've also been told that there's starting to be a lot of push-back on this fad, and it may be on the wane, but on the wane means it'll probably be gone from our lives in 10 years...after the state legislature adopts it and pushes on it hard-core.

  2. N&M have it right, I think (and sound more directly experienced than I with all of this).

    I do have some mostly-secondhand experience (there are a few advantages to a 4/4, no-service position, which include mostly not being involved in designing this stuff; on the other hand, not being involved in the planning, but being involved in the execution of what others have planned, can be *very* frustrating, which is why I end up volunteering to participate sometimes), and, from that perspective, I'd say you're on the right track. I'd also point out that some of the things that make N&M say "blech" (and probably rightly so, from their disciplinary perspective) work very well for lit types -- e.g. portfolios can contain papers and/or similar projects, which you're assigning already. You can, indeed, also test students' feelings (or, in my experience, awareness -- e.g. of concepts) via surveys or similar. The problem with that is that students don't take surveys very seriously, and it's entirely possible for a significant portion of the class to claim to be completely unaware of something on which you spent considerable class time (e.g. our school's accreditation-related initiative). That's frustrating, but/and probably says more about the measuring instrument than what's actually going on with the students.

    Mostly, I think, you need to work backwards: figure out what you're already doing (and is working well to meet your goals for the course, even if they're implicit), then describe it in the required format. If you can possibly help it, don't add assignments (or, if you have to, make them useful, and try to incorporate them into something you do already -- e.g. one question on a final exam). Written reflections on what students learned in the class seem to go over well with evaluators, and can often be incorporated into exams. I suspect an exam question in which students are asked to apply skills they've learned in class to novel materials (a literary passage and/or contextual material) would work. Do make the assessment/evaluation worth something toward the students' grades, or you run the risk of their blowing it off.

    You probably have a statistical/assessment office. If they're good, they can be very helpful, both in designing assessments and in figuring out how to efficiently evaluate them (e.g. by generating a random sample from a class list). If they're not so good, well, that will add to the frustration.

    I'd also start talking early about compensating people for performing the assessment work -- perhaps trickier if it counts as service, but, still, it's a significant addition to the work load (one of many explanations for why faculty work seems to be getting harder and harder). Even if you don't actually get money, such discussions may have the effect of scaling back the degree of complication in the project (on the other hand, it may lead your interlocutors to push for more mechanical forms of assessment -- which probably fit better with their expertise/job description, which includes designing mechanically-scorable assessments, but not helping to design/assess more substantive ones -- but that's when you argue that that's not how the kind of skills you're teaching can be effectively assessed. In other words, play all the angles/cards -- budget, time, authentic assessment, etc., etc.).

    1. P.S. I'd also agree that this sort of thing may be (slowly) on the wane, in part because it is, in fact, quite expensive -- one of many reasons spending on administration in higher ed -- and ed generally -- has risen so fast and so much. Of course, there are also whole blocs of the industrial/educational complex -- test/curriculum publishers, universities with grad programs in higher ed administration -- that have significant financial/institutional stakes in perpetuating it, so there may be significant pushback.

  3. Those seem like good goals to me. I'd rephrase #2 to something like "Develop understanding of Shakespeare's work in relation to standard literary genres, techniques, and style," and then you have the subpoints worked in. I'm not sure if you want to say something about Shakespeare's importance in that goal or not -- that's what I'm kinda assuming is why you want them to understand how good Shakespeare is, but I don't know if it needs to be in the goal.

    These are big enough it doesn't seem to me you need more goals. And these are the sorts of things that strike me as really straightforward to assess through traditional (or non-traditional) assignments. I usually put my course goals and outcomes on a little chart for students so that they can see how each part of the class is meant to allow them to improve, or me to assess their improvement. So the chart usually looks something like (only in an actual table, but I'm lazy and don't want to try and figure out how to do that in a comment):
    Goal 1: Close reading skills - short responses - textual analysis paper
    Goal 2: Awareness of connections between literary texts - class discussion - comparative paper
    Goal 3: Historical knowledge and relationship to literary texts - class lectures and readings - historical context presentation and final research paper

    I stole this chart thing from one of my undergrad professors, who had them on his syllabi. I found it most useful when I was first starting to teach because it made me think about every assignment and activity I was having my students do: when deciding whether an activity was a good one or not, I asked myself whether it helped hit one of the core goals, and if it didn't, then I didn't do it. I still make one of these charts every time I design a new class, but after I've been teaching a class for a while they start to disappear from my thinking.

    1. Anonymous, 9:27 AM

      Every time we have one of these meetings (and I think we have been having them about once every two months for at least 4, maybe 5 or 6 years now...) we always have to distinguish between what's useful for the State and University and Accrediting agency, and what's actually useful for our students in terms of Learning Outcomes. It's generally pretty depressing, because often we'll do something stupid to check a box, since doing it the right way would add a lot more time and hassle, might not check the box, and would only be incrementally better than what we're doing already. (This is especially difficult for us as Social Scientists because we're always having these discussions where someone can explain exactly why what we're supposed to do is heavily flawed. But then the chair says we gotta do it anyway. Because. Boxes. Need. To. Be. Checked.)

      We did have the learning outcomes on our syllabi for required core courses already because of Uni and Department Policy many years back. That made a lot of sense -- we need consistency across professors and across years, and we need to know when in the sequence important things are being taught. Turning the outcomes from paragraphs into bullet points in order to check a box... All that did was kill trees.

  4. I don't have anything much to add except that in my fuzzy-brained and glasses-less state this morning I thought you were writing about "goats, outcomes, and madness." Incorporating goats and madness into the curriculum = a win.

    1. The meetings would be way more interesting, for sure!