Thursday, May 10, 2007

Garbage In...

The other day, I was at yet another meeting, this one involving the appropriate reed type and weaving pattern for our underwater basketweaving class. (Not really, but go with me and you'll get the point.) One of our colleagues (the Guru) was responsible for presenting data on the success of the various reed types and patterns in making baskets.

The Guru had given us information on the baskets, which, it seemed, were sort of successful, but not as successful as one might have wished. The purpose of the meeting was to get our feedback about the results: does the "sort of successful but not as successful as one might have wished" result seem to fit with what we see?

Then the questions started: why were there only a few reeds tested, and only a few types? And why didn't the data say something about the basket patterns? What if some patterns were better than others? How could we tell, if the data didn't say anything?

The Guru got a little cranky. No, he said, he knew there were problems with the data collection and further problems with the method of analysis, but that wasn't our purpose today. He wanted to know what we thought of the results.

Then the Guru's pals started in. Anyone who focused on the data or methodology was, well, almost unAmerican. Not quite, but definitely uncooperative.

Some people made vague noises: yes, we thought the baskets weren't as good as they might have been. We should make better baskets. And that was that. The Guru left satisfied that his results were full of meaning.

Except they aren't. The input (data) and the methodology for handling the data were so poor that any resemblance of the results to reality is purely accidental. We just paid for a totally useless basketweaving analysis. Further, we were intellectually careless enough not to push back against the poor results we were given. So we're going to be stuck hearing about those results again and again for the next year.

I hate that I didn't push harder, but the Guru's pals run things, and I don't. And I made noises. But what I didn't do was outright say that the Guru and his pals are intellectually careless (or unethical), and that at a university, we really should stress intellectual rigor and ethics.

Also, the "results" are so predictable as to be useless. Think about what we do here at the university. Yes, we make some good baskets, but some aren't as good as they should be at the end. There are tons of reasons for the failure to make 100% perfect baskets. Some of those reasons involve the way we make baskets, some the reed, and some the pattern used.

But even if our baskets were, say, 90% perfection, we'd still say that we're doing pretty well, but not quite as well as we would like. So frustrating!


  1. That is frustrating.

    There are cases in medicine (e.g. the Term Breech Trial) where bad research methodology has led to research that has changed medical practice. One example: assigning pre-labor stillbirths to one group or the other, and then counting the dead baby as a bad outcome in the final analysis rather than simply excluding those patients. (A dead baby certainly is a bad outcome, but it isn't related to the mode of delivery, because the baby was dead prior to labor.)

    I actually think it's unethical to do such poorly designed research, because it takes funds from better-designed research. Just like it's really wrong for your colleague to try to make you make a judgment/recommendation based on poorly designed research, especially if he's planning on acting on it (and one would argue, why do the research if you're not planning on acting on it?).

  2. MWWAK, I agree that it's unethical. In this case, the research is only to be used very locally, in our basket making, but it's still unethical, and intellectually lazy. I suppose I should be grateful that we're not likely to kill anyone with our baskets, eh?