One of the talks at TAM 6 focused on "niche pseudoscience". Steve Cuno talked about how you can investigate pseudoscience in your particular specialty, in his case advertising. That made me think about pseudoscience in education.
Here I won't talk about any specific beliefs or pseudosciences. Rather, I will just mention an overall attitude about teaching methods. It is very difficult to determine what works in education. Overwhelmingly, when you talk to educators about what they do, they will mention some technique they use that works or doesn't work. The problem comes when you ask how they know whether it works. Usually, it is just a feeling: it seemed to work, they seemed to connect with students, the students were excited about it. The problem, for anyone familiar with the nature of evidence, is that this is all subjective. It is also prone to confirmation bias. We notice the students it worked with but fail to see the students we lost (or sometimes the other way around).
Chances are most techniques work well with some students and less well with others. We are likely to see the examples where a technique works and either fail to see or ignore those where it does not. After all, which students are most likely to give us feedback? And a bigger problem remains: a student's subjective feeling does not necessarily mean the technique really produced the outcomes we desire.
A slightly better way to assess outcomes is with student surveys, and probably most educators use these. If they are anonymous and you get responses from all students, the results are likely to be a little more representative, although students may still shade what they write to please whoever will read it, just as respondents do in all surveys. And again, these capture only the students' subjective feelings about how they learned.
The best way to determine what works is research, and it is the attitude of many in the sciences toward this research that I would most like to discuss. There is research on education. One thing it tells us, fortunately, is that student surveys are usually fairly accurate assessments of actual student learning, so some of the above techniques are at least somewhat useful. However, when professors get poor student evaluations, chances are they will dismiss them, saying the scores are only low because they were too hard a grader, or offering similar justifications. Even when they have actual data, they ignore it.
There is extensive research that shows that a pure lecture format does not work well for most educational outcomes. Some level of active learning works much better. This is a robust result of many different studies. The details of what does work can be murky and this does not mean that lectures have no role to play, it just means that 100% lecture is not the best way to go. I have known professors who completely dismiss all of this research as some kind of touchy-feely pandering.
What I find unscientific is this attitude toward research, especially among science teachers. Scientists should know that personal experience is not reliable. Scientists should know that research is better. Now, I fully agree that the quality of research in education isn't always as good as in other areas. So if someone had actually read some studies and found specific methodological flaws in the research, I would have no problem with them dismissing it. But that is not what I often see. Rather, there is the feeling that they know what works best in teaching and they aren't going to have somebody from an education department tell them what works and what doesn't. They aren't going to worry about student evaluations, since they know what works, and the studies showing that evaluations are accurate can be ignored. After all, 5 of their 100 students tell them they are doing a great job.
I admit I use just a general feeling that something works, like everyone else. The truth is that in education, often that's all we have. I try to use student evaluations. For me, the best assessment would come from students five years after I have them, when the sometimes subtle effects of education can truly be seen. What I do not approve of is the wholesale rejection of the research that has been done, without any familiarity with it at all. Education is difficult to measure, but as scientists we should not have a double standard: rejecting personal experience in one field but embracing it in another, or requiring data in our own fields while ignoring data in education.
Friday, June 27, 2008
2 comments:
I definitely agree that there is a strong double-standard when talking to science teachers, especially those of the self-proclaimed skeptical sort. The question is how to engage in meaningful discussion with someone about what works and doesn't work and why, getting beyond that "just a feeling" argument. What other non-subjective ways could we use in our evaluations?
There are at least some established methods. As I said, student evaluations have been shown to be fairly reliable. Some teachers claim that you can buy good evaluations with easy grades, but the evidence does not support that. Standardized tests are also a way to get objective results. I said that surveys of graduates would be great, but the problem with that is the difficulty in getting a significant random sample to respond.
Evaluating specific techniques is more difficult. Ideally we would have two parallel classes, one with and one without the technique. That is hard to do in most settings, and it is difficult to make sure all other variables are held constant. If I just try a single new lab or an exercise in class, I am not sure how to evaluate it objectively. Student feedback, whatever its biases, seems to be the main tool most of us have for smaller changes. We should at least try to collect such feedback systematically.