Saturday, March 19, 2011

Re: Interactive Engagement Typically Lowers Student Evaluations of Teaching?

Some blog followers might be interested in the discussion-list post “Re: Interactive Engagement Typically Lowers Student Evaluations of Teaching?” [Hake (2011)].


The abstract reads:


**********************************************************

ABSTRACT: PhysLrnR’s Bill Goffe wrote (paraphrasing): “I thought I recalled reading here that interactive engagement typically lowers student evaluations of teaching, but I’ve not been able to find any such claims in the literature.”


Goffe's post initiated a 17-post thread (as of 19 March 2011, 15:47-0700) accessible at http://bit.ly/i9zBsd to those who take a few minutes to subscribe to PhysLrnR at http://bit.ly/beuikb.


Bill may have overlooked my post “Re: What if students learn better in a course they don't like?” [Hake (2006)]. Therein I wrote (condensing and paraphrasing):


“When I first started teaching an introductory physics course I followed the example of teaching-award-winning faculty and taught in a traditional manner: passive-student lectures, lots of exciting demos, algorithmic-problem exams, recipe labs, and a relatively easy final exam. I was gratified to receive a Student Evaluation of Teaching (SET) evaluation-point average EPA = 3.38 [B plus on a scale of 1-4] for ‘overall evaluation of professor.’ Had I continued using traditional methods and giving easy exams, I would doubtless have risen to become the U.S. Secretary of Education, or at least President of Indiana University.


Unfortunately for my academic career, I gradually caught on to the fact that students’ conceptual understanding of physics was not substantively increased by traditional pedagogy. I converted to the ‘Arons Advocated Method’ http://bit.ly/boeQQt of ‘interactive engagement.’ This resulted in average normalized gains g(ave) on the ‘Force Concept Inventory’ that ranged from 0.54 to 0.65, as compared with the g(ave) of about 0.2 typically obtained in traditional introductory mechanics courses.


But my EPA’s for ‘overall evaluation of professor’ sometimes dipped as low as 1.67 (C-) and never returned to the 3.38 high that I had garnered by using traditional ineffective methods. My department chair and his executive committee, convinced by the likes of Peter Cohen (1981, 1990) that SET’s are valid measures of the cognitive impact of introductory courses, took a very dim view of both my teaching and my educational activities.”

**********************************************************
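In the excerpt above, g(ave) denotes the class-average normalized gain. As a minimal sketch of the conventional definition (assuming pre- and post-test scores are expressed as percentages of the maximum score), in LaTeX notation:

g_{\text{ave}} = \frac{\langle \%\text{post} \rangle - \langle \%\text{pre} \rangle}{100 - \langle \%\text{pre} \rangle}

That is, the actual average gain divided by the maximum possible gain; so g(ave) = 0.54 means a class realized 54% of its maximum possible improvement on the ‘Force Concept Inventory,’ versus roughly 20% in the traditional courses mentioned above.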


To access the complete 13 kB post please click on http://bit.ly/gKWO1S.


Richard Hake, Emeritus Professor of Physics, Indiana University

Honorary Member, Curmudgeon Lodge of Deventer, The Netherlands

President, PEdants for Definitive Academic References which Recognize the Invention of the Internet (PEDARRII)


<rrhake@earthlink.net>

http://www.physics.indiana.edu/~hake

http://www.physics.indiana.edu/~sdi

http://HakesEdStuff.blogspot.com

http://iub.academia.edu/RichardHake


“Few faculty members have any awareness of the expanding knowledge about learning from psychology and cognitive science. Almost no one in the academy has mastered or used this knowledge base. One of my colleagues observed that if doctors used science the way college teachers do, they would still be trying to heal with leeches.”


- James Duderstadt (2000), President Emeritus and University Professor of Science and Engineering at the University of Michigan


REFERENCES [All URL's shortened by http://bit.ly/ and accessed on 19 March 2011.]


Duderstadt, J.J. 2000. A University for the 21st Century. Univ. of Michigan Press, publisher's information at http://bit.ly/cvJ1yI. Amazon.com information at http://amzn.to/fUnbj5, note the “Look Inside” feature.


Hake, R.R. 2011. “Re: Interactive Engagement Typically Lowers Student Evaluations of Teaching?” online on the OPEN! AERA-L archives at http://bit.ly/gKWO1S. Post of 19 Mar 2011 15:51:49-0700 to AERA-L, Net-Gold, and PhysLrnR. The abstract and link to the complete 13 kB post are also being transmitted to various discussion lists.


3 comments:

Roy Wright said...

I've just posted some comments on the conflict between instructor quality and students' perception of instructor quality, which is illustrated so well by what you've said here.

I'm curious -- do you have any thoughts on how the quality of faculty instruction can best be judged, if not by student evaluations?

Richard Hake said...

Thanks for your comment, Professor Wright.

You ask ". . . . do you have any thoughts on how the quality of faculty instruction can best be judged, if not by student evaluations? "

Thanks for asking. Yes, indeed I do. See e.g. (I'm an HTML tag dummy - you'll have to copy and paste the URLs into your browser window):

Hake, R.R. 2005. “The Physics Education Reform Effort: A Possible Model for Higher Education?” online as a 100 kB pdf at http://bit.ly/9aicfh; a slightly edited version of the article that was: (a) published in the “National Teaching and Learning Forum” (NTLF) 15(1), December 2005, online to subscribers at http://bit.ly/bvm8Ye (if your institution doesn't subscribe to NTLF, it should); (b) disseminated in “Tomorrow's Professor” Msg. #698 on 14 Feb 2006, archived at http://bit.ly/d09Y8r - type the message number into the slot at the top of the page.

Regards,

Richard Hake

Marcus Wellington said...

It is clearly a complex task (and extremely difficult by any conventional means) to establish just how actual student understanding of a concept is affected by a change in teaching method. Quite evidently, measures such as the conventional ‘student evaluation of teaching’ are gravely flawed, as the instance quoted by Professor Hake shows. (Richard Hake's post is pasted below my signature for easy reference.)