Showing posts with label cognitive impact. Show all posts

Sunday, January 5, 2014

Can the Cognitive Impact of Calculus Courses Be Enhanced: Response to Dubinsky

Some blog followers might be interested in a post "Can the Cognitive Impact of Calculus Courses Be Enhanced: Response to Dubinsky." The abstract reads:

**************************************************
ABSTRACT: In response to my post "Can the Cognitive Impact of Calculus Courses be Enhanced?" [Hake (2013)] at http://bit.ly/1loHgC4 (2.7 MB), Ed Dubinsky (2013) at http://bit.ly/JRB9Km of the RUME (Research in Undergraduate Math Education) list made 6 points that I have abbreviated below and to which I respond in this post:

1. "I agree with most of what you wrote."

2. "As far as I know, 'UME Trends' has not been archived. . . . the articles are historically important because they represent a major turning point in the mathematical community towards undergraduate education."

3. "I think more has gone on in MER during the last 20 years than is indicated in your piece. There is a special interest group SIGMAA ON RUME, which stands for Special Interest Group of the MAA on Research in Undergraduate Mathematics Education."

4. "I think we who work in RUME must acknowledge our debt to Physics Education Research."

5. "I am not quite as enthusiastic about CCI as you are. . . . How are you going to keep the educational community from using CCI to 'teach to the test' and even to cheat?"

6. "I wonder what you have to say about the C4L calculus reform project that we developed at Purdue and was funded by the NSF Calculus Reform movement?"
**************************************************

To access the complete 152 kB post please click on http://bit.ly/1iHGJOL.

Richard Hake, Emeritus Professor of Physics, Indiana University; LINKS TO: Academia http://bit.ly/a8ixxm; Articles http://bit.ly/a6M5y0; Blog http://bit.ly/9yGsXh; Facebook http://on.fb.me/XI7EKm; GooglePlus http://bit.ly/KwZ6mE; Google Scholar http://bit.ly/Wz2FP3; Linked In http://linkd.in/14uycpW; Research Gate http://bit.ly/1fJiSwB; Socratic Dialogue Inducing (SDI) Labs http://bit.ly/9nGd3M; Twitter http://bit.ly/juvd52.

Thursday, October 31, 2013

Why Do Colleges Tie Academic Careers To Winning the Approval of Teenagers?

Some blog followers might be interested in a recent post “Why Do Colleges Tie Academic Careers To Winning the Approval of Teenagers?” [Hake (2013)]. The abstract reads:

**********************************************
ABSTRACT: An insightful critique of the misuse of Student Evaluations of Teaching (SETs) for the evaluation of faculty appeared in the WSJ of 27 Oct 2013 as a piece “When Students Rate Teachers, Standards Drop: Why do colleges tie academic careers to winning the approval of teenagers? Something is seriously amiss” [Asher (2013)] at http://on.wsj.com/17te3oN and copied into the APPENDIX of this post in accord with the Fair Use provision of U.S. Copyright Law.

Coincidentally, on 29 Oct 2013, I received an email from an assistant professor “X” who fears that he will be denied tenure because the Chair of his department, winner of numerous teaching awards [probably based solely on superior SETs], thinks X’s SETs are inferior and therefore that X is an inferior teacher. I surmise that since the Chair’s superior SETs, taken at face value, indicate that he, himself, is a superior teacher, a judgment with which he doubtless concurs, the Chair is prone to regard SETs as definitive evidence of teaching effectiveness. But the references in this post indicate that (a) teaching awards do not necessarily equate with teaching effectiveness; (b) SETs are not valid gauges of the cognitive (as opposed to the affective) impact of courses; and (c) SETs should not be used to evaluate faculty.
**********************************************

To access the complete 78 kB post please click on http://bit.ly/1bHiHwp.

Richard Hake, Emeritus Professor of Physics, Indiana University. LINKS TO: Academia http://bit.ly/a8ixxm; Articles http://bit.ly/a6M5y0; Blog http://bit.ly/9yGsXh; Facebook http://on.fb.me/XI7EKm; GooglePlus http://bit.ly/KwZ6mE; Google Scholar http://bit.ly/Wz2FP3; Linked In http://linkd.in/14uycpW; Research Gate http://bit.ly/1fJiSwB; Socratic Dialogue Inducing (SDI) Labs http://bit.ly/9nGd3M; Twitter http://bit.ly/juvd52.

“Physics educators have led the way in developing and using objective tests to compare student learning gains in different types of courses, and chemists, biologists, and others are now developing similar instruments. These tests provide convincing evidence that students assimilate new knowledge more effectively in courses including active, inquiry-based, and collaborative learning, assisted by information technology, than in traditional courses.” - Wood & Gentile (2003) at http://bit.ly/SyhOvL .


REFERENCES [URL shortened by http://bit.ly/ and accessed on 31 Oct. 2013.]
Hake, R.R. 2013. “Why Do Colleges Tie Academic Careers To Winning the Approval of Teenagers?” online on the OPEN! AERA-L archives at http://bit.ly/1bHiHwp. Post of 31 Oct 2013 18:56:04-0400 to AERA-L and Net-Gold. The abstract and link to the complete post are being transmitted to various discussion lists.

Monday, November 26, 2012

The Value of Student Evaluations of Teaching

Some blog followers might be interested in a recent post “The Value of Student Evaluations of Teaching” [Hake (2012)]. The abstract reads:

*******************************************
ABSTRACT: Bill Brescia of the DrEd list wrote: “We are looking for best practices to solicit student feedback on the teaching quality of individual faculty members where in some cases a fairly large number of faculty are involved in a single course. What mechanism do you use to survey the students?”

The value of student evaluations is a hotly contested topic, witness the 496,000 hits at http://bit.ly/TqEilE generated by a Google search for “Student Evaluations” (with the quotes) on 26 Nov 2012 12:58-0800.

Judging from my own experience, and after careful consideration of the above hits, my opinion is that:

a. Student Evaluations of Teaching (SETs) are useful for gauging the affective impact of teaching, but are worse than useless for gauging the cognitive impact - see e.g. “Student Evaluations of Teaching Are Not Valid Gauges of Teaching Performance - Yet Again!” [Hake (2012)] at http://bit.ly/KGK687.

b. The cognitive impact of teaching is best measured by average normalized pre-to-post-test gains on "Concept Inventories" http://en.wikipedia.org/wiki/Concept_inventory - see e.g., “The Impact of Concept Inventories On Physics Education and Its Relevance For Engineering Education” [Hake (2011)] at http://bit.ly/nmPY8F.
*******************************************
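The average normalized gain mentioned in point (b) above is defined as the actual class-average gain divided by the maximum possible gain, i.e. ⟨g⟩ = (⟨post⟩ − ⟨pre⟩) / (100% − ⟨pre⟩). A minimal Python sketch of that calculation follows; the function name and example scores are illustrative, not from the post:

```python
def normalized_gain(pre_mean, post_mean, max_score=100.0):
    """Average normalized gain <g> = (<post> - <pre>) / (max - <pre>),
    computed from class-average pre- and post-test scores."""
    if not (0.0 <= pre_mean < max_score):
        raise ValueError("pre-test mean must lie in [0, max_score)")
    return (post_mean - pre_mean) / (max_score - pre_mean)

# Example: a class averaging 40% on the pre-test and 70% on the post-test
# realized half of its maximum possible gain.
g = normalized_gain(40.0, 70.0)  # -> 0.5
```

Because ⟨g⟩ is normalized by the room left for improvement, it allows comparison of courses whose students start at different pre-test levels.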

To access the complete 6 kB post please click on http://bit.ly/XX7OUX.

Richard Hake, Emeritus Professor of Physics, Indiana University
Links to Articles: http://bit.ly/a6M5y0
Links to Socratic Dialogue Inducing (SDI) Labs: http://bit.ly/9nGd3M
Academia: http://bit.ly/a8ixxm
Blog: http://bit.ly/9yGsXh
GooglePlus: http://bit.ly/KwZ6mE
Twitter: http://bit.ly/juvd52

REFERENCES [URL shortened by http://bit.ly/ and accessed on 26 Nov 2012.]
Hake, R.R. 2012. “The Value of Student Evaluations of Teaching,” online on the OPEN! AERA-L archives at http://bit.ly/XX7OUX. Post of 26 Nov 2012 13:35:35-0800 to AERA-L and Net-Gold. The abstract and link to the complete post are being transmitted to several discussion lists.