Monday, January 06, 2014

Measuring learning: identifying effective courses through data analysis.

Higher education faces serious challenges, not the least of which is how to provide compelling evidence that the learning attained is worth the cost, particularly since a number of studies suggest learning is often less impressive than many colleges might wish to portray. There is growing interest in more effective and efficient educational practices, and in companies that claim to provide them. Lurking in the background is a “college ranking” industry that rarely uses objective measures of student learning. Altogether, this raises the question: what would evidence-based higher education look like?

We became interested in how to improve educational outcomes as professors of molecular biology and chemistry, respectively. We, and others, have come to the conclusion that there is a need to change both the way students are taught and what they are taught. Lecturing alone, an anti-Socratic strategy, often delivers a fire-hose of information that is, at best, superficially understood. While students can pass exams, the depth of their understanding remains an open question, since it is often unclear what those exams actually measure.

So how might evidence-based practices be used to monitor educational outcomes? Clearly, what is missing is an objective method to characterize what a particular course claims to teach and what it actually delivers in terms of student learning. At the same time, it is critical to recognize that monitoring educational outcomes is a complex task, easily manipulated for a variety of non-educational purposes. Various “accountability” projects, including the Bush administration’s No Child Left Behind and the Obama administration’s Race to the Top, tend to approach assessment in a ham-fisted, one-size-fits-all manner, with the pernicious effect of eroding support for public education.

Our suggestion builds on the growing ubiquity of digitally collected student assessment data. With access to the questions students are asked, the answers students generate, and how those answers were graded, which we refer to collectively as a student’s “ed-data”, it is possible to describe what a particular course aims to teach and what it actually delivers. It enables us to distinguish between memorization-based assessments, which monitor who is paying attention, and assessments that require an accurate working understanding of the materials presented. Using students’ ed-data we can, for the first time, describe courses on their own terms rather than through what may be seen as, and often are, irrelevant or arbitrary assessments. We can focus attention on what the instructor or institution deems important. An ed-data-based analysis would reveal what types of answers are acceptable and so provide direct evidence of the rigor of a class. Evaluating evaluations uncovers what a particular course actually values and delivers.
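
To make this concrete, the sketch below (in Python; the record fields, cue words, and function names are our own illustrative assumptions, not an existing system) shows how ed-data records might be tagged as recall-style or application-style items and aggregated into a per-course profile:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class EdDataRecord:
    """One graded assessment item, as experienced by one student (hypothetical schema)."""
    course: str    # e.g. "GEN 301"
    question: str  # the prompt posed to the student
    answer: str    # the student's response
    score: float   # credit awarded, normalized to the range 0..1

# Crude cue-word heuristic, loosely inspired by Bloom's taxonomy: words that
# typically signal recall versus words that demand application or explanation.
RECALL_CUES = ("define", "list", "name", "state", "identify", "recall")
APPLICATION_CUES = ("explain", "predict", "design", "justify", "compare", "derive")

def classify_item(question: str) -> str:
    """Tag a question as 'application', 'recall', or 'other' by its cue words."""
    q = question.lower()
    if any(cue in q for cue in APPLICATION_CUES):
        return "application"
    if any(cue in q for cue in RECALL_CUES):
        return "recall"
    return "other"

def course_profiles(records: list[EdDataRecord]) -> dict[str, Counter]:
    """Count, per course, how many graded items fall into each category."""
    profiles: dict[str, Counter] = {}
    for r in records:
        profiles.setdefault(r.course, Counter())[classify_item(r.question)] += 1
    return profiles
```

A real analysis would, of course, require far more sophisticated language processing and human validation; the point is only that, once questions, answers, and grades exist in digital form, such course profiles become straightforward to compute.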

At this point, the non-academic might object: don’t all courses with a given title, say genetics or chemistry, teach similar content, and don’t all students passing such courses learn more or less the same things, together with the critical thinking skills needed to apply that knowledge productively? Sadly, there is little evidence to support this conclusion. A systematic analysis of students’ ed-data would help identify those courses, curricula, or institutions that value and help foster critical thinking and subject mastery.

Using students’ ed-data raises a practical question: who owns this data, the student, the institution, or the instructor? We would argue that a student’s ed-data, like medical data, is the property of the student, or that, at the very least, students have a right to its fair use. For example, it could be used to establish that courses delivered by less prestigious institutions produce levels of subject mastery similar to those of more expensive and exclusive alternatives.
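
As a toy illustration of the kind of comparison we have in mind (the scores below are invented for the example, not real data), comparing normalized scores on comparable items reduces to a standard effect-size calculation:

```python
from statistics import mean, stdev

def cohens_d(scores_a: list[float], scores_b: list[float]) -> float:
    """Cohen's d: standardized difference in mean scores between two groups,
    computed with a pooled standard deviation."""
    na, nb = len(scores_a), len(scores_b)
    pooled_var = ((na - 1) * stdev(scores_a) ** 2 +
                  (nb - 1) * stdev(scores_b) ** 2) / (na + nb - 2)
    return (mean(scores_a) - mean(scores_b)) / pooled_var ** 0.5

# Invented normalized scores on comparable application-level items:
state_college = [0.72, 0.81, 0.64, 0.77, 0.69]
elite_private = [0.75, 0.70, 0.82, 0.68, 0.74]
print(f"effect size: {cohens_d(state_college, elite_private):+.2f}")
```

An effect size near zero, computed over a large and well-matched set of items, would be direct evidence that the less expensive course delivers comparable subject mastery.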

We suspect that instructors and their institutions will robustly defend claims of exclusive ownership over students’ ed-data. The problem is that the analysis of ed-data is the only objective and economical way to establish that a course has, or could have, delivered what it purports to teach, that is, what the student paid for. The issue becomes one of a product claim, a claim that, in the educational arena, must surely rest on what has actually been learned.

Assuming ownership and use issues can be resolved, we are left with a practical question: who would perform, and assume the cost of, such analyses? There are existing commercial entities, and perhaps a start-up or two, that may be willing to provide such services. The cost could even be folded into an institution’s accreditation process or third-party ranking schemes.

The analysis of student ed-data provides a unique opportunity to distinguish educational excellence from mediocrity and outright malpractice. It would enable low-cost education providers to establish their quality, provide an impetus for traditional institutions to improve their efforts, and establish a valuable criterion by which to evaluate educational enterprises.


Mike Klymkowsky is a professor of molecular, cellular and developmental biology at the University of Colorado Boulder and co-director of the CU Teach science and math teacher preparation program. Melanie Cooper is the Lappan-Phillips Professor of Science Education and a professor of chemistry at Michigan State University. Both are fellows of the American Association for the Advancement of Science and recipients of the Outstanding Undergraduate Science Teacher Award from the Society for College Science Teachers/National Science Teachers Association. Many of the ideas expressed here emerged through a workshop on student education data held at ETH Zurich in June 2013.
