Measuring learning gain in higher education is a complex challenge
How do students learn, how well are they supported in their learning by higher education institutions, and how can this be measured?

These are questions that are now being addressed with some urgency in projects funded by the Higher Education Funding Council for England, ahead of “learning gain” becoming one of the criteria to be included within the teaching excellence framework – the UK government’s new system for assessing the quality of university teaching.

My own institution, the University of Leicester, is one of 10 taking part in the first pilot of an initiative that will trial a combination of methodological approaches to measuring learning gain, with the help of undergraduate students.

At first glance, the concept of learning gain in higher education might appear relatively clear-cut: students study for a period of time during which their performance is assessed and their successful completion is marked by the achievement of an award. During that time they will have gained academic knowledge and a range of skills that are defined by the learning outcomes of the award they have achieved.

It takes very little digging, however, to reveal a concept that is highly complex and contested both in its definition and in its measurement.

For example, a simplistic metric for measuring academic attainment might be to compare the input qualifications of a student with their final degree classification. It is immediately obvious, however, that such an approach cannot be used to compare learning journeys: a student entering university with a set of A* grades at A level who graduated with a first-class degree would, on this measure, have demonstrated less learning gain than a student who entered with three B grades and also gained a first.

Even in terms of the purely academic journey, therefore, there is very limited capacity to compare performance and development, especially as the current degree classification system results in the majority of students gaining “good” degrees.

An increase in the granularity of outcome measures is one of the arguments supporting the introduction of grade point average (GPA), in particular to remove the "cliff-edge" of the 2:1 to 2:2 boundary. Even with increased granularity, though, comparison would be challenging because the algorithms for calculating results vary between institutions, as do the intended learning outcomes of the programmes and the ways in which they are assessed.

Further complexity comes about because each student’s journey through higher education is, and should be, much more than gaining knowledge, including as it does the development of the critical skills required to research, evaluate, interpret and utilise that knowledge to address complex problems.

With the introduction of fees and the associated increases in student debt, there is also a strong driver to support students in developing the skills that will enable their success in an increasingly competitive graduate employment market. On this basis, Hefce defines the measurement of learning gain as follows: "broadly it is an attempt to measure the improvement in knowledge, skills, work-readiness and personal development made by students during their time spent in higher education".

This definition takes on significance for higher education providers because learning gain is to become part of the TEF. Currently the only core metrics in the TEF under the heading of "Student Outcomes and Learning Gain" are the data from the Destinations of Leavers from Higher Education (DLHE) survey, which measure the proportions of graduates in employment, skilled employment or further study; although benchmarked, these are recognised as a poor proxy for learning gain.

Hefce is exploring this challenge by supporting a series of 13 projects distributed across various universities and involving a variety of methodologies, as well as a longitudinal study involving 10 universities. The National Mixed Methodology Learning Gain Project will investigate students' development of their critical thinking and problem-solving skills, their attitudes towards their study experience and their level of engagement with their studies.

In the next few years, therefore, we can expect to see the introduction into the TEF of a more evaluative assessment which, hopefully, will provide a better proxy of learning gain.

Higher education providers will focus their attention on the measure as they do on all such metrics and will, no doubt, change aspects of their practice to improve their performance against that measure. Some, at least, of those changes will probably lead to improvements in the educational experience of the students.

To avoid hubris, though, we need to remember that each student’s learning journey through higher education involves far more than that which we can deliver through degree programmes and organised extra-curricular activities. Michael Moffat’s observations from his 1989 book, Coming of Age in New Jersey: College and American Culture, still ring true: “At least half of college was what went on outside the classroom, among the students, with no adults around.”

Author Bio: Professor Jon Scott is pro-vice-chancellor with special responsibility for student experience at the University of Leicester.