Equality or quality? Measuring the effect of more uni students

Quality in education is something that seems so obvious – until you try to define it.

This week the new Australian Minister for Tertiary Education, Skills, Science and Research, Chris Bowen, said that “the quality of a course should be measured by the capabilities that students have acquired by the time they complete their course, not the capabilities they have when they begin.”

He went on to defend the government’s aim of getting more and more people from disadvantaged backgrounds into university, rejecting claims that uncapped enrolments would adversely affect the quality of a university education.

The minister is correct on this score, but it is also a myth that educational quality can be measured reliably.

Measuring the unmeasurable

Nonetheless, the government is proposing three main ways to measure the quality of university education, as it expands access.

The first is the University Experience Survey (UES), which will ask students how satisfied they are with their studies. Students will be asked to rate things such as academic challenge, student engagement and student/staff interactions.

The survey has been designed by a consortium led by the Australian Council for Educational Research (ACER). ACER and its partners have unquestioned experience in designing and administering surveys of this kind. Nonetheless, the survey will not actually measure educational quality, but rather “customer” satisfaction – one small aspect of quality.

The second is the Australian Graduate Survey (AGS), which has been running in various forms since the early 1970s. It will collect hard data such as graduates’ employment and salary outcomes but, like the UES, it also has a strong focus on graduate perceptions of course quality and overall satisfaction with the course. Furthermore, the Reference Group whose recommendations the government adopted warned that very high response rates would be required for the data to be reliable and valid.
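
To see one side of that warning, here is a rough sketch in Python, with invented figures only (an assumed cohort of 5,000 graduates and an assumed 80% satisfaction rate), of how sampling error alone depends on the number of respondents. Even this flattering arithmetic ignores the larger threat the Reference Group had in mind: non-response bias, where the graduates who answer differ systematically from those who do not.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p
    based on n respondents, assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

cohort = 5000  # assumed graduating cohort size (illustrative only)
for response_rate in (0.1, 0.3, 0.7):
    n = int(cohort * response_rate)
    moe = margin_of_error(0.8, n)  # assume 80% report satisfaction
    print(f"response rate {response_rate:.0%}: n = {n}, "
          f"satisfaction estimate 80% ± {moe:.1%}")

# Higher response rates shrink sampling error, but they cannot by
# themselves rule out systematic differences between respondents
# and non-respondents.
```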

The final measure will be an “Employer Survey”. The pilot will run this year, so details are not yet available. But the government says it will be based on employer satisfaction, to ensure that the higher education system “is responsive to labour market and employer needs.”

Interestingly, there is another measurement tool that was piloted by the government: the Collegiate Learning Assessment (CLA). The CLA claims to present realistic problems that require students to think critically and reason analytically in order to solve them. Scores are aggregated to the institutional level, allowing universities to benchmark where they stand and how much progress their students have made.

However, the CLA seems to have been rejected by the government. The government has not yet said why, but the test is known to have faced resistance from many universities.

Perceptions or hard data?

So, for the government, a quality university education will be measured by: what the student thought about their experience; what type of employment they found; how much they earn; and how happy the employer is with the graduate.

But most of these measures are subjective, dealing with satisfaction or perception of quality. And the objective data collected is almost exclusively concerned with employment outcomes – an important part of any discussion regarding educational quality but again, only a part.

For example, under these proposals a great faculty, department or course will be invisible if overall satisfaction with the university is low. Universities with strong programs in “non-professional” disciplines (e.g. art as opposed to law, physics as opposed to civil engineering) will likewise suffer, since measures of graduate salary and employer satisfaction will be less relevant to them.

And even where employer satisfaction is relevant, it has been demonstrated that employers tend to rate the best graduates as coming from the institutions they themselves attended – a sociocultural bias that favours the older, more prestigious universities.

Finally, by focusing only on the “quality” of the graduate, rather than the “value add” the university has provided, institutions that disproportionately attract the highest-ranked school leavers may be perceived as the best, regardless of their actual teaching. By contrast, a university that offers excellent support to struggling learners and improves them dramatically may not have this excellence recognised.
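
To make the distinction concrete, here is a minimal sketch in Python, with entry and exit scores invented purely for illustration, contrasting a ranking by raw final scores with a ranking by average student gain from each individual’s baseline:

```python
# Illustrative sketch only: all scores are invented, not real data.
institutions = {
    "Prestige U": [(90, 92), (88, 91), (85, 88)],  # strong intake, modest gains
    "Support U":  [(55, 75), (60, 78), (50, 72)],  # weaker intake, large gains
}

def mean(values):
    return sum(values) / len(values)

for name, cohort in institutions.items():
    finals = [post for _entry, post in cohort]
    gains = [post - entry for entry, post in cohort]
    print(f"{name}: mean final score {mean(finals):.1f}, "
          f"mean gain {mean(gains):.1f}")

# Ranked by final score, "Prestige U" looks best; ranked by gain (the
# value the institution actually added), "Support U" comes out well ahead.
```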

Government priorities

The salient lesson for the government comes from its similar attempts to measure educational quality at the primary and secondary schooling levels – namely its MySchool website and the NAPLAN data that underpins it.

Rather than prioritising student gain (i.e. the extent to which the school helps each student improve from his or her personal baseline), the website is inordinately focused on final scores, which are determined in large part by students’ socio-economic backgrounds.

Consequently, if and when the government includes measurements of student satisfaction and outcomes on its MyUniversity website, there is a real danger that prospective students will be less, rather than more, informed about educational quality.
