The term audit trail is shorthand. I use it to describe the “evidential” material that you provide for a reader. I am a bit suspicious of the overuse of the word evidence, and I prefer “audit” because it describes what actually happens: “audit” signals the work that your additional material has to do.
Because readers want to understand what you have done, with what/who, how, and why, you have to provide some stuff which shows them. I call this material an audit because it is similar to the independent checking of accounts that organisations routinely do, and audit trail because your additional stuff provides information about your process, not simply the end results.
Thesis examiners and article reviewers are by definition particularly interested in audit. A key part of what they do is to make sure that the research they are reading, your research, is trustworthy.
Making sure that your research is trustworthy means that the reader, and especially the reviewer/examiner, looks for particular clues in the text. They are looking for material that tells them the researcher (you, me) has been thorough, thoughtful and has done their very best to be ethical and scrupulous in the generation and management of their research. Examiner/reviewers looking for clues read a text with an eye on whether: (1) the research sits within, adapts, or innovates on a particular approach which is spelled out; (2) the consequences of using particular tools or research design are acknowledged; and (3) decisions made during the research are identified and explained.
So what exactly do examiners and reviewers look for? Examiner/reviewers often want to see the data itself and the workings that you did. Some examiner/reviewers may see their job as checking your workings for accuracy; others may be looking to see if the research is dodgy. But most are looking to be able to tick off a mental box which says “This research has been done well. It stands up to critical scrutiny.” Readers in some disciplines have always checked data and workings; providing data sets and calculations as part of a publication is taken for granted. But other disciplines are also increasingly interested in data and the details of analysis. (And some disciplines don’t hold seeing a lot of data as the way to judge trustworthiness.)
Some implications of audit trails for writer-researchers are obvious, and some are not so clear-cut. Expectations differ between disciplines and between research traditions. So it is important first of all to check out the expectations within your discipline, and also the expectations of particular publications. Some journals expect a great deal more methodological detail than others. There are also cultural traditions at play in the level of detail that is expected.
However, providing the stuff for audit isn’t necessarily straightforward. Here are a few audit trail issues to consider:
- Research data needs to be clean – ethical commitments and formal approvals generally require us to make sure data is anonymous and confidential. And in some places, Europe and the UK for instance, there is also data legislation which sets boundaries about what can be made public, and under what conditions. But making data clean may mean removing or redacting a lot of material – so much so that providing what’s left may actually defeat the purpose of making it available. Is it always possible to balance the tension between openness and privacy? Are there some conditions under which it isn’t desirable or proper to make data available?
- Research data might be highly repetitive. Do examiner/reviewers need to see it all? Would it be enough to show them some of the data and some of the workings? What level of worked data would be sufficient for them to judge trustworthiness?
- Providing lots of detail about research data and analysis might distract from the argument that you are constructing. Where is the best place to put audit material? How much goes into the main text and how much can be provided in Appendices or as supplementary materials?
- Providing auditable materials might also take up a lot of words. Should all of the audit trail be counted in the final word count?
- If the audit trail is both bulky and potentially distracting, when is it acceptable to refer to data sets and workings that are available elsewhere, in the cloud for example? How difficult does this make the reading task for examiners – do they have to continually swap between texts? And is referring to data held elsewhere even possible in blind peer review, when an external data set will clearly identify the author(s)? If not, can the reviewer do their job without accessing the data?
There are more issues related to audit trails than these. And of course it is not simply reviewers and examiners that are interested in being able to see what has happened in and as research. We always need to think about our obligations to make our reasonings and workings clear to our readers and how to do this ethically and clearly; this is an important aspect of making our work persuasive and authoritative.
This post is an answer to a question. I’m always happy to have a go at answering questions if I can.