The ten habits of highly unsuccessful research bid writers


I’ve just reviewed a lot, and I mean a lot, of research bids. I review research bids regularly, as do a lot of senior academics. Some of them are great and some of them are decent, sensible and worth doing. But many more of them could be. I’m always pretty shocked when I get bids where the basics haven’t been attended to. I know that there are more issues than the ones on my list, but honestly, I’d be a pretty happy reviewer if all the bids I get/have to read attended to the following ten things.

1. The reader/reviewer can’t easily find out what the bid is about. The title is obscure. There is nothing on the first page which states what the researcher is going to do. The reader/reviewer has to search for the aims and objectives and/or the research questions. They have to wade through a lot of justification and background before they can begin to piece together what the proposed project is about.

2. There is no warrant for the project. The reader/reviewer does not know why this project should be funded and done. Telling the reviewer we don’t know anything about the topic is no substitute for a cogent explanation about why we should bother to understand more about the topic. The bid writer might care – but why should I?

3. The research design is cursory.
• There are either no details of the methodological tradition in which the research is based, or there is a tedious extended essay about epistemology and methodology.
• There are no details. The reader/reviewer hasn’t got a clue about what is going to happen, to whom or what, how many/how often, when and in what order. What is actually to be funded? This isn’t a trust-me-and-give-me-the-money exercise.
• There are problems with the research questions. There are too many of them, and there is probably a bigger question trying to get out. There is no connection between the research questions and the data generation methods; or there are several questions scattered across the text, none of which are linked to the data generation methods; or there are several work packages, none of which apparently connect together or relate to the questions.
• There is nothing said about analysis, or the name of a piece of software is offered as a substitute for any detail about what the writer/researcher will actually do once they’ve generated all that data.
• There is no discussion of ethics, or there’s simply a statement that the university has an ethics committee or that the project adheres to ethical standards set by a learned society. This assurance is expected to stand in for an economical discussion of any pertinent ethical concerns, and it assumes that the reader/reviewer actually knows the university’s or learned society’s standards and processes.

4. The budget is unbelievable. It is either inflated or the project is under-budgeted. There is a big list of equipment that any university already has. There is a huge amount allocated for international travel and conferences, which seems to suggest that the researcher is seeking to fund everything they desire for the next three years.

5. The research fellow is expected to do all the work. Readers/reviewers often take a particularly dim view of this. There is nothing said about how the PI will support the researcher, nothing about their development or issues of quality control. It looks like exploitation of the staff member, and it probably is.

6. The researcher doesn’t have the track record for the project. There’s no problem with a bit of a step-up in scale or going in a new direction, if it’s acknowledged. The problem arises when the researcher doesn’t address any training or mentoring arrangements that will allow them to make the jump from single researcher to leading a large project team. Alternatively, the researcher doesn’t have the expertise for the project and has produced no evidence that they know how to get it.

7. The project doesn’t fit the call or the scheme. The researcher hasn’t done their homework and hasn’t found out what the aims of the call or scheme actually are. Lazy.

8. The literature review is inadequate.
The scholarly contribution is thus unclear. This is because:
• the researcher has only cited their own work. Presumably they work in a vacuum or nobody else’s work is worth a damn.
• the researcher has written an essay on every available bit of literature they’ve found. The reader/reviewer hasn’t got a clue which of the literatures are most significant and which are going to be used and/or challenged.
• the researcher has left out some of the key texts in the field. This is either because they don’t know them or they don’t want to acknowledge this body of work. Either way the reader/reviewer is left wondering about the scholarship of the researcher/bidder.

The following two aren’t as critical, but they can make the difference between being funded or not.

9. The bid is as dull as ditchwater. There is no indication that the researcher is enthusiastic about the project. No, it’s more than dull. It’s badly written and tedious. The reader/reviewer isn’t convinced that the researcher will be able to communicate anything to anybody.

10. There is an inadequate communications plan. The researcher seems set on presenting their results only to a few mates, a peer-reviewed journal and a select academic conference. They haven’t grasped that if scarce public money is to be spent on their project, then they do need to make some effort to tell said public what they’ve done, and what happened as a result.
