Let’s Talk CIHR (part 4)

Jim Woodgett
7 min read · Jun 11, 2019

Links to CIHR Let’s Talk Part 1, Part 2 and Part 3.

This is the last input I intend to submit and it concerns the critical Project grant competition. There are two other survey topics, on knowledge translation and on support of early career investigators, that others are much better placed to address. Both are important questions, and early career investigators in particular remain extremely vulnerable in the current volatility of an under-resourced system.

The Project grant competition forms the primary conduit for CIHR support of health research in Canada. It was born out of the rubble of the old Open Operating Grant Program (OOGP), which was blown up during the “Reform Ages”. That was when the usual twice-yearly OOGP became, for two consecutive years, a single annual competition, one of them labelled the “transitional” OOGP. This morphed into Project Scheme 1.0, an unmitigated disaster that managed to simultaneously miss all of its projected goals and assumptions while spraying the small funds available around like a Raptors t-shirt gun. This homunculus of an experiment brought out the worst in reviewers too, with dereliction of duty rampant once the pressure of not looking like a jerk to your peers was relieved thanks to “virtual review”. Fortunately, the tide was reversed after the second competition with a return to face-to-face panels and more sensible (less obsessively structured) rules for application content and format: Project Grants 2.0. Two years out, the remnants of the Reform Age have largely been buried, though we still bear the burden of the Common CV as a timely reminder of the legacy of untested experiments*.

With the pending slow death of the Foundation grant program, the Projects will become the de facto “R01” equivalent of CIHR, its principal vehicle for adjudicating the best research across the spectrum, twice a year. It does not wear this challenge particularly well. In the last competition (Fall 2018 submission), 371 grants were funded out of 2,484 applications. Those fortunate enough to make the cut were given the honour of forfeiting 23.5% of their approved budgets in across-the-board cuts. The average term was just under 4.5 years. The current (Spring 2019) competition has 2,447 applications, so application pressure has stabilized. Nonetheless, a success rate of roughly 15% is too low to avoid stochastic selection of who is funded — especially when panels tend to clump competitive grants into a similar scoring range. A view of historic data confirms the slide.

The inexorable decline of success rates of CIHR grants from 2000–2015 (via Michael Hoffman: https://twitter.com/michaelhoffman/status/1135538355607920640 from an RTF on this page: https://open.canada.ca/data/en/dataset/af589454-caf5-4b6f-86ed-c871567c61de ). Note the dead cat bounce at the right is where two discontemporaneous Foundation grant competitions were nailed on. Pillars represent: 1, biomedical; 2, clinical; 3, health systems; 4, population health.

The solution to this quagmire is to double down on investment in the Project competition and to consider consolidating strategic initiatives into a similar structure (with ad-hoc expert panels). A universal basic funding mechanism (basic not in the type of research but in commonality of structure) would go a long way toward saving administrative funds, levelling the playing field and ensuring consistent performance.

As an aside, one of the newer changes to grant adjudication is increased use of “streamlining”. The rationale is that there is no point discussing applications that fall into the lower half as judged by reviewers, since these have very little chance of scrambling into the thin air of the funded range. One effect, though, is that panels can be finished with their on-site review task within an hour or so of the second day. Contrast this with NSERC, where gallant panelists toil for 4 to 5 days, moving between expert panels. It seems there is an opportunity here. Reviewers still have a substantive review load (usually 9–10 grants) but their on-site time is poorly used.

I’d also note that in the good old days when success rates were above 20% the President or VP of Research of CIHR would make time to visit the panels, to discuss issues and to thank the reviewers. That stopped around 2007.

Here are my specific suggestions and answers to the survey:

The Project Grant Competition does not have pre-defined limits on the duration of a grant or the budget size of a grant. This has historically been in place to ensure that all types of health research can effectively be supported by the program. What advice do you have for CIHR on grant duration or grant budgets in the Project Grant Competition in order to effectively balance funding all types of research grants with supporting an appropriate number of health researchers?

The Project grant competition is the key plank of CIHR programs and remains seriously under-resourced. In practice, grants are 3–5 years in duration, with the majority at 5 years. This is appropriate. Reducing the term will simply increase application pressure. It would be beneficial to remove the across-the-board cuts and perhaps move to a more modular budget. There is also not much evidence that extending a grant term beyond 5 years is effective: while it removes some reviewing burden, the idea behind Foundation grants was to fund a program, not a singular project. Since that experiment is ending, it does make sense to eliminate the notion that a Project cannot be renewed. Some science can certainly be conducted as a singular project (and this is the norm) but much research is longer term in nature. Competitive renewal (combined with new applications) keeps the level of competition very high.

The Project Grant Competition peer-review process relies on panels that review proposals in 47 areas of science. What advice do you have for CIHR on the current peer-review process and complement of panels? Are there areas of science or types of research that you feel are not appropriately reviewed within the current complement of panels?

I would guess grant submitters who have not been recently funded will likely have a few beefs here, but the fact is that few grants receive unfair treatment. The real problem is that there is insufficient funding to support all of the highly meritorious applications. I like the rule that reviewers serve a maximum of 3 years (6 competitions) if they are deemed competent, and that there is ongoing turnover. There may be some areas of research that are underserved, but that should be reviewed each year. Indeed, I would add all RFP adjudication to the Projects via ad-hoc panels with the same success rates.

CIHR is dedicated to ensuring there are no systematic biases within its review processes while maintaining a focus on scientific excellence. What advice do you have for CIHR related to the goal of upholding equity and fairness in the review system?

The ongoing training for reviewers is reasonable, though the best adjudication and checks on privilege occur at the panels themselves. The Chairs and SOs (and other reviewers) should not tolerate any evidence of bias and should correct it immediately. The way CIHR treats these incidents leaves much to be desired. Maintaining and publishing statistics on the competitions is a good way of tracking results and telegraphing expectations. CIHR should also realize that, as an agency, it cannot eliminate systematic bias, despite the best intentions. Hence, data can be used to make post-hoc adjustments if necessary.

What else would you like to tell CIHR related to the Investigator Initiated Research Program and review processes?

Champion it. Build on it. While there are natural structures and inclinations within the agency to engage in top-down research (and this is appropriate in some cases), the low success rates associated with the Project grants are a gaping wound in the side of CIHR. Efforts to chip away at this fundamental program send exactly the wrong message. The way to build strategic programs is to meld them with the Project grant adjudication system, with similar standards. The return of face-to-face review has injected some stability, but there is room for improvement and constant innovation. The research community is still reeling from the aborted reforms and needs confidence in the primary health research agency. Please don’t deprecate Project grants in any way.

And that, as they say, is that. You have until June 28th to submit any feedback (see here: https://letstalk-cihr.ca). Whether the format is helpful or not, it’s hard to complain if you don’t at least engage. There will be some “consensus workshops” in September and a health research summit in December (ahh, December in Ottawa), and then the strategic plan will be rolled out in June 2020. I’ll end with a parting word. CIHR has been through a tumultuous period in the past 5 years. The previous strategic plan (2014–2019) did nothing to mitigate the utter disaster of the Age of Reforms. Strategic plans signal intent, a road map of where the priorities will lie. Their structure does not necessarily beget function. After all, the Scientific Directors and other administrators have a large say in implementation and in dealing with the realities of the world. The previous plan also consulted widely and, presumably, that feedback was included somewhere. Let’s hope that, should significant directional changes be embraced, these have the support and confidence of the research community and are carefully considered rather than imposed.

  • A last word on the Common CV. Part of the frustration is that every. single. field. has. to. be. individually. entered. because the CCV is based on a structured database. So that means the various fields must be used in some sort of collective sorting mechanism, right? Wrong. We have an out-of-date relational database that is used to derive unmodifiable PDFs full of whitespace. It is in desperate need of being shuffled off this mortal coil.

