Gerty goes to study section, part 2

Jun 14 2012 · Published under academia, grants, on the job training

In this installment, I'm going to talk about what I did BEFORE I even got to study section. Obviously, everything I say (ever) is specific to my own experience. YMMV, etc. If you want to read about how I ended up as an ECR, see part 1.

After I was invited to study section, the real work started. I had to actually review grants! As an ECR (what a convenient acronym!), I had a reduced load: I only had to review about 30% as many grants as the "real" members of study section. At NIH, each grant gets reviewed by 3-4 different people, who are cleverly named the "primary", "secondary", "third" and "fourth" reviewer*. All reviewers who were new to this study section had an "official" training conference call with the SRO to make sure we were all on the same page. We talked about the focus of the study section, the scoring protocol, different grant mechanisms, and how to deal with ESI/NI applications (there was no difference between these two categories, as far as I could tell). We also all filled out the conflict-of-interest form. The NIH had already identified folks you might have a conflict with (you work at the same institution, have recently collaborated, etc.), but you are expected to note any others that might be perceived as a COI.

Then, the scoring. NIH grants are scored on five criteria: Significance, Approach, Innovation, Investigator, and Environment. You can tell they are important because they are always written as proper nouns. Each criterion gets scored on a scale of 1-9 (PDF of scale), where 1 is "exceptional" and 9 is "poor". Then you list bullet points of strengths and weaknesses - which should justify the score you gave, of course! If you list a lot of weaknesses it will be hard to defend a "3", for example. In order to prevent score compression, you are encouraged to consider a 5 as average. We were all told that if we score something better or worse than 5, we should be able to justify why. In addition to the criterion scores, each grant gets an overall Impact score. The Impact score is the Big Deal. There is no formula that derives the Impact score from the criterion scores. Instead, you have to decide which aspects of the application are most important (both plus and minus), and then gauge the potential for the proposed research to have a long-lasting and significant effect on the field. That is the Impact score. Different factors contribute to the Impact score for each application. Some get lifted by the Approach, others by the Significance, and some by the Investigator. At least one reviewer writes a paragraph explaining the Impact score that they assigned, so that the grant writer knows what was driving the score.

The point of the ECR program, at least in part, is to groom us young-uns to be good reviewers. Therefore, even though I was third reviewer for all of my grants, I did a full write-up for all of them, including a paragraph justifying my Impact score. I was expected to have mine done well before everyone else, so that the SRO could look at them. I was NOT given any direction on what to say, but ze looked at my reviews to give me feedback on whether the comments I made were consistent with the scores I was giving, and whether I was going into enough/too much detail in each section. I actually found this really, really helpful. I have no personal experience with the new scoring system as an applicant; the last NIH grant I wrote (my K99/R00) was still in the 25-page format with the old scoring procedure.

It was surprisingly hard to give out bad scores! But I do what I gotta do. For each of my grants, I did some literature searches and read up so that I had an understanding of the field. Obviously, all of the grants were in my Field, but some were outside of my own sub-field expertise. This was important to help me figure out what the "impact" of the research might be [HINT: spell out what the "impact" will be when you are writing a grant!!!!]. In addition to "my" grants, I read the Aims page of all the grants we would be reviewing (THIS IS WHY YOUR AIMS PAGE IS SO IMPORTANT). Then there were a couple of other grants that I read because they sounded interesting, were by folks I knew, or were somewhat related to what I did.

A week or so before the meeting, final reviews are due. After all the reviews are in, you can see what the other reviewers thought of all the grants (except those you have a conflict with). I was Freaking Ecstatic™ when my scores were - more or less - in line with what others had to say about the grants. It was also interesting to see what others picked up on, or missed. And it was REALLY interesting to see how different factors were weighted to come up with the Impact score for each grant. Sometimes the Approach was the most important thing, other times the Investigator score was really important, etc. But... these are the preliminary scores! Everything can change at the meeting... stay tuned for part 3.

(How am I doing with the cliff-hanger here?)


*Don't ask me, I don't make the rules. Maybe "tertiary" and "quaternary" are hard to say.


  • Comrade PhysioProf says:

    Actually, all assigned reviewers are required to write the Overall Impact explanatory paragraph. And the official terms for the various assigned reviewers commonly in use are Primary, Secondary (there are usually two secondary reviewers, referred to as Secondary 1 and Secondary 2), and Discussant (discussants are only required to assign criterion scores and write the Overall Impact paragraph, but not write the criterion bullet points).

  • Daimia says:

    This is funny because my PI just had the "grant reviewing" discussion with me about the points you discussed in paragraph 3 (not 2 hours ago).

  • drugmonkey says:

    Pay-Pay's "official" terms may or may not be so but my experience has been that "Discussant" is used differently from third reviewer. Oftentimes there is a third reviewer that simply provides another critique like anyone else. The "Discussant" term was always used to indicate that this person did not have to do a full critique but provided more of an overview opinion. Or sometimes a *specific* opinion on a limited part of the application.

    Things may have changed in the new bulletpoint review era, however.

  • Bashir says:

    I think it's very important you explain the disappointing lack of the word "tertiary".

    Also, were these all R01s or did you have some other types?

  • drugmonkey says:

    What kind of idiotic SRO uses "Secondary 2" instead of tertiary, pee-pee?

  • Isabel says:

    "The NIH had identified folks you may have a conflict with (you work at same institution, have recently collaborated, etc), but you are expected to note any others that may be perceived as a COI."

    funny how it's all people you would likely want to help. But rivals or other enemies and their trainees are not considered?? Not officially, maybe, but are people able to be any more unbiased in those negative cases?

    To go off on a paranoid tangent, seems like it could be a common way to find out specifics of what's going on in a rival's lab long before publication, especially if the grants are targeted to the subfields to review. And more dangerous than reviewing papers by rivals (and delaying publication or whatever) because the work isn't finished yet. Though maybe less common, as it sounds like grants are not targeted to reviewers quite as specifically as manuscripts are.

    • gerty-z says:

      That is definitely paranoid. First, I don't see the issue with avoiding real or perceived conflicts of interest. Do you actually think it would make anyone happy if grant panels gave great scores to close colleagues or collaborators of members of the panel? You can't objectively review these grants, because if they do well then YOU do well. Second, grants are targeted by the PI to a specific study section...so they should know their audience when they write it. I don't see how this is not "specific". Third, reviewing (both grants and papers) is not about helping or hurting the author. Most scientists, IME, try to really evaluate the science based on the data. We are all people, though, and therefore not perfect. And yes, there are probably examples of folks that try to sink grants or "steal ideas". I would argue this is rare, and that generally other panel members call out this behavior. I think you veer far into hyperbole when you argue that reviewers are either "dangerous" or "trying to help".

      • Isabel says:

        Okay, if all that is true, why worry about COI? As in collaborators or people from the same institution? Isn't that why you declare a conflict of interest in the first place? That you won't be objective when it comes to your own collaborators? Maybe I am misunderstanding this. Is it because you need to be anonymous and may be in touch with those people then?

        I am just curious anyway, intentionally coming up with paranoid possibilities 🙂 I am sure most people take reviewing seriously.

        And by targeting specifically, I meant the other direction, that papers seem to be reviewed by people from the specific subfield more often than grants (so far in my NSF experience anyway).

        • gerty-z says:

          I guess it comes down to the idea that, even if you want to be objective, it is really hard to be critical of yourself. Presumably, you have intellectual overlap with your collaborators and may have been involved in some aspects of the grant in question. Similarly, you don't judge folks from your home MRU because awards are given to an institution, not the individual PI. There are probably lots of times when these situations don't actually make it hard for you to be objective. But making basic rules like these allows everyone to avoid the perception of bias.
