There was a conversation on twitter last night that got me kind of worked up. Potnia already posted about this - and there are some really good points about how study section works there. You should read it! But I have opinions, too, and maybe my perspective is a little different. It's very possible I'm putting on the meat pants with this post, but so be it. I'm basing this on my (limited) experience writing grants (I have a funded R01) and sitting on study sections (once as an ECR, several other times ad hoc). And I'm only talking NIH here - I have no real experience with NSF or other government agencies.
The twitter conversation was, more or less, focused on the frustration of submitting a revised application to NIH (the A1) and getting a worse score than the original submission (the A0), even though you addressed the concerns raised by the original review. This led to Dr. Becca suggesting that there should be a mechanism that kept scores from getting worse. I love Dr. Becca. She is smart and genuinely good people and makes delicious cocktails. I know the frustration of having an A1 get a worse score than the A0. But a rule that the A1 always has a better score than the A0 is just not a reasonable solution.
First, there is the real possibility that the A1 is just not as good as the A0. Reviewers have to evaluate the grant that they are reading. Maybe the revisions actually lowered the impact of the grant. It could be that the new experiments (or preliminary data or whatever) were just not compelling. They could "unfocus" the grant, make the central hypothesis harder to understand, etc etc etc.
Second, the A1 may actually be better than the A0, but it still gets a worse score. HOW?? The two things that pop into my head are:
A. There were a lot of other grants in that study section that were just really good. Each round, reviewers are supposed to score grants compared to the other grants they get for that meeting. So even if the A1 is "better" it may not measure up when compared to the other grants the reviewers are reading.
B. Different reviewers scored the grant this time. Even though NIH has standing (or sitting?) members of the study section, there is no guarantee that the same folks read your grant each time. Different people will have different perspectives, backgrounds, etc. They will have different views of what counts as a "strength" or a "weakness". They will have different approaches to weighing the strengths and weaknesses. It's not awesome, but it happens. IME (limited though it is), the folks that reviewed prior versions speak up in study section even if they weren't reviewers this time around.
BUT! If you did what the reviewers wanted, they shouldn't get to give you a worse score!! I disagree (see above). But also, and this is really important, the reviewers don't tell you what to do. They only point out what they consider to be strengths and weaknesses of the application. If the SRO is doing her job, she will make sure that "what they should do" types of statements aren't part of the discussion at study section. Just because a reviewer notes something they consider a "strength" or "weakness" doesn't necessarily mean that it was a major influence on the overall impact score. You can get a sense of what the score "drivers" were by reading the summary statements. Not the bullet points, but the paragraphs written by the chair of the study section (which should reflect the discussion about your grant) and/or the summary paragraphs written by each reviewer to justify the impact score they assigned (especially if your grant was not discussed).
It is sometimes not straightforward to read the summary statements, because reviewers are limited in what (and how) they can raise concerns. In a perfect world, your PO (NOT the SRO) would help you parse these statements. Or you will get help from an experienced mentor. Because if you misread the summary statements, you may think you are making the grant better by doing what the reviewer wants -- and then you inadvertently make the grant less exciting to that reviewer. It sucks. I joked on twitter that reading summary statements is like trying to translate an ancient prophecy. It's kind of true. But only because you sort of have to know where the reviewers are coming from and be able to speak their language. You can only learn this through practice. I sat on study section as an ECR (Early Career Reviewer) - it was more helpful than I can explain. I learned how other folks read grants. I experienced how discussions go, and how that translates into scores. And I have been lucky that I have great colleagues that let me read their reviews and that have read mine and given me advice. I'm lucky (so fucking lucky) that I have these resources. I searched them out, because a lot of really smart people (like DrugMonkey) made it clear to me it was important.
Funding lines are low. And when they are low, you need a great score to get funded. And you have to get lucky to get a great score. The reviewers need to align more precisely than the stars. So you need to do everything you can to get the reviewers on your side. That, in my mind, is what "grantsmanship" is. It's sort of like learning a new language and the customs/rituals of a weird culture. But all you can do is try to make it as easy as possible for the reviewers to be on your side. And then you have to get lucky, too.