Bear in mind that my comments refer to the Education Systems and Broad Reform panel, which gets postsecondary and adult ed, effective teachers, improving education systems, and evaluation of state and local programs and policies applications. I’ve served on this panel since 2008, and reviewed a few hundred proposals in the process, so that’s one reason to listen to me. On the other hand, I’ve submitted six IES proposals so far (two to this panel, two to partnerships, and two to other panels), none of which was funded, so caveat emptor!
We had an interesting panel debrief this time around, discussing whether the panel is too “negative” in its reviews; word on the street is that we are one of the toughest panels at IES when it comes to rating proposals highly. This is due in part to some structural issues: the panel is a bit of a grab-bag in terms of topics, so we have scholars trained in traditional education programs as well as in the disciplines, such as economics, public policy, psychology, and sociology. This is one of several reasons why I love serving on this panel, but it also means that there is a wide audience for your proposal. My impression of some other panels is that they are more narrowly focused in terms of reviewer training and scholarship.
Part of the issue is that many of the members of the panel are strong methodologists; anecdotally, I have heard this is not the case for other panels (except for statistics). So proposals can come under withering review, whereas they might squeak through on another panel. Personally, I think this is overall a positive, but it can lead to some compelling ideas not receiving funding due to methods issues.
Partnership proposals go to their own panel, which means that the members are from all sorts of areas, from elementary ed to postsecondary. This means you have to pitch your significance in a way that a broader audience can understand; don’t just pitch it to people in your subfield.
Make sure your idea hits everything the Goal description is asking for (under Recommendations), as well as the specific content area (Postsecondary, etc.). The best way to do this is to structure your proposal with the same headings and subheadings as the RFA. This not only ensures that you don’t miss something vital, but it makes the proposal very easy to read. Most of us are following the RFA as we rate proposals, so we don’t like having to hunt through it to see if you’ve met all the criteria. At a bare minimum, there should be four main sections to proposals for the Education Research RFA: Significance, Methods, Personnel, and Resources. These are the four areas that we assign a rating and write a set of comments as to whether the applicant is strong or weak in each area. New this year is a fifth rating area, Partnership, for applications under the Partnerships RFA.
Be aware that unless your proposal is almost perfect, it typically won’t be funded. So if the RFA asks for something and you can’t do it, then it’s probably not worth applying. Talk to the program officer if you’re in doubt.
You should also be aware that program officers have zero power over what gets funded, unlike at NSF. Funding is determined by panel ratings (see below).
Some issues that I often see, in no particular order:
1. The Exploration Goal is looking for outstanding exploratory analyses, not a fishing expedition. This means something beyond a multiple regression model, but less than a randomized design. Proposing a kitchen sink regression won’t get you very far.
The folks at IES probably won’t like this, but if asked, I would advise a colleague against submitting an Exploration proposal. My second failed proposal was an Exploration Goal; we proposed using propensity scores not to estimate the causal effect, but to reduce bias. The reviewers seemed to dislike this intensely, and we didn’t even make it through triage. After this experience, and after listening to the discussion of Exploration Goal proposals during panel meetings, I must admit I am still baffled as to what makes a strong proposal under this goal (strong meaning that most reviewers would give it a good score). Part of this is the RFA; unlike the Efficacy guidelines, the Exploration section of the RFA doesn’t offer any specific methodological guidance. Part of this is due to many of the submissions, which read more like poor Efficacy proposals than anything else. So I personally would be leery about submitting a proposal when it’s not clear what you need to do to succeed.
All too often, proposals here seem to ignore the RFA. People seem to be viewing this goal as an “anything but Goal 2-5.”
If you submit an Exploration proposal anyway, what seems to get positive comments are projects that a) look at a relatively unexplored but important area, AND b) will produce findings that generate a lot of hypotheses for the field, such that it is obvious other scholars will be able to build a research program on the results, AND c) could clearly serve as the basis for designing a Development or Efficacy project.
A strong qualitative component can be very important, but it must fit with the quantitative component, and together, lead to compelling results. Many of the qualitative components of Exploration proposals read like hasty add-ons.
2. Power analyses are crucial, when required by the RFA. If you can’t provide a reasonable estimate of what your minimum sample size will be, and its power, then you are already dead in the water. I am astonished by the number of applications I see with either no power analysis, or an absurd power analysis (power < .80 or MDES > .20). It is very important that you provide some context for your effect sizes. If you power to MDES = .30, does this make sense for your project? For example, such a strong effect may make sense for a curriculum intervention on student test scores, but probably not for the effect of a principal intervention on student test scores. Use the literature to show what the expected effect size should be for your project. There are also meta-analyses published by Bloom and others that show what the typical effect sizes are in educational research with randomized designs. Reviewers will often try to replicate your power analyses on their own.
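To give a sense of the arithmetic reviewers do when they try to replicate your numbers, here is a rough sketch (my own illustration, not anything from the RFA) of the minimum detectable effect size for the simplest case: a two-arm, individually randomized trial with no covariates. Clustered or multilevel designs, which most IES studies use, need different formulas (tools like PowerUp! or Optimal Design handle those).

```python
from statistics import NormalDist

def mdes(n_total, p_treat=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect size (in standard-deviation units)
    for a two-arm, individually randomized trial with no covariates.
    This is the simplest textbook case; clustered designs need
    multilevel formulas instead."""
    z = NormalDist().inv_cdf
    # ~2.80 for a two-sided alpha of .05 and power of .80
    multiplier = z(1 - alpha / 2) + z(power)
    # Standard error of the standardized treatment effect
    se = (1.0 / (p_treat * (1 - p_treat) * n_total)) ** 0.5
    return multiplier * se

# With roughly 800 students split evenly between arms, the MDES
# comes out to about 0.20 SD, around the bar mentioned above.
print(round(mdes(800), 3))  # 0.198
```

Running the numbers this way also shows why context matters: detecting a 0.30 SD effect takes far fewer students than a 0.20 SD effect, so a reviewer will check whether your claimed sample, MDES, and the literature on typical effect sizes all line up.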
3. For Development proposals, your development plan needs to be more than a figure with a circle of arrows – you should think long and hard about potential issues that might occur during the development stage and how you will handle them and change the intervention. There should also be a strong qualitative component here; with the rushed development cycle and small sample sizes, doing a quantitative study can be problematic, except for the pilot study.
A good proposal here will have a detailed theory of action, showing how the components of the intervention work together to change the outcome. Each component should be tracked and measured during the development process, so that it can be tweaked if things are not working as intended. For the pilot study, it makes sense to look at both proximal and distal outcomes, if possible. Sometimes the time required to affect the distal outcomes is outside of the development cycle.
Importantly, you must have a finished intervention at the end of the project, such that you could hand it off to a teacher or other education personnel and have them implement it. Occasionally, we see Development proposals that appear compelling, but what ends up in the “intervention box” to be handed off to others at the end of the project is very unclear.
4. For Efficacy proposals and Evaluation of State and Local Programs and Policies proposals, to paraphrase Mike Myers’ Scotsman character from Saturday Night Live, “If it’s not causal, it’s crap!” From what I have seen, the only proposals that get high ratings in these areas are randomized trials and regression discontinuity designs. This is not to say that a panel design will not get funded, but it had better be wonderful. Propensity scores and instrumental variables tend to be met with skepticism, the former because the selection model is usually based on available data rather than theoretically relevant variables, the latter because of the strong assumptions that underlie IV. Either of these could get funded, but you should have a very compelling justification for their use.
5. A lot of people are confused about what a theory of action is. It is not a theoretical framework; it is a theoretical explanation of how the components of your intervention work together to change the targeted outcome. In other words, how does your intervention work?
As an aside, the theory of change graph that is now published in the RFA as an example is laughably underspecified. If your theory of change looks like theirs, you don’t have much of a theory of change.
6. Make sure you have letters of support showing that your people/schools/districts, etc., are willing to do exactly what you describe in your application. If you need 5 districts, don’t say you will recruit 5 districts as part of the project. Recruit them before submission and get letters from them.
Some minor stuff:
1. No more than two acronyms in a proposal. We can’t keep them all straight in our head, which necessitates a bunch of flipping back and forth to find the definition, which in turn leads to a cranky reviewer.
2. Same with bolding and italics. Some people believe that bolding, italicizing, underlining, and CAPITALIZING IMPORTANT CONCEPTS will help get their WONDERFUL IDEAS across to the reviewers. All it does is clutter the page and annoy us. (Or at least me. But then again, I am easily annoyed.)
3. Make full use of the Appendices. These give you lots of room for tables and figures. A strong proposal will, among other things, usually use the full 25 pages for the narrative and most of the 15 pages in Appendix B. If you are coming up short, you are probably not providing reviewers with enough detail about what you plan to do.
If I had to sum up my rules for a successful application, they would be:
1. Get the primary reviewers on your side – you want them walking into the hotel conference room ready to tell people what a great application you have and why it should be funded. This means hewing closely to the RFA in terms of structure and the details you provide, written in a professional manner. A disorganized application that does not address major points of the RFA annoys the heck out of us, because it means you didn’t devote enough time to your application, and in turn are wasting our time by forcing us to read it and give it a low rating.
The narrative is 25 pages single-spaced, not including references. Double-space this, include 15 pages of tables and figures and 5 pages of references, and you are in essence writing a 70 page paper. In other words, the amount of work that a successful application requires is equal to writing one or two good journal articles. Plan accordingly.
This amount of effort is also the reason why you should think long and hard before applying, given the rigor demanded by the RFA and the panel. This is not to dissuade you: I am currently 0-6, and may submit my seventh in September (I’m going for the “Biggest Loser” record). But this is not like submitting a journal article to a second-tier journal: you are competing against some of the top educational researchers, economists, and sociologists in the country.
2. Try to think of all the questions the panel might pose when they review your application. I’ve seen applications with a great idea torpedoed at this stage: the panel starts asking the primary reviewers questions about what you are doing, and the application doesn’t provide enough detail for them to answer.
3. Understand that the reviewers usually don’t care about particular problems with your idea that prevent you from complying with major aspects of the RFA. We are told to rate proposals against an ideal proposal, which by definition meets all the criteria.
An example may make this more clear. Suppose you have an idea to test an intervention at your college, but for some reason you can’t get a large enough sample size for decent power. The solution is not to say, “My power is less than .80, but it’s not my fault, it’s due to the location of this particular intervention/policy, so please fund me anyway.” The solution is either a) find another institution, or b) find another intervention.
Outline of the review process:
1. IES sends 8-10 proposals to each member of the panel. Each proposal is read and rated by 2-3 reviewers.
The 2-3 reviewers assign a rating to the four (or five) main areas, and then an overall rating for the proposal.
The panel members tend to be the top education researchers in the country. Most people don’t realize that many members of the panel hold appointments in departments of economics or sociology, not education. So the methodological hurdles are high – most members are strong methodologists.
2. Proposals that have a decent overall rating survive triage and are sent to the full panel. Usually the average score from the 2-3 reviewers has to be less than 2.75 to make it to the panel, although this can vary. We meet for a day and a half in DC and usually go through about two dozen proposals.
If your proposal doesn’t make it to the panel meeting for full review, this is an indication that it has serious problems.
3. At the panel meeting, the 2-3 main reviewers explain their reviews and scores to the full panel, and the panel discusses the proposal for about half an hour. You can see the importance of the main reviewers – they serve as the “experts” on your application, and people pay very close attention to their scores and what they have to say about your application. If they are not enthusiastic, then as a general rule the panel will not be either.
4. All panel members then rate the proposal in the four/five subareas and also provide an overall rating.
If your overall panel average score is 2.0 or less (on a scale of 1-5, where 1 is outstanding and 5 is poor), then you will generally be funded. This does depend on the level of funding: for the past two years, even some proposals scoring just under 2.0 were not funded because of the amount of available funds.
In my experience, only a handful of proposals get a rating below 2.0. (We never see the overall panel ratings, this is just my guess based on reviewer scores and the tenor of the panel discussions.)
5. I don’t have the actual numbers, but my gut feeling is that many successful proposals have to be submitted twice.
If your score is above 2.5, think long and hard about whether you should resubmit. If you can address all the criticisms that have been raised, you have a good chance of getting funded. But a surprising number of people think they can resubmit by saying, “Due to X, we can’t do what the panel recommends.” Again, the panel is looking for near-perfect proposals, not excuses.