Laura Bornfreund
Senior Fellow, Early & Elementary Education
Of the winners to receive the most money in last week's Investing in Innovation (i3) awards, three promoted "early learning" as one of their priorities. But an analysis of their scores shows that their stated intentions may not line up with what the U.S. Department of Education was looking for. In fact, the scoring itself raises many questions about the reviewers' understanding of how to evaluate an early learning plan.
Our analysis started with two questions: What types of projects did U.S. Department of Education officials have in mind when they included early learning as a competitive priority for the i3 competition? And, more importantly, did i3 reviewers receive adequate instructions on how to do the scoring, and if so, did they follow them?
The first question is easy to answer, as the requirements were included in the application.
According to those requirements, reviewers were supposed to give one "competitive priority" point to applicants "that would implement innovative practices, strategies, or programs that are designed to improve educational outcomes for high-need students who are young children (birth through 3rd grade) by enhancing the quality of early learning programs."
The applications had to focus on all three of the following areas (from the Department of Education's PowerPoint for scale-up reviewers): school readiness; milestones and standards; and alignment, collaboration, and transitions.
The two key words in the instructions are "would" and "and." "Would" implies that officials are more interested in plans for the future: what would the applicant do with i3 funding to enhance its early learning initiatives? The word "and," of course, means all three areas must be addressed. It does not say "or."
As Early Ed Watch has reported, the department has so far made public the applications for those who won "scale-up" grants: grants of up to $50 million to scale up programs with strong evidence of effectiveness. The three with an early learning focus are the Knowledge Is Power Program (KIPP), the Success for All Foundation, and Teach for America. (All three have to come up with matching funds by September 8 to actually receive the federal award.)
Yet the highest-rated applicant, Success for All, discussed only what it is currently doing, not what it would plan to do with i3 funds in the area of early learning. KIPP discussed a somewhat early-learning-focused project, but did not address all three of the focus areas. In fact, the only one that talked about future efforts and addressed all three areas was Teach for America. Early Ed Watch reviewed the competitive priority section of each proposal, and here are the scores we would give.
| Required Focus | KIPP | Success for All | Teach for America |
| --- | --- | --- | --- |
| School Readiness | No credit | No credit | Yes |
| Milestones and Standards | No credit | No credit | Yes |
| Alignment, Collaboration and Transitions | Yes | No credit | Yes |
| Final Score | 0 | 0 | 1 |
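The all-or-nothing logic behind our scoring can be sketched in a few lines. This is our own illustration of the rule (the function name and the booleans are ours, not anything from the department's materials): because the instructions say "and," the single competitive point is earned only if every one of the three focus areas is addressed.

```python
def competitive_point(school_readiness: bool, milestones: bool, alignment: bool) -> int:
    # "And," not "or": the point is earned only if all three
    # focus areas are addressed in the application.
    return int(school_readiness and milestones and alignment)

# Our reading of the three scale-up applications, per the table above.
scores = {
    "KIPP": competitive_point(False, False, True),
    "Success for All": competitive_point(False, False, False),
    "Teach for America": competitive_point(True, True, True),
}
print(scores)  # {'KIPP': 0, 'Success for All': 0, 'Teach for America': 1}
```

Note that addressing one or even two areas, as KIPP did, still yields zero under this reading; only Teach for America clears the bar.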
Now let's see how the actual reviewers scored the inclusion of the early learning priority in these proposals.
| | KIPP | Success for All | Teach for America |
| --- | --- | --- | --- |
| Reader 1 | Did not score | Yes | Did not score |
| Reader 2 | No credit | Did not score | No credit |
| Reader 3 | No credit | Did not score | Yes |
| Reader 4 | Did not score | Yes | Did not score |
| Reader 5 | No credit | Yes | No credit |
Where are the final scores for the competitive point? We could not add them up, because we could not find information about the final breakdown on the Department of Education's website. There is no page, at least none that we could find, showing the synthesis of the readers' scores.
This omission raises the question: Did these applicants actually earn that competitive point or not? Until we know, we cannot say for sure whether these early learning projects were deemed acceptable in the final review.
Even the readers’ individual scores aren’t what Early Ed Watch expected to see.
Actually, the fact that KIPP received no credit from any reviewer does not surprise us. The KIPP application said it would use grant funds to support principal development for 35 to 50 new primary schools. So far, only one of the eight new schools opening in the coming year will include pre-k.
Teach for America, by contrast, provided a clear and detailed discussion of what it plans to do to improve and expand its early childhood education (ECE) initiative, which launched in 2006. The application spoke to each of the three requirements under the "early learning" priority. TFA also included its ECE initiative in its evaluation component, the part of the application that describes how the grant's impact will be measured. "The impact analysis will focus on grades pre-k through five for several reasons," the application says. "First, there is limited research on the effectiveness of Teach for America at the pre-k level."
Still, only one of the reviewers awarded TFA the competitive point. This didn't make sense to us, so we checked the reviewers' comments. Here's what reviewer #5 had to say: "The applicant did not adequately address this competitive preference as the focus of the TFA model is to develop K-12 teachers for high needs students." Did this reviewer even read the proposal? On the other hand, reviewer #3 (the one who awarded the point) said this: "Pages e79 to e81 articulate a clear picture of steps TFA has addressed to meet this preference." Reader #3 got it right. Reader #5 was out to lunch.
Then there is Success for All, which according to its application has documented evidence of effectiveness in elementary schools. And while Success for All does have a previously studied preschool program, Curiosity Corner, it is not part of SFA's proposed work. SFA plans to follow students from kindergarten through the early grades to study the literacy program's impact on student achievement in low-achieving schools. Early Ed Watch would like to have seen SFA include pre-k, both to determine whether Curiosity Corner has a positive effect on literacy as children transition from pre-k to kindergarten and to evaluate the impact of SFA's elementary program on children in the early grades. But since the proposal did not include pre-k, it should not have been awarded the competitive point.
All of this goes to the second question that propelled our analysis: Did the reviewers know how to score the early learning priorities? What instructions were they given?
Were the instructions so ambiguous as to lead to the inconsistencies? Did the reviewers suffer from a lack of knowledge about what makes a quality early learning proposal? We just don't know.
We also have no way of knowing whether this one competitive point, if granted, was a deciding factor in whether these proposals made the cut. The applications and scores for the losing applicants are not yet public.
Regardless, the lack of consistency in scoring leads to uncertainty about fairness in the review process. What else did judges overlook? Were the reviewers actually content experts with deep knowledge of the competitive priorities? What if proposals that better addressed the early learning or other competitive priorities were not selected? And, more generally, what about the sections of the proposals that weren't related to early learning: was anything essential missed in those? We'd like to hear your concerns too!
The Department of Education has not yet posted the narratives for other highly rated applicants, those who applied for "validation" or "development" grants. We will be sure to comment on those when they are up. Early Ed Watch is also interested in reading the narratives of the losing applicants and what reviewers had to say about them. That information may take a little longer to get, so keep checking back.